Responsible AI / Panelist

Linda Leopold

H&M Group

Sweden

Linda Leopold is head of responsible AI and data at global fashion retailer H&M Group, where she leads the company’s work on sustainable and ethical artificial intelligence and data. Before joining H&M Group, she spent many years working in the media industry. Leopold was previously editor in chief at the critically acclaimed fashion and culture magazine Bon and is the author of two nonfiction books. She has been a columnist for Scandinavia’s biggest financial newspaper and has worked as an innovation strategist at the intersection of fashion and tech.

Voting History

Statement: Most RAI programs are unprepared to address the risks of new generative AI tools.
Response: Neither agree nor disagree

“What we have seen lately is rapid technological development and new, powerful tools released with hardly any prior public debate about the risks, societal implications, and new ethical challenges that arise. We all need to figure this out as we go. In that sense, I believe most responsible AI programs are unprepared. Many new generative AI tools are also publicly available and have a large range of possible applications. This means RAI programs might need to reach a new and much broader audience.

With that said, if you have a strong foundation in your responsible AI program, you should be somewhat prepared. The same ethical principles would still be applicable, even if they have to be complemented by more detailed guidance. Also, if the responsible AI program already has a strong focus on culture and communication, it will be easier to reach these new groups of people.”
Statement: RAI programs effectively address the risks of third-party AI tools.
Response: Disagree

“Responsible AI programs should cover both internally built and third-party AI tools. The same ethical principles must apply no matter where the AI system comes from. Ultimately, if something were to go wrong, it wouldn’t matter to the person negatively affected whether the tool was built or bought. However, in my experience, responsible AI programs tend to focus primarily on AI tools developed by the organization itself.

Depending on what industry you are in, it could be even more important to address the risks from third-party tools, as they might be used in high-risk contexts, such as HR. Doing this requires a different set of methods than those used for internally built AI systems. It also means interacting with stakeholders in parts of the organization where the level of knowledge about AI and the associated risks might be lower.

How do I define third-party AI tools? As AI systems developed by an external vendor, including AI components that are part of a product bought from an external vendor.”
Statement: Executives usually think of RAI as a technology issue.
Response: Neither agree nor disagree

“My experience is that executives, as well as subject matter experts, often look at responsible AI through the lens of their own area of expertise (whether it is data science, human rights, sustainability, or something else), perhaps not seeing the full spectrum of it. The multidisciplinary nature of responsible AI is both the beauty and the complexity of the area. The wide range of topics it covers can be hard to grasp. But to fully embrace responsible AI, a multitude of perspectives is needed. Thinking of it as a technology issue that can be “fixed” only with technical tools is not sufficient.”
Statement: Mature RAI programs minimize AI system failures.
Response: Agree

“For a responsible AI program to be considered mature, it should, in my opinion, be both comprehensive and widely adopted across an organization. It has to cover several dimensions of responsibility, including fairness, transparency, accountability, security, privacy, robustness, and human agency. And it has to be implemented on both a strategic level (policy) and an operational level (processes and tools should be fully deployed). If it ticks all these boxes, it should prevent a wide range of potential AI system failures, from security vulnerabilities to inaccurate predictions and amplification of biases.”
Statement: RAI constrains AI-related innovation.
Response: Strongly disagree

“Quite the opposite. Firstly, as we define responsible AI at H&M Group, it encompasses using AI both as a tool for good and for preventing harm. This means that, as we see it, innovation is an equally important part of responsible AI practices, alongside risk mitigation. In our context, “doing good” mainly means exploring innovative AI solutions to tackle sustainability challenges, such as decreasing CO2 emissions and contributing to circular business models. Secondly, risk mitigation (“doing it right”) shouldn’t constrain innovation either. Ethically and responsibly designed AI solutions are also better AI solutions, in the sense that they are more reliable, transparent, and created with the end user’s best interests in mind. And thirdly, having responsible AI policies and practices in place creates a competitive advantage, as it reduces risk, increases trust, and builds stronger relationships with customers.”
Statement: Organizations should tie their responsible AI efforts to their corporate social responsibility efforts.
Response: Neither agree nor disagree

“It depends on the industry and on how you define and work with CSR in your organization. There is a close connection between responsible AI and efforts to promote social and environmental sustainability. The overall vision should be the same, but responsible AI also needs to be treated as a separate topic with its own specific challenges and goals.”
Statement: Responsible AI should be a part of the top management agenda.
Response: Strongly agree

“How to embrace digitalization and new technology in line with company values has to be a priority at the top management and board levels. And commitment to responsible AI should be clearly expressed by management, sending a strong message to the organization. Importantly, responsible AI must be seen as an integrated part of the AI strategy, not as an add-on or an afterthought.

With that said, being part of the top management agenda is not sufficient. Responsible AI practices and engagement also have to be built bottom-up, throughout the organization. In my opinion, the combination of these two approaches is key to succeeding in creating a culture of responsible AI, making it a priority and keeping it top of mind across the company.”