Responsible AI / Panelist

Richard Benjamins

OdiseIA

Spain

Richard Benjamins is cofounder and vice president of the Spanish Observatory for Ethical and Social Impacts of AI (OdiseIA). Until February 2024, he was the chief responsible AI officer at Telefónica and founder of its AI for Society and Environment area. Before that, he was the company’s chief AI and data strategist. He is a trustee of the environmental nonprofit CDP Europe, and cofounder and former cochair of a UNESCO business council focused on AI ethics. He is also involved in the advisory board of the Centre for Digital Culture at the Vatican and in the World Economic Forum’s Resilient AI Governance and Regulation, and he is an external AI expert for the AI Observatory of the European Parliament. A founding editorial board member of AI and Ethics, Benjamins wrote The Myth of the Algorithm (Anaya Multimedia, 2020), A Data-Driven Company (LID Publishing, 2021), and The Algorithm and I (Anaya Multimedia, 2021).

Voting History

Statement: Companies should be required to make disclosures about the use of AI in their products and offerings to customers.
Response: Agree
“In the long run, it makes sense to require companies to disclose the use of AI in their products and services, just as companies today are required to disclose their CO2 emissions. For now, companies should be encouraged to disclose the use of AI if they foresee a potential negative impact on human rights. In the European Union, under the AI Act, any company will be required (by 2026) to disclose the use of AI in high-risk applications. Apart from this regulatory requirement, there are business reasons for voluntary disclosure. Investors are increasingly looking for responsible AI governance practices in the companies they want to invest in, just as they look at CO2 emission reduction plans. Moreover, customers increasingly consider responsible company behavior in their buying decisions, and employees increasingly consider responsibility a key factor in choosing and staying with an employer.

This is a manifestation of a wider trend where ESG (environmental, social, and governance) is gaining traction in our societies and economies. The journey that the “E” (reporting of emissions) has undergone in the past 20 years will be repeated for the “S” (impact on human rights), but probably within five years. That is the challenge.”
Statement: Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months.
Response: Disagree
“The final text of the AI Act was approved in March 2024. There is still a gap between what the requirements mean and how they can be practically implemented. Moreover, implementing all the requirements of the AI Act is not simple: registering all AI systems, carrying out risk impact assessments, and applying the AI Act requirements that correspond to the identified risk. While it is true that, in terms of process, much can be learned from the GDPR implementation, in terms of content there are significant differences.

Most of the provisions of the AI Act will become applicable toward the end of the first half of 2026 (except for prohibited AI systems, which apply six months after publication in the Official Journal of the European Union, and generative AI requirements, which apply 12 months after publication). Two years is just about the minimum an organization needs to prepare for the AI Act, and many companies will struggle to achieve this, except those that are part of UNESCO’s Business Council for the Ethics of AI.”
Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.
Response: Agree
“With the upcoming regulations, especially the AI Act in the European Union, many organizations are expanding their risk management capabilities to address AI-related risks. However, the speed at which this is happening differs significantly by organization. Most have only just heard about the new regulations and, from a compliance perspective, are catching up on what they imply for their organization. More advanced AI organizations that started their journey toward the responsible and ethical use of artificial intelligence some years ago must adapt their approach to also cover the requirements of the new regulations. Given the changing cultural mindset, the latter are in a better place than the former.

To be a bit controversial, I believe that in organizations that want to strengthen their risk management capabilities for regulatory reasons, this effort is usually driven by compliance departments seeking to keep complying with the law and avoid fines. In organizations that started their RAI journey several years ago, the effort is usually technology-driven, led by AI advocates, and based on thought leadership.”
Statement: As the business community becomes more aware of AI’s risks, companies are making adequate investments in RAI.
Response: Disagree
“Business communities are becoming more aware of the importance of responsible AI programs through publications and other media, but also through increasing calls for regulation and awareness of international recommendations.

Currently, however, this increased awareness is not yet translating into action supported by adequate investments in RAI programs. For one thing, it is not yet clear to many organizations what it means to implement an RAI program, let alone what investments it requires. Some early adopters have set up RAI programs, such as those that are part of UNESCO’s Ibero-American Business Council for Ethics of AI. The majority of companies are still in the initial phase, trying to figure out what responsible AI means for them from an organizational and business perspective.

Startups are even less aware of the importance of RAI, since they are mostly focused on bringing their products to market. However, venture capitalists are gaining interest in the ethical and social impact of the startups they may invest in.

Overall, little has been published about organizations’ experiences, and therefore initiatives such as Spain’s AI regulatory sandbox and UNESCO’s AI business council are important for better understanding what RAI really implies.”
Statement: The management of RAI should be centralized in a specific function (versus decentralized across multiple functions and business units).
Response: Strongly agree
“Whereas ensuring the responsible use of AI requires the concerted involvement of many different functions and business units, the management of RAI should be centralized in a specific function, such as an AI office. The responsibility of the central function is to set the change (RAI) in motion; make the organization aware; support business units in appointing RAI champions and ensuring that they are properly trained; make sure that the organizational AI governance model is understood and followed by the business units; liaise with other relevant areas, such as ESG, privacy, security, IT, legal, and AI; set up communities of practice; support the AI ethics committee; and so on. The AI office should be a small, committed, and multidisciplinary team. At the beginning, it doesn’t matter too much where it sits in the organization, as long as it is well recognized. The centralized function is especially important when starting with RAI. Once the process is internalized by the organization (as data protection is now in many organizations), the role becomes less critical for operations and can focus more on new trends and developments to continuously improve RAI processes and make them more efficient.”
Statement: Most RAI programs are unprepared to address the risks of new generative AI tools.
Response: Disagree
“An adequate RAI program evaluates the ethical and social impact of an artificial intelligence system in a particular context, for a specific use case. In this sense, RAI programs are well prepared to address the risks of generative AI, treating it as any other AI system used for a specific purpose. This implies that the same generative AI program used in different applications will be evaluated several times: for instance, when it powers a medical chatbot, answers frequently asked questions about how to access public services, or automatically fills out forms.

However, an RAI program that evaluates the AI technology or algorithm without a specific use case in mind (that is, in isolation, independent of the specific use) is not appropriate for generative AI, since the ethical and social impact will depend on the specific use case. It is therefore impossible to adequately assess the responsible use of the AI system, as the assessment will be either too restrictive or too permissive.”
Statement: RAI programs effectively address the risks of third-party AI tools.
Response: Strongly agree
“Responsible AI programs should address the risk of using or integrating third-party AI tools. Many organizations already purchase AI tools from the market rather than developing them themselves, and even those that develop their own often integrate open-source AI software. Additionally, AI as a service is a trend, so increasingly more organizations will use AI tools in the cloud that were built by others. The European AI Act also considers the separate responsibilities of providers (those who put AI systems on the market) and users of artificial intelligence. It is therefore of utmost importance for responsible AI programs to consider the full AI value chain, in addition to in-house AI developments.

To some extent, this is like emissions reporting related to scope 1 (your own emissions) and scope 3 (emissions generated by the value chain), where scope 3 emissions are usually more impactful than those of scope 1. Likewise, in responsible AI programs, third-party AI tools, products, and software are expected to have a significant effect on the ethical and social impact of AI.”
Statement: Executives usually think of RAI as a technology issue.
Response: Neither agree nor disagree
“While many executives are aware that responsible AI has an important technological component, they are also aware that RAI is related to running a responsible business, often operationalized through ESG (environmental, social, and governance) activities.

However, in most companies there is only a weak connection between technical AI teams and more socially oriented ESG teams. Executive leaders should ensure that those teams are connected and orchestrate a close collaboration to accelerate the implementation of responsible AI. One solution would be to install a new (temporary) role called the chief responsible AI officer, whose main mission would be to drive RAI as a cross-company activity.”
Statement: Mature RAI programs minimize AI system failures.
Response: Agree
“Responsible AI programs consider in advance the potential negative side effects of the use of AI by “forcing” teams to think about (1) relevant general questions, such as the severity, scale, and likelihood of the consequences of the failure; and (2) the specific impacts on people related to ethical AI principles and human rights, such as nondiscrimination and equal treatment, transparency and explainability, redress, adequate human control, privacy, and security. This facilitates the consideration and detection of failures of the AI system that our societies want to avoid.

However, detecting such potential failures does not by itself avoid them; avoiding them requires proper action by the organization. Organizations with mature RAI programs are likely to act properly, but it is not a guarantee, especially when the “failure” is beneficial to the business model. This is where the rubber hits the road.”
Statement: RAI constrains AI-related innovation.
Response: Disagree
“Artificial intelligence, by itself, is neither responsible nor irresponsible. It is the application of AI to specific use cases that makes it responsible or not. Innovation means bringing new things to market. RAI implies that when developing or buying innovative systems that use AI, one considers the social and ethical impact of these systems during the full life cycle. If negative impacts are detected and cannot be mitigated, an explicit (risk-based) decision must be made about whether to continue. But this is (or should be) true for any innovation, regardless of the use of AI. By not wasting resources on innovations that we don’t want to happen, we can increase the resources dedicated to desired AI applications and thereby even boost AI-related innovation.

Responsible AI is a mindset and methodology that — by design — helps focus on innovations that maximize positive impacts and minimize negative impacts.”
Statement: Organizations should tie their responsible AI efforts to their corporate social responsibility efforts.
Response: Strongly agree
“The likelihood of success of responsible AI efforts increases significantly if they are tied to the corporate environmental, social, and governance (ESG) strategy. The main reason for this is that ESG is an established area in most corporations, with a team, a budget, and objectives. Moreover, ESG is gaining importance every year, given the global challenges humanity and the planet are facing. In organizations that use AI at scale, there is a close connection to all ESG elements. First, large AI algorithms for natural language processing, such as GPT-3, consume huge amounts of energy and therefore have a large carbon footprint. Responsible AI works toward reducing this footprint using so-called green algorithms, or green AI. Second, using AI without thinking in advance about the potential negative social implications may lead to all kinds of undesirable, albeit unintended, consequences, such as discrimination, opacity, and loss of autonomy in decision-making. Responsible AI by design reduces the occurrence of those negative side effects. Third, the implementation of responsible AI requires a strong corporate governance model.”
Statement: Responsible AI should be a part of the top management agenda.
Response: Agree
“Companies that make extensive use of artificial intelligence, either internally or in products they offer to the market, should put responsible AI on their top management agendas. For such companies, it is important to monitor, on a continuous basis, the potential social and ethical impacts on people and societies of the systems that use AI.

For new use cases or products, such companies should use a methodology like Responsible AI by Design, which considers the ethical and social impact of the application on people and societies throughout the application or product life cycle. Potential issues detected in this process should be mitigated or, if mitigation is not possible, the application should not be put into production.”