Responsible AI / Panelist

Richard Benjamins

Telefónica

Spain

Richard Benjamins is chief AI and data strategist at Telefónica and founder of its Big Data for Social Good department. He is also cofounder of the Spanish Observatory for Ethical and Social Impacts of AI, an external expert to the European Parliament’s AI Observatory, deputy board member of the Spanish Industrial Association for AI, and nonexecutive director of CDP. He was previously group chief data officer at AXA. Benjamins is the author of A Data-Driven Company (LID Publishing, 2021) and coauthor of two other books, and he has published over 100 scientific articles.

Voting History

Statement Response
As the business community becomes more aware of AI’s risks, companies are making adequate investments in RAI. Disagree “The business community is becoming more aware of the importance of responsible AI programs through publications and other media, but also through increasing calls for regulation and growing awareness of international recommendations.

But currently, this increased awareness is not yet converting into actions supported by adequate investments in RAI programs. For one thing, it is not yet clear to many organizations what it means to implement an RAI program, let alone what investments it requires. Some early adopters have set up RAI programs, such as those that are part of UNESCO’s Ibero-American Business Council for Ethics of AI. The majority of companies are still in the initial phase, trying to figure out what responsible AI means for them from an organizational and business perspective.

Startups are even less aware of the importance of RAI, since they are mostly focused on bringing their products to market. However, venture capitalists are gaining interest in the ethical and social impact of the startups they may invest in.

Overall, little has been published about organizations’ experiences, and therefore initiatives such as Spain’s AI regulatory sandbox and UNESCO’s AI business council are important for better understanding what RAI really implies.”
The management of RAI should be centralized in a specific function (versus decentralized across multiple functions and business units). Strongly agree “While ensuring the responsible use of AI requires the concerted involvement of many different functions and business units, the management of RAI should be centralized in a specific function, such as an AI office. The responsibility of the central function is to set the change (RAI) in motion; raise awareness across the organization; support business units in appointing RAI champions and ensuring that they are properly trained; make sure that the organizational AI governance model is understood and followed by the business units; liaise with other relevant areas, such as ESG, privacy, security, IT, legal, and AI; set up communities of practice; support the AI ethics committee; etc. The AI office should be a small, committed, and multidisciplinary team. At the beginning, it doesn’t matter too much where it sits in the organization, as long as it is well recognized. The centralized function is especially important when starting with RAI. Once the process is internalized by the organization (as data protection is now in many organizations), the role becomes less critical for operations and can focus more on new trends and developments to continuously improve RAI processes and make them more efficient.”
Most RAI programs are unprepared to address the risks of new generative AI tools. Disagree “An adequate RAI program evaluates the ethical and social impact of an artificial intelligence system in a particular context, for a specific use case. In this sense, RAI programs are well prepared to address the risks of generative AI, treating it as any other AI system used for a specific purpose. This implies that the same generative AI model used in different applications will be evaluated several times — for instance, when powering a medical chatbot, answering questions (FAQs) about how to access public services, or automatically filling out forms.

However, an RAI program that evaluates the AI technology or algorithm without a specific use case in mind — that is, in isolation, independent of the specific use — is not appropriate for generative AI, since the ethical and social impact will depend on the specific use case. It is therefore impossible to adequately assess the responsible use of the AI system in isolation, as any such assessment will be either too restrictive or too permissive.”
RAI programs effectively address the risks of third-party AI tools. Strongly agree “Responsible AI programs should address the risks of using or integrating third-party AI tools. Many organizations already purchase AI tools from the market rather than developing them in-house. And even when they do develop AI themselves, they often integrate open-source AI software. Additionally, AI as a service is a trend, and thus more and more organizations will use AI tools in the cloud that were built by others. The European AI Act also considers the separate responsibilities of providers (those who put AI systems on the market) and users of artificial intelligence. It is therefore of utmost importance for responsible AI programs to consider the full AI value chain, in addition to in-house AI developments.

To some extent, this is like emissions reporting related to scope 1 (your own emissions) and scope 3 (emissions generated by the value chain), where scope 3 emissions are usually more impactful than those of scope 1. Likewise, in responsible AI programs, third-party AI tools, products, and software are expected to have a significant effect on the ethical and social impact of AI.”
Executives usually think of RAI as a technology issue. Neither agree nor disagree “While many executives are aware that responsible AI has an important technological component, they are also aware that RAI is related to running a responsible business, often operationalized through ESG (environmental, social, and governance) activities.

However, in most companies there is only a weak connection between technical AI teams and more socially oriented ESG teams. Executive leaders should ensure that those teams are connected and orchestrate close collaboration between them to accelerate the implementation of responsible AI. One solution would be to create a new (temporary) role, the chief responsible AI officer, whose main mission would be to drive RAI as a cross-company activity.”
Mature RAI programs minimize AI system failures. Agree “Responsible AI programs consider in advance the potential negative side effects of the use of AI by “forcing” teams to think about (1) relevant general questions, such as the severity, scale, and likelihood of the consequences of a failure; and (2) the specific impacts on people related to ethical AI principles and human rights, such as nondiscrimination and equal treatment, transparency and explainability, redress, adequate human control, privacy, and security. This makes it easier to anticipate and detect the kinds of AI system failures that our societies want to avoid.

However, detecting such potential failures alone does not necessarily avoid them, as avoiding them requires proper action by the organization. Organizations that have mature RAI programs are likely to act properly, but it is not a guarantee, especially when the “failure” is beneficial for the business model. This is where the rubber meets the road.”
RAI constrains AI-related innovation. Disagree “Artificial intelligence, by itself, is neither responsible nor irresponsible. It is the application of AI to specific use cases that makes it responsible or not. Innovation means bringing new things to market. RAI implies that when developing or buying innovative systems that use AI, one considers the social and ethical impact of these systems during the full life cycle. If negative impacts are detected and cannot be mitigated, an explicit (risk-based) decision must be made about whether to continue. But this is (or should be) true for any innovation, regardless of the use of AI. By not wasting resources on innovations that we don’t want to happen, we free up resources for desired AI applications and can therefore even boost AI-related innovation.

Responsible AI is a mindset and methodology that — by design — helps focus on innovations that maximize positive impacts and minimize negative impacts.”
Organizations should tie their responsible AI efforts to their corporate social responsibility efforts. Strongly agree “The likelihood of success of responsible AI efforts increases significantly if they are tied to the corporate environmental, social, and governance (ESG) strategy. The main reason for this is that ESG is an established area in most corporations, with a team, a budget, and objectives. Moreover, ESG is gaining importance every year, given the global challenges humanity and the planet are facing. In organizations that use AI at scale, there is a close connection to all ESG elements. Firstly, large AI algorithms for natural language processing, such as GPT-3, consume huge amounts of energy and therefore have a large carbon footprint. Responsible AI works toward reducing this footprint using so-called green algorithms or green AI. Secondly, using AI without thinking in advance about the potential negative social implications may lead to all kinds of undesirable, albeit unintended, consequences, such as discrimination, opacity, and loss of autonomy in decision-making. Responsible AI by design reduces the occurrence of those negative side effects. Thirdly, the implementation of responsible AI requires a strong corporate governance model.”
Responsible AI should be a part of the top management agenda. Agree “Companies that make extensive use of artificial intelligence, either for internal use or for offering products to the market, should put responsible AI on their top management agendas. For such companies, it is important to monitor — on a continuous basis — the potential social and ethical impacts of their AI-enabled systems on people and societies.

For new use cases or products, such companies should use a methodology like Responsible AI by Design that considers the ethical and social impact of the application on people and societies throughout the application or product life cycle. Potential issues detected in this process should be mitigated or, if mitigation is not possible, the application should be prevented from going into production.”