Responsible AI / Panelist

Richard Benjamins

Telefónica

Spain

Richard Benjamins is chief AI and data strategist at Telefónica and founder of its Big Data for Social Good department. He is also cofounder of the Spanish Observatory for Ethical and Social Impacts of AI, an external expert to the European Parliament's AI Observatory, deputy board member of the Spanish Industrial Association for AI, and nonexecutive director of CDP. He was previously group chief data officer at AXA. Benjamins is the author of A Data-Driven Company (LID Publishing, 2021) and coauthor of two other books, and he has published more than 100 scientific articles.

Voting History

Statement Response
Executives usually think of RAI as a technology issue. Neither agree nor disagree “While many executives are aware that responsible AI has an important technological component, they are also aware that RAI is related to running a responsible business, often operationalized through ESG (environmental, social, and governance) activities.

However, in most companies there is only a weak connection between technical AI teams and more socially oriented ESG teams. Executive leaders should ensure that those teams are connected and should orchestrate a close collaboration to accelerate the implementation of responsible AI. One solution would be to establish a new (temporary) role, the chief responsible AI officer, whose main mission would be to drive RAI as a cross-company activity.”
Mature RAI programs minimize AI system failures. Agree “Responsible AI programs consider in advance the potential negative side effects of the use of AI by “forcing” teams to think about (1) relevant general questions, such as the severity, scale, and likelihood of the consequences of the failure; and (2) the specific impacts on people related to ethical AI principles and human rights, such as nondiscrimination and equal treatment, transparency and explainability, redress, adequate human control, privacy, and security. This facilitates the consideration and detection of failures of the AI system that our societies want to avoid.

However, detecting such potential failures alone does not necessarily avoid them, as it requires proper action of the organization. Organizations that have mature RAI programs are likely to act properly, but it is not a guarantee, especially when the “failure” is beneficial for the business model. This is when the rubber hits the road.”
RAI constrains AI-related innovation. Disagree “Artificial intelligence, by itself, is neither responsible nor irresponsible. It is the application of AI to specific use cases that makes it responsible or not. Innovation means bringing new things to market. RAI implies that when developing or buying innovative systems that use AI, one considers the social and ethical impact of these systems during the full life cycle. If negative impacts are detected and cannot be mitigated, an explicit (risk-based) decision must be made on whether to continue. But this is (or should be) true for any innovation, regardless of the use of AI. By not wasting resources on innovations that we don’t want to happen, we can increase the resources dedicated to desired AI applications and thereby even boost AI-related innovation.

Responsible AI is a mindset and methodology that — by design — helps focus on innovations that maximize positive impacts and minimize negative impacts.”
Organizations should tie their responsible AI efforts to their corporate social responsibility efforts. Strongly agree “The likelihood of success of responsible AI efforts increases significantly if they are tied to the corporate environmental, social, and governance (ESG) strategy. The main reason for this is that ESG is an established area in most corporations, with a team, a budget, and objectives. Moreover, ESG is gaining importance every year, given the global challenges humanity and the planet are facing. In organizations that use AI at scale, there is a close connection to all ESG elements. First, large AI algorithms for natural language processing, such as GPT-3, consume huge amounts of energy and therefore have a large carbon footprint. Responsible AI works toward reducing this footprint using so-called green algorithms, or green AI. Second, using AI without thinking in advance about the potential negative social implications may lead to all kinds of undesirable (albeit unintended) consequences, such as discrimination, opacity, and loss of autonomy in decision-making. Responsible AI by design reduces the occurrence of those negative side effects. Third, the implementation of responsible AI requires a strong corporate governance model.”
Responsible AI should be a part of the top management agenda. Agree “Companies that make extensive use of artificial intelligence, either for internal use or for offering products to the market, should put responsible AI on their top management agendas. For such companies, it is important to monitor — on a continuous basis — the potential social and ethical impacts of the systems that use AI on people or societies.

For new use cases or products, such companies should use a methodology like Responsible AI by Design that considers the ethical and social impact of the application on people and societies throughout the application or product life cycle. Potential issues detected in this process should be mitigated or, if not possible, prevented from being put into production.”