To Be a Responsible AI Leader, Focus on Being Responsible

Findings from the 2022 Responsible AI Global Executive Study and Research Project

by: Elizabeth M. Renieris, David Kiron, and Steven Mills


As AI’s adoption grows more widespread and companies see increasing returns on their AI investments, the technology’s risks also become more apparent.1 Our recent global survey of more than 1,000 managers suggests that AI systems across industries are susceptible to failures: nearly a quarter of respondents reported that their organization has experienced AI failures, ranging from mere lapses in technical performance to outcomes that put individuals and communities at risk. It is these latter harms that responsible AI (RAI) initiatives seek to address.

Meanwhile, lawmakers are developing the first generation of meaningful AI-specific legislation.2 For example, the European Union’s proposed AI Act would create a comprehensive scheme to govern the technology. And in the U.S., lawmakers in New York, California, and other states are working on AI-specific regulations to govern its use in employment and other high-risk contexts.3 In response to the heightened stakes around AI adoption and impending regulation, organizations worldwide are affirming the need for RAI, but many fall short when it comes to operationalizing it.


There are, however, exceptions. A number of organizations are bridging the gap between aspirations and reality by making a philosophical and material commitment to RAI, including investing the time and resources needed to create a comprehensive RAI program. We refer to them as RAI Leaders or Leaders. They appear to enjoy clear business benefits from RAI. Our research indicates that Leaders take a more strategic approach to RAI, led by corporate values and an expansive view of their responsibility toward a wide array of stakeholders, including society as a whole. For Leaders, prioritizing RAI is inherently aligned with their broader interest in leading responsible organizations.

This MIT Sloan Management Review and Boston Consulting Group report is based on our global survey, interviews with several C-level executives, and insights gathered from an international panel of more than 25 AI experts.

References

1. T.H. Davenport and R. Bean, “Companies Are Making Serious Money With AI,” MIT Sloan Management Review, Feb. 17, 2022, https://sloanreview.mit.edu.

2. For a summary of legislative action taken in the U.S., see C. Kraczon, “The State of State AI Policy (2021-22 Legislative Session),” Electronic Privacy Information Center, Aug. 8, 2022, https://epic.org.

3. See, for example, N.E. Price, “New York City’s New Law Regulating the Use of Artificial Intelligence in Employment Decisions,” JD Supra, April 11, 2022, http://www.jdsupra.com; and J.J. Lazzarotti and R. Yang, “Draft Regulations in California Would Curb Use of AI, Automated Decision Systems in Employment,” Jackson Lewis, April 11, 2022, http://www.californiaworkplacelawblog.com.

4. S. Ransbotham, S. Khodabandeh, R. Fehling, et al., “Winning With AI,” MIT Sloan Management Review and Boston Consulting Group, Oct. 15, 2019, https://sloanreview.mit.edu.

5. D. Kiron, E. Renieris, and S. Mills, “Why Top Management Should Focus on Responsible AI,” MIT Sloan Management Review, April 19, 2022, https://sloanreview.mit.edu.

6. Leaders are the most mature of the three maturity clusters identified by analyzing the survey results. An unsupervised machine learning algorithm (k-means clustering) was used to identify naturally occurring clusters based on the scale and scope of the organization’s RAI implementation. Scale is defined as the degree to which RAI efforts are deployed across the enterprise (e.g., ad hoc, partial, enterprisewide). Scope includes the elements that are part of the RAI program (e.g., principles, policies, governance) and the dimensions covered by the RAI program (e.g., fairness, safety, environmental impact). Leaders were the most mature in terms of both scale and scope.
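The clustering step described in this note can be sketched in miniature. Everything below is illustrative: the sample data and numeric encodings of scale and scope are invented for demonstration, and the study’s actual implementation is not public; this is a minimal hand-rolled k-means, not the researchers’ code.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal k-means: alternate between assigning points to their
    nearest centroid and recomputing centroids as cluster means."""
    rng = np.random.default_rng(seed)
    # Initialize centroids at k distinct data points (illustrative choice).
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Distance from every point to every centroid, shape (n, k).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid; keep the old one if its cluster is empty.
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# Hypothetical per-organization features: (scale score, scope score).
X = np.array([[0, 1], [0, 2], [1, 4], [1, 5], [2, 9], [2, 10]], dtype=float)
centroids, labels = kmeans(X, k=3)
```

Under this sketch, the cluster whose centroid has the highest scale and scope scores would correspond to the Leaders segment, with the other two clusters representing less mature groups.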

7. We offer a deeper analysis of the connection between RAI and CSR here: E.M. Renieris, D. Kiron, and S. Mills, “Should Organizations Link Responsible AI and Corporate Social Responsibility? It’s Complicated,” MIT Sloan Management Review, May 24, 2022, https://sloanreview.mit.edu.

8. To assess whether an organization’s AI use was mature or immature, we asked respondents, “What is the level of adoption of AI in your organization?” Those who selected “AI at scale with applications in most business and functional areas” or “Large number of applications in select business and functional areas” were classified as mature, and those who answered, “Only prototypes and/or pilots, without full-scale implementations” or “Some applications deployed and implemented” were classified as immature.
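The maturity coding in this note amounts to a simple lookup. The response strings and their mature/immature mapping come directly from the note; the function and variable names are ours.

```python
# Survey responses coded as "mature" in the study (quoted verbatim from note 8).
MATURE_RESPONSES = {
    "AI at scale with applications in most business and functional areas",
    "Large number of applications in select business and functional areas",
}

def classify_ai_maturity(response: str) -> str:
    """Map a respondent's AI-adoption answer to the report's maturity coding."""
    return "mature" if response in MATURE_RESPONSES else "immature"
```

The two remaining answer options ("Only prototypes and/or pilots, without full-scale implementations" and "Some applications deployed and implemented") fall through to "immature".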

9. E.M. Renieris, D. Kiron, and S.D. Mills, “RAI Enables the Kind of Innovation That Matters,” MIT Sloan Management Review, May 24, 2022, https://sloanreview.mit.edu.

Reprint #: 64270
