Building Robust RAI Programs as Third-Party AI Tools Proliferate

Findings from the 2023 Responsible AI Global Executive Study and Research Project

By Elizabeth M. Renieris, David Kiron, and Steven Mills

In just a few short months since its release, OpenAI’s ChatGPT tool has catapulted the capabilities, as well as the ethical challenges and failures, of artificial intelligence into the spotlight. Numerous examples have emerged of the chatbot fabricating stories, including falsely accusing a law professor of sexual harassment and implicating an Australian mayor in a fake bribery scandal, which led to the first defamation lawsuit against an AI chatbot.1 In April, Samsung made headlines when three of its employees accidentally leaked confidential company information, including internal meeting notes and source code, by entering it into ChatGPT.2 That news prompted many companies, such as JPMorgan and Verizon, to block access to AI chatbots from corporate systems.3 In fact, nearly half of the companies polled in a recent Bloomberg survey reported that they are drafting policies to govern employees’ chatbot use, suggesting that many businesses were caught off guard by these developments.4

Indeed, the fast pace of AI advancement is making it harder to use AI responsibly and is putting pressure on responsible AI (RAI) programs to keep up. For example, companies’ growing dependence on a burgeoning supply of third-party AI tools, along with the rapid adoption of generative AI (algorithms such as ChatGPT, DALL-E 2, and Midjourney that use training data to generate realistic or seemingly factual text, images, or audio), is exposing them to new commercial, legal, and reputational risks that are difficult to track.5 In some cases, managers may be entirely unaware that employees or others in the organization are using such tools, a phenomenon known as shadow AI.6 As Stanford Law CodeX fellow Riyanka Roy Choudhury puts it, “RAI frameworks were not written to deal with the sudden, unimaginable number of risks that generative AI tools are introducing.”

This trend is especially problematic for organizations with RAI programs that are primarily focused on AI tools and systems that they design and develop internally.

References

1. P. Dixit, “U.S. Law Professor Claims ChatGPT Falsely Accused Him of Sexual Assault, Says ‘Cited Article Was Never Written,’” Business Today, April 8, 2023, www.businesstoday.in; and T. Gerken, “ChatGPT: Mayor Starts Legal Bid Over False Bribery Claim,” BBC, April 6, 2023, www.bbc.com.

2. M. DeGeurin, “Oops: Samsung Employees Leaked Confidential Data to ChatGPT,” Gizmodo, April 6, 2023, https://gizmodo.com.

3. A. Lukpat, “JPMorgan Restricts Employees From Using ChatGPT,” The Wall Street Journal, Feb. 22, 2023, www.wsj.com.

4. J. Constantz, “Nearly Half of Firms Are Drafting Policies on ChatGPT Use,” Bloomberg, March 20, 2023, www.bloomberg.com.

5. “Generative AI,” BCG, accessed May 24, 2023, www.bcg.com.

6. J.K. Bickford and T. Roselund, “How to Put Generative AI to Work — Responsibly,” BCG, Feb. 28, 2023, www.bcg.com.

7. “Fact Sheet: Biden-Harris Administration Announces New Actions to Promote Responsible AI Innovation That Protects Americans’ Rights and Safety,” The White House, May 4, 2023, www.whitehouse.gov.

8. E.M. Renieris, D. Kiron, and S. Mills, “To Be a Responsible AI Leader, Focus on Being Responsible,” MIT Sloan Management Review and BCG, Sept. 19, 2022, https://sloanreview.mit.edu.

9. L. Mearian, “Legislation to Rein In AI’s Use in Hiring Grows,” Computerworld, April 1, 2023, www.computerworld.com.

10. “Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts,” European Commission, April 21, 2021, https://eur-lex.europa.eu; and “Proposal for a Directive of the European Parliament and of the Council on Adapting Non-Contractual Civil Liability Rules to Artificial Intelligence (AI Liability Directive),” PDF file (Brussels: European Commission, Sept. 28, 2022), https://commission.europa.eu.

11. “DPAs, Global Stakeholders Mull AI Regulation,” IAPP, April 21, 2023, https://iapp.org.

12. M. Atleson, “The Luring Test: AI and the Engineering of Consumer Trust,” Federal Trade Commission, May 1, 2023, www.ftc.gov.

13. “Statement From Vice President Harris After Meeting With CEOs on Advancing Responsible Artificial Intelligence Innovation,” The White House, May 4, 2023, www.whitehouse.gov.

14. S. Mills, S. Singer, A. Gupta, et al., “Responsible AI Is About More Than Avoiding Risk,” BCG, Sept. 20, 2022, www.bcg.com.

i. The samples used to construct the maturity index differed between 2022 and 2023 because different individuals were surveyed each year.

Reprint #: 65103
