AI-Related Risks Test the Limits of Organizational Risk Management

A panel of experts weighs in on whether organizations are effectively adjusting their risk management practices to govern artificial intelligence.


Topics

Responsible AI

The Responsible AI initiative looks at how organizations define and approach responsible AI practices, policies, and standards. Drawing on global executive surveys and smaller, curated expert panels, the program gathers perspectives from diverse sectors and geographies with the aim of delivering actionable insights on this nascent yet important focus area for leaders across industry.

In collaboration with

BCG

For the third year in a row, MIT Sloan Management Review and Boston Consulting Group (BCG) have assembled an international panel of AI experts, including academics and practitioners, to help us gain insights into how responsible artificial intelligence (RAI) is being implemented in organizations worldwide. Last year, we published a report titled “Building Robust RAI Programs as Third-Party AI Tools Proliferate.” This year, we continue to examine organizational capacity to address AI-related risks, but in a landscape that now includes the first comprehensive AI law on the books: the European Union’s AI Act.

To kick things off, we asked our experts and one large language model to react to the following provocation: “Organizations are sufficiently expanding risk management capabilities to address AI-related risks.” A clear majority (62%) of our panelists disagreed or strongly disagreed with the statement, citing the speed of technological development, the ambiguous nature of the risks, and the limits of regulation as obstacles to effective risk management.

Below, we share insights from our panelists and draw on our own observations and experience working on RAI initiatives to offer recommendations on how organizations can leverage their risk management capabilities to address AI-related risks.

Tech Advances Are Outpacing Risk Management Capabilities

Many of our experts cited the speed of AI-related technological developments as testing the limits of organizational risk management frameworks. As Stanford CodeX fellow Riyanka Roy Choudhury contends, “The rapid expansion of AI, particularly generative AI, has exceeded the operational capabilities of most organizations, [resulting in deficient] AI risk mitigation strategies.” Land O’Lakes CTO Teddy Bekele notes that “the overall preparedness to tackle AI-related risks is insufficient, mainly due to the fast-paced advancements in AI that outstrip the development and implementation of effective risk management practices.” In addition to AI’s fast-evolving capabilities, its pace of adoption is also challenging organizations. As AfroLeadership’s Belona Sonna explains, “AI systems are among the fastest-growing applications in the world. The associated risks are therefore equally dynamic.”

While Sameer Gupta, chief analytics officer at DBS Bank, and Linda Leopold, H&M Group’s head of AI strategy, argue that organizational risk management capabilities need to “keep pace” with the “rapid evolution of AI capabilities” and “the speed of tech development and the spread of usage,” they also acknowledge the challenge of doing so in practice. Leopold observes that “even for organizations that do have a solid responsible AI program in place, keeping up with the speed of development and continuously addressing new risks requires effort.”

We asked ChatGPT to respond to our provocation, and it also recognized that improving management approaches to AI-related risks is an ongoing process. ChatGPT’s response:

“Many organizations are bolstering risk management to tackle AI-related risks. This involves assessing potential biases, privacy breaches, and operational vulnerabilities inherent in AI systems. Enhanced strategies encompass proactive monitoring, robust data governance, and compliance frameworks. Despite progress, ongoing adaptation is vital to effectively mitigate emerging AI risks and ensure sustainable innovation.”

Disagree

“The technology is advancing so rapidly that there is no way to do anything ‘sufficiently,’ including expanding risk management capabilities to address AI-related risks. While we know many of the risks, I imagine a number of those to come are still unknown. The technology will keep us all on our toes for years to come because it’s developing in nanoseconds.”

Katia Walsh
Harvard Business School

The situation is even more challenging for smaller organizations, often due to their lack of expertise or resources to devote to this function. As Ya Xu, head of data and AI at LinkedIn, explains, “Establishing good risk management capabilities requires significant resources and expertise, which not all companies can afford or have available to them today.” Researcher Nanjira Sambuli similarly observes, “Micro, small, and medium enterprises that form the bulk of organizations in many economies may not yet have the capacity [for] dedicated risk management teams or the resources to use third-party risk management services.” Chevron’s chief data officer, Ellen Nielsen, agrees: “The demand for AI governance and risk experts is outpacing the supply.”

Ambiguity Is a Significant Challenge

Ambiguity about the nature of AI-related risks is testing the limits of existing risk management capabilities, especially in the absence of clear and established standards for identifying, understanding, and measuring these risks. While some organizations are adapting existing risk management capabilities (such as data governance, privacy, cybersecurity, ethics, and trust and safety), others are attempting to build new AI-specific capabilities.

Cold Chain Technologies CEO Ranjeet Banerjee observes, “I do not think there is a good understanding today of AI-related risks in most organizations.” MIT professor Sanjay Sarma offers a similar observation: The “massive range of risks seems to be leading to analysis paralysis [such that] companies have not successfully captured the risk landscape.” Beyond the wide range of known risks, Leopold notes that “new risks keep emerging as technology and its areas of application evolve.” Choudhury adds, “A significant obstacle lies in comprehending and quantifying the potential risks associated with AI, particularly within smaller organizations.” As a result, Yan Chow, global health care lead at Automation Anywhere, says that “it may take AI to understand its own risks.” (Some organizations are, in fact, heading in this direction.)

A key challenge, Sambuli explains, is determining “how AI-related risk is markedly different from other risks arising from the use and diffusion of digital and emerging tech.” Shilpa Prasad, head of commercialization at LG Nova, argues that “the risks posed by AI systems are in many ways unique,” while David Hardoon, CEO at Aboitiz Data Innovation, contends that “the majority of risks are not AI-specific, such as data governance.” He adds, “In order to expand risk capabilities, there is a need to expand, review, and understand specifically what are the AI risks that differ from non-AI risks.” David Polgar, founder of All Tech Is Human, explains why that is: “AI, in particular generative AI, can have a tendency to paralyze appropriate risk responses because it is often viewed through a mystical newness lens rather than as a new technology that poses classic dilemmas around copyright, data protection, and false advertising.”

Disagree

“The rapid expansion of this technology, along with its increasing integration into various business and social operations, often surpasses the current risk management capabilities of organizations. This is due to several factors, including the lack of widely accepted standards for AI risk assessment, a shortage of experts knowledgeable in both AI and risk management, and the underestimation of the complexity and potential impact of the risks associated with implementing AI systems.”

Idoia Salazar
OdiseIA

For some companies, another challenge is the lack of clear and consistent AI risk management frameworks. TÜV AI.Lab CEO Franziska Weindauer points to “missing frameworks and guidelines developed by knowledgeable actors in the field to help [organizations] implement a risk management system.” Similarly, for ForHumanity founder Ryan Carrier, the “failure to include diverse input and multistakeholder feedback in the risk management process [results in] limited perspectives on risk identification and a failure to disclose residual risk.” According to Andrew Strait, an associate director at the Ada Lovelace Institute, “We are still in an era of testing and trialing different methods — but they are not proven to be effective.” But things may be changing.

The Role of Regulation Remains to Be Seen

Our experts are divided on the role of AI regulations. EnBW chief data officer Rainer Hoffman observes, “With the introduction of the European AI Act, which mandates risk management for high-risk applications, organizations are beginning to acknowledge the importance of AI-related risk considerations.” University of Helsinki professor Teemu Roos adds, “Companies will need to invest in compliance [with the AI Act], not unlike the introduction of GDPR in 2018.” OdiseIA cofounder Richard Benjamins says that “with upcoming regulations, especially in the European Union with the AI Act, many organizations are expanding their risk management capabilities to address AI-related risks” but cautions that “the speed at which this is happening differs significantly by organization.” Unico IDtech researcher Yasodara Cordova reminds us that “it required nearly a decade of regulations for organizations to begin enhancing their risk management capabilities for privacy.”

Others are less optimistic about the efficacy of regulations. UN undersecretary general Tshilidzi Marwala contends that “the maximization of profit through AI is more incentivized than addressing AI-related risks.” Simon Chesterman, a professor at the National University of Singapore, says, “For companies, the fear of missing out often dominates.” And Carrier explains, “Individual players pay lip service to the idea of risk management [but] actively operate to subvert … policy and standards.” As a result of these concerns, Data Privacy Brasil founding director Bruno Bioni argues, “The most pressing issue is whether we, as a society, are democratically expanding our risk management capabilities.”

Recommendations

For organizations seeking to leverage their risk management capacity to address AI-related risks, we recommend the following:

1. Identify first principles first. Because AI risks are dynamic and rapidly evolving, organizations should adopt a nimble approach based on high-level guiding principles and guardrails that can be applied or adapted to specific applications and advances in AI technology, rather than addressing each risk on an ad hoc basis.

2. Stay agile and keep learning. Organizations should recognize that collective learning about AI risks and mitigation approaches is ongoing, and that their own approaches will need to evolve rapidly alongside that shared understanding.

3. Increase investments in risk mitigation tools. Organizations should identify where existing risk mitigation functions, including data governance and privacy, cybersecurity, ethics, and trust and safety, as well as other compliance functions, can address AI-related risks, and they should invest in expanded risk management capabilities where those functions fall short. Because AI risks can emerge both from within the organization and from outside it, risk mitigation approaches should be designed to address both sources.

4. Act now. While the EU’s AI Act may be the only comprehensive AI law at present, we can bet it won’t be the only one. Moreover, there is no AI exemption to the laws already on the books. Given that it can take several years to put a comprehensive AI risk management program in place, organizations cannot wait for regulations before developing a deliberate and flexible approach to AI risk management.

