We can safely defer the discussion about whether artificial intelligence will eventually take over board functions. We cannot, however, defer the discussion about how boards will oversee AI — a discussion that’s relevant whether organizations are developing AI systems or buying AI-powered software. With the technology in increasingly widespread use, it’s time for every board to develop a proactive approach for overseeing how AI operates within the context of an organization’s overall mission and risk management.
According to McKinsey’s 2019 global AI survey, although AI adoption is increasing rapidly, overseeing and mitigating its risks remain unresolved and urgent tasks: Just 41% of respondents said that their organizations “comprehensively identify and prioritize” the risks associated with AI deployment.
Board members recognize that this task is on their agendas: According to the 2019 National Association of Corporate Directors (NACD) Blue Ribbon Commission report, “Fit for the Future: An Urgent Imperative for Board Leadership,” 86% of board members “fully expect to deepen their engagement with management on new drivers of growth and risk in the next five years.”1
Why is this so urgent? Because AI’s potential to deliver significant benefits comes with new and complex risks. For example, the frequency with which AI-driven facial recognition technologies misidentify nonwhite or female faces is among the issues that have driven a pullback by major vendors — which are also concerned about the use of the technology for mass surveillance and consequent civil rights violations. In June 2020, IBM stopped selling the technology altogether. That same month, Microsoft said it would not sell its facial recognition technology to police departments until Congress passes a federal law regulating its use by law enforcement. Similarly, Amazon said it would not allow police use of its technology for a year, to allow time for legislators to act.
The use of AI-driven facial recognition technology in policing is just one notorious example, however. Virtually all AI systems in use today may be vulnerable to problems that result from the nature of the data used to train and operate them, the assumptions made in the algorithms themselves, the lack of system controls, and the lack of diversity in the human teams that build, instruct, and deploy them.
1. NACD Blue Ribbon Commission, “Fit for the Future: An Urgent Imperative for Board Leadership” (Washington, D.C.: NACD, 2019).
2. A. Bonime-Blanc, “Gloom to Boom: How Leaders Transform Risk Into Resilience and Value,” 1st ed. (New York: Routledge, 2019).
3. Whereas the Caremark line of cases has long held that “only a sustained or systemic failure of the board to exercise oversight — such as an utter failure to attempt to assure a reasonable information and reporting system exists — will establish the lack of good faith that is a necessary condition of liability,” recent case law has recognized a broader set of conditions under which directors might face liability. For example, see Wells Fargo & Co. Shareholder Derivative Litig., C.A. No. 16-cv-05541-JST (N.D. Cal. Oct. 4, 2017); Marchand v. Barnhill, 212 A.3d 805 (Del. June 18, 2019); Clovis Oncology Inc. Derivative Litig., C.A. No. 2017-0222-JRS (Del. Ch. Oct. 1, 2019); and Hughes v. Hu, C.A. No. 2019-0112-JTL (Del. Ch. April 27, 2020).