Companies of all kinds are adopting artificial intelligence (AI) and machine-learning systems at an accelerated pace. International Data Corp. (IDC) projects that shipments of AI software will grow by 50% per year and will reach $57.6 billion in 2021 — up from $12 billion in 2017 and just $8 billion in 2016. AI is being applied to a range of tasks, including rating mortgage applications, spotting signs of trouble on power lines, and helping drivers navigate using location data from smartphones.
But companies are learning the hard way that developing and deploying AI and machine-learning systems is not like implementing a standard software program. What makes these programs so powerful — their ability to “learn” on their own — also makes them unpredictable and capable of errors that can harm the business.
AI’s Challenge: It’s Susceptible to Learned Bias
We frequently hear stories of AI gone awry. For instance, lenders are grappling with AI systems that unintentionally “learn” to deny credit to residents of certain zip codes — a violation of bank “redlining” regulations. In Florida, a program used by a county’s criminal justice system flagged arrested African Americans as more likely to commit another crime than whites, even though the two groups reoffend at the same rate. Or consider an online translation program that, asked to translate the phrase “She is a doctor, and he is a nanny” into Turkish and then back into English, spits out: “He is a doctor, and she is a nanny.”
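The lending example above can be sketched in a few lines of code. This is a hypothetical toy illustration (the data, zip codes, and the naive "model" are invented for this sketch, not drawn from any real lender): when historical approval decisions are skewed against one zip code, a system that simply learns from those decisions reproduces the skew, even though zip code says nothing about an applicant's actual creditworthiness.

```python
# Toy illustration of "learned" bias: a model trained on biased
# historical labels reproduces that bias. All data is invented.
from collections import defaultdict

# Hypothetical history: (zip_code, income_band, approved).
# Past decisions denied most applicants in zip "10451", regardless of income.
history = [
    ("10001", "high", True),  ("10001", "low", True),
    ("10001", "high", True),  ("10001", "low", False),
    ("10451", "high", False), ("10451", "low", False),
    ("10451", "high", False), ("10451", "low", True),
]

def train(rows):
    """Naive 'model': approve an applicant if the historical approval
    rate for their zip code exceeds 50%."""
    outcomes = defaultdict(list)
    for zip_code, _, approved in rows:
        outcomes[zip_code].append(approved)
    return {z: sum(v) / len(v) > 0.5 for z, v in outcomes.items()}

model = train(history)

# Two applicants with identical incomes get different outcomes purely
# because of zip code -- the historical bias has been "learned".
print(model["10001"])  # True: approved
print(model["10451"])  # False: denied
```

Nothing in the training step references income at all; the model simply encodes the pattern in its labels. This is why auditing training data, not just model code, matters.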
These bias-induced situations can have serious business consequences. When AI was being used in back-office applications, the chance of bias creeping in was limited, and so was the potential damage. Now AI is being used extensively both in management decision support and customer-facing applications. Companies risk damaging people’s reputations and lives, making strategic wrong turns, offending customers, and losing sales. And the cost of AI mistakes — whether they come from bias or flat-out error based on unreliable data or faulty algorithms — is rising.
The lesson here is that AI systems, for all their amazing powers, still need continuous human intervention to stay out of trouble and do their best work.