The Fatal Flaw of AI Implementation
Jeanne Ross is principal research scientist for MIT’s Center for Information Systems Research.
There is no question that artificial intelligence (AI) presents huge opportunities for companies to automate business processes. However, as you prepare to insert machine learning applications into your business processes, I'd recommend that you not fantasize about how a computer that can win at Go or poker can surely help you win in the marketplace. A better reference point is your experience implementing your enterprise resource planning (ERP) or another enterprise system. Yes, effective ERP implementations enhanced the competitiveness of many companies, but for a greater number the experience was closer to a nightmare. The promised opportunity never came to fruition.
Why am I raining on the AI parade? Because, as with enterprise systems, AI inserted into businesses drives value by improving processes through automation. But eventually, the outputs of most automated processes require people to do something. As most managers have learned the hard way, computers can process data just fine, but that processing isn’t worth much if people are feeding them bad data in the first place or don’t know what to do with information or analysis once it’s provided.
With Cynthia Beath, Monideepa Tarafdar, and Kate Moloney, I’ve been studying how companies insert value-adding AI algorithms into their processes. As other researchers and practitioners have observed, we are finding that most machine learning applications augment, rather than replace, human efforts. In doing so, they demand changes in what people are doing. And in the case of AI — even more than was true with ERPs — those changes eliminate many nonspecialized tasks and create skilled tasks that require good judgment and domain expertise.
For example, fraud detection applications will reduce the time that people spend looking for anomalies, but increase requirements for deciding what to do about those anomalies. An AI application might allow financial analysts to spend less time extracting data on financial performance, but it adds value only if someone spends more time considering the implications of that performance. Augmented with AI applications, customer service staff can spend fewer hours resolving routine problems, but they are more likely to improve the company if at least some of that saved time is reallocated to better understanding the problems customers are experiencing with the company’s most recent offerings.
Many leaders suspect that they will generate value from AI by recruiting more data scientists. Of course, there's a shortage of data scientists, and some of them are more attracted to the challenge of building an application that wins at poker than to solving a business problem. Others will be inspired to find a cure for cancer or to stop global warming. So financial services and insurance companies attempting to uncover fraud and tech companies hoping to improve customer satisfaction will be fighting over the remaining talent.
But securing data scientists is not your biggest challenge. Data scientists can develop useful algorithms, but domain experts are needed to help train the machine to recognize important patterns and understand new data. Domain experts include top analysts, contract managers, salespeople, recruiters, and other specialists who are not only experts at their jobs but who are acutely aware of how they deliver excellence. That may involve just a few key people for a given application, but they’d better be good. And we still haven’t gotten to the really hard part!
Ultimately, you need people who can use probabilistic output to guide actions that make your company more effective. Probabilistic outputs are no problem when Salesforce.com Inc.’s AI tool Einstein indicates that one lead has a 95% chance of converting into a sale, while another has a 60% chance. The salesperson knows what to do with that information. But when a recruiter learns from an AI application that a job candidate has a 50% likelihood of being a good fit for a particular opening, what’s the next step?
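One common answer to that question is to translate a model's probability into a small set of actions with explicit thresholds, reserving the ambiguous middle band for human judgment. The sketch below is purely illustrative; the function name and threshold values are assumptions, not part of any real recruiting system, and real thresholds would be tuned to the business's tolerance for false positives and false negatives.

```python
def triage_candidate(fit_probability, accept_threshold=0.8, reject_threshold=0.3):
    """Map a model's fit probability to a next action (illustrative only).

    Thresholds are hypothetical: a real system would calibrate them
    against the cost of interviewing weak candidates versus the cost
    of passing over strong ones.
    """
    if fit_probability >= accept_threshold:
        return "fast-track interview"
    if fit_probability <= reject_threshold:
        return "decline politely"
    # The ambiguous middle band (e.g., the 50% candidate) is exactly
    # where domain expertise and human judgment must take over.
    return "manual review"
```

Under this scheme, the 50% candidate lands in "manual review": the machine has not eliminated the recruiter's work, only concentrated it on the hard cases.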
When a machine-learning application is helping a lawyer identify potentially relevant legal precedents, or helping a vendor management team ensure compliance with a contract, or helping a banker decide whether a particular customer qualifies for a loan, the machine is taking over mundane tasks. Machines can surely learn to develop spreadsheets and search large databases for relevant information. But to generate competitive advantage from machine-learning applications, you’ll need to upgrade your people’s skills. You’ll also need to redesign their accountabilities, so that they are empowered and motivated to deploy machines when they believe that doing so will enhance outcomes. In short, you will need to build an entire workforce of intelligence-consuming, action-oriented superstars.
There are examples, of course, in which AI algorithms fully automate a process rather than augment human efforts. Google DeepMind might automatically adjust temperature settings in a data center. Similarly, IBM Watson can trigger automated alerts to insurance customers in an area likely to be hit by a hailstorm. But these are exceptions. More often, machine-learning applications are helping people accomplish something. Like people, machines have natural limits, which tend to leave part of the task, the part that doesn't fit the algorithm well, to a person. When a machine detects fraud or predicts customer or employee churn with 90% accuracy, people must address the other 10%, which will be the toughest 10%. The machine will assuredly take care of the easy ones.
Addressing the toughest cases is particularly challenging because many AI algorithms produce results that are difficult to interpret. When a machine learning algorithm decides who gets a loan and who doesn't, forget about trying to advise a client on how to qualify. Financial institutions cannot use continuously learning algorithms for fraud detection because such algorithms aren't stable and thus aren't explainable. Machine intelligence is not a substitute for human intelligence; we need to know why we're doing what we're doing!
None of the issues associated with AI augmentation of your people are insurmountable. Great companies are already empowering their people with better information produced by smart machines. Those machines sift through far more data much faster than people can. They also discover complex relationships that can be exposed only with massive amounts of data and a large pool of contrasting outcomes. Companies are succeeding with AI by partnering smart machines with smart people, who are learning how to take advantage of what those machines can do. In short, AI implementation success depends on your ability to hire and develop problem-solvers, equip them with data (and potentially AI), and then empower them to actually solve problems. Note that addressing skill requirements this way may well trash your existing hiring and development practices.
Companies that view smart machines purely as a cost-cutting opportunity are likely to insert them in all the wrong places and all the wrong ways. These companies will automate existing processes rather than imagine new ones. They will cut jobs rather than upgrade roles. These are the companies that will find that implementing AI is little more than a reprise of the ERP nightmare.