Justifying Human Involvement in the AI Decision-Making Loop

Despite their increasingly sophisticated decision-making abilities, AI systems still need human inputs.


Topics

Artificial Intelligence and Business Strategy

The Artificial Intelligence and Business Strategy initiative explores the growing use of artificial intelligence in the business landscape. The exploration looks specifically at how AI is affecting the development and execution of strategy in organizations.

In collaboration with

BCG

In 1983, during a period of high Cold War tension, a Soviet early-warning system abruptly sounded an alert warning of five incoming nuclear missiles from the United States. Stanislav Petrov, a lieutenant colonel in the Soviet Air Defense Forces, faced a difficult decision: Should he authorize a retaliatory attack? Fortunately, Petrov chose to question the system’s recommendation. He judged a real attack unlikely based on several outside factors, one of which was the small number of “missiles” the system reported, and reasoned that even if the attack were real, he did not want to be the one to complete the destruction of the planet. After his death in May 2017, obituaries credited him with “quietly saving the world” by not escalating the situation.

As it happens, Petrov was right: the false alarm arose because the computer system failed to distinguish sunlight reflecting off high-altitude clouds from the light signature of a missile launch. Keeping a human mind in this decision-making loop may have saved humankind.

As the potential for decision-making based on artificial intelligence (AI) grows, businesses face similar (though hopefully less consequential) questions about whether and when to remove humans from their decision-making processes.

There are no simple answers. As the 1983 incident demonstrates, a human can add value by scrutinizing a system’s results before acting on them. But long before that moment, people also played a foundational role: developing the algorithms underlying the classification system and selecting the data used to train and evaluate it. In this case, humans could have added even more value upstream, by building a classification system less prone to misclassification in the first place. Yet this training and development role doesn’t make the news the way the intervention role does. We don’t know how many times nuclear warning systems worked well enough to avoid raising false alarms; we only know when they didn’t. People add value not only by catching AI’s mistakes but also by helping AI learn in the first place.

Before we humans get too cozy in these roles, we should be careful not to extrapolate too much from a sample size of one. If humans are looking for justification for our continued involvement, the prevention of calamity is certainly valid. The emotional appeal of an anecdote with unacceptable consequences (“think of the children!”) is compelling. But as guidance for normal business practice, the scenario may not have much in common with how modern businesses actually use AI.

A lot has changed in 34 years. AI, while far from perfect, has improved enormously. Drawing on vast training data, systems now make far more accurate predictions in many scenarios, and they would be much less likely to misinterpret sunlight on high-altitude clouds as incoming missiles. In business, accuracy continues to improve in areas such as loan default risk and fraudulent credit card transactions, and even in less concrete (but important) decisions about the potential performance of job candidates. As AI continues to improve, the clear advantage that humans once held is diminishing.

Additionally, most scenarios that businesses face are, I hope, not as consequential as a nuclear counterattack. In the examples above, missteps will incur costs but are most likely recoverable; the repercussions of an incorrect AI decision may be far more tolerable.

One perspective to consider: If machines ruled the world, when would they want our assistance? In the nuclear attack scenario, the machines could predict, and identify as an unwanted outcome, the prospect of worldwide destruction; they would want our help to prevent it. Thinking about when machine overlords would still want to retain our help to improve their decision-making can give us insight into whether and when to remove humans from an AI decision-making loop.

With immature AI, the machines would recognize their own areas of inaccuracy and request our help. For example, when there are insufficient observations, humans can draw on our breadth of experience to infer lessons from other cases in ways that machines cannot (yet). Classification does not have to be binary (missile or no missile); systems can request human help when they are uncertain.
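
One common way to put this idea into practice is a confidence threshold: the system acts on high-confidence predictions and routes uncertain cases to a person. Here is a minimal sketch in Python, assuming a scikit-learn-style classifier that exposes class probabilities; the 0.90 threshold and the human-review handler are illustrative assumptions, not part of any particular system.

```python
# A minimal sketch of confidence-based deferral, assuming a fitted
# scikit-learn-style classifier that exposes predict_proba. The 0.90
# threshold and the review handler are illustrative assumptions.

def classify_with_deferral(model, observation, threshold=0.90):
    """Act on the model's label only when it is confident; otherwise defer."""
    probabilities = model.predict_proba([observation])[0]
    confidence = probabilities.max()
    if confidence >= threshold:
        return model.classes_[probabilities.argmax()], confidence
    # Below the threshold, the system asks a person rather than acting.
    return request_human_review(observation, confidence)


def request_human_review(observation, confidence):
    """Placeholder: route the uncertain case to a human decision-maker."""
    print(f"Model only {confidence:.0%} confident; escalating to a human.")
    return None, confidence
```

The design choice here mirrors Petrov’s situation: the machine does not pretend to certainty it lacks, and a person sees exactly the cases where human judgment is most likely to add value.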

When instantaneous decisions are required, there may not be time to involve humans. But business decisions often differ in this respect from other AI applications, such as robots and self-guided vehicles. While the pace of business may be ever accelerating, many business decisions still leave time for a second opinion, where humans’ general knowledge of context can add value. AI can continue to learn by asking humans to corroborate its decisions (when time allows) or by requesting additional training to correct its errors (when time does not allow), as the sketch below illustrates.
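
A hedged sketch of that slower feedback loop, continuing the assumptions above (the queue, the labeling step, and the retraining helper are illustrative, not any specific product’s API): uncertain cases wait for human labels, and the corrected examples are folded back into the training data.

```python
# Illustrative feedback loop: uncertain cases wait in a queue for human
# labels, and the corrected examples are appended to the training set
# for the next retraining pass. Names and structure are assumptions.

review_queue = []   # observations awaiting a human decision
corrections = []    # (observation, human_label) pairs for retraining

def defer_for_review(observation):
    """Queue a low-confidence case instead of acting on it."""
    review_queue.append(observation)

def record_human_label(observation, label):
    """Capture the human's decision so the model can learn from it."""
    corrections.append((observation, label))

def retrain_with_corrections(model, train_X, train_y):
    """Periodically fold human corrections back into the training set."""
    if corrections:
        extra_X, extra_y = zip(*corrections)
        model.fit(list(train_X) + list(extra_X),
                  list(train_y) + list(extra_y))
        corrections.clear()
    return model
```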

Machines may also need our help when we know the data we’ve trained them on is imperfect. Our knowledge of the data’s provenance can help the machines understand their limits. We may also understand more about the underlying biases (such as sexism or racism) embedded in the “right” answers in training data, biases we would like to work to correct. But as demonstrated by the recent success of AlphaGo Zero in training itself through self-play and discovering its own strategies, human roles may be diminishing here as well, particularly for narrow tasks.

I’m glad Stanislav Petrov was the human in the loop in 1983. For now, at least, human involvement is still needed in AI decision-making, particularly for initial development and training. As AI progresses, we will gain a better understanding of where humans can, and cannot, add value.

