Justifying Human Involvement in the AI Decision-Making Loop

Despite their increasingly sophisticated decision-making abilities, AI systems still need human inputs.

Reading Time: 4 min 

Topics

Artificial Intelligence and Business Strategy

The Artificial Intelligence and Business Strategy initiative explores the growing use of artificial intelligence in the business landscape. The exploration looks specifically at how AI is affecting the development and execution of strategy in organizations.

In collaboration with

BCG

In 1983, during a period of high Cold War tensions, Soviet early-warning systems abruptly sounded an alert warning of five incoming nuclear missiles from the United States. Stanislav Petrov, a lieutenant colonel in the Soviet Air Defense Forces, faced a difficult decision: Should he authorize a retaliatory attack? Fortunately, Petrov chose to question the system’s recommendation. Instead of approving retaliation, he judged that a real attack was unlikely based on several outside factors, one of which was the small number of “missiles” the system reported; moreover, even if the attack was real, he did not want to be the one to complete the destruction of the planet. After his death in May 2017, a profile credited him with “quietly saving the world” by not escalating the situation.

As it happens, Petrov was right: The alert stemmed from the computer system’s failure to distinguish the sun’s reflection off clouds from the light signature of a missile launch. Retaining a human mind in this decision-making loop may have saved mankind.

As the potential for decision-making based on artificial intelligence (AI) grows, businesses face similar (though hopefully less consequential) questions about whether and when to remove humans from their decision-making processes.

There are no simple answers. As the 1983 incident demonstrates, a human can add value by scrutinizing a system’s results before action. But long before that, people also had a foundational role in developing the algorithms underlying the classification system and in selecting the data used to train and evaluate it. In this case, humans could have added more value by helping the classification system avoid misclassification in the first place. Yet this training and development role doesn’t seem to make the news the way the intervention role does. We don’t know how many times nuclear warning systems worked well enough to avoid raising false alarms; we only know when they didn’t. In other words, people add value not only by intervening but also by helping AI learn in the first place.

Before we humans get too cozy in these roles, we should be careful about extrapolating too much from a sample size of one. Still, if humans are looking for justification for our continued involvement, the prevention of calamity is certainly valid.

