Artificial Intelligence and Business Strategy
In collaboration with BCG
In 1983, during a period of high Cold War tensions, Soviet early-warning systems abruptly sounded an alert warning of five incoming nuclear missiles from the United States. A lieutenant colonel of the Soviet Air Defense Forces, Stanislav Petrov, faced a difficult decision: Should he authorize a retaliatory attack? Fortunately, Petrov chose to question the system’s recommendation. Instead of approving the retaliation, he judged that a real attack was unlikely based on several outside factors, one of which was the small number of “missiles” reported by the system. Moreover, even if the attack was real, he did not want to be the one to complete the destruction of the planet. After his death in May 2017, a profile credited him with “quietly saving the world” by not escalating the situation.
As it happens, Petrov was right: The mistake was a failure of the computer system to distinguish the sun’s reflection off clouds from the light signature of an actual missile launch. Retaining a human in this decision-making loop may have saved mankind.
As the potential for decision-making based on artificial intelligence (AI) grows, businesses face similar (though, one hopes, less consequential) questions about whether and when to remove humans from their decision-making processes.
There are no simple answers. As the 1983 incident demonstrates, a human can add value by scrutinizing a system’s results before acting on them. But long before that moment, people also played a foundational role: developing the algorithms underlying the classification system and selecting the data used to train and evaluate it. In this case, humans could have added more value by helping the classification system avoid the misclassification in the first place. Yet this training and development role doesn’t make the news the way the intervention role does. We don’t know how many times nuclear warning systems operated well and refrained from raising false alarms; we only know when they didn’t. People add value not only by overriding AI but also by helping it learn in the first place.
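The human roles described above can be made concrete as a decision policy. The sketch below is purely illustrative (the function names, thresholds, and inputs are hypothetical, not drawn from any real warning system): an automated classifier’s alert is routed to a human reviewer when the model’s confidence is low or when corroborating detections are few, echoing Petrov’s reasoning that a genuine first strike would involve far more than five missiles.

```python
# A minimal human-in-the-loop routing sketch. All names and thresholds
# are hypothetical, chosen for illustration only.

def route_alert(confidence: float, detection_count: int,
                confidence_threshold: float = 0.99,
                min_detections: int = 20) -> str:
    """Decide whether an automated alert triggers automatic action
    or is escalated to a human reviewer.

    Low model confidence OR a small number of corroborating
    detections sends the alert to a person for scrutiny.
    """
    if confidence < confidence_threshold or detection_count < min_detections:
        return "escalate_to_human"
    return "act_automatically"

# Five reported "missiles" go to a human, even at fairly high confidence.
print(route_alert(confidence=0.97, detection_count=5))
```

The design choice here mirrors the article’s two human roles: people set the thresholds up front (the training and development role), and people handle the escalated cases at decision time (the intervention role).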
Before we humans get too cozy in these roles, though, we should be careful about extrapolating too much from a sample size of one. Still, if humans are looking for justification for our continued involvement, the prevention of calamity is certainly valid.