Are AI Learning Scenarios Unpredictable Enough?
If AI algorithms are to respond effectively in real-world situations, developers need to consider humanity’s darker impulses.
A fender bender heard around the AI world happened last week in Las Vegas when a self-driving shuttle was involved in a minor collision during its first hour of service. It is fitting that this happened in Vegas, a city built on games. How would you score this match between humans and machines? Is it 1-0 in favor of the humans, a victory for the home team?
Not so fast.
In the aftermath of the “calamity,” sensational headlines played to our default assumption that the machine was to blame. Perhaps we humans reveled a tiny bit in the schadenfreude of seeing our emerging computer overlords beaten so quickly once practice ended and the real game began.
In fact, the autonomous electric shuttle was carrying eight passengers around Las Vegas’ Fremont East entertainment district when a human-operated delivery truck backed into its front bumper. Recognizing the oncoming truck, the shuttle stopped to avoid an accident. The human driving the truck, however, did not stop. The matchup should instead be scored 0-1, in favor of the AI.
Worse, this accident illustrates a crucial challenge in the interplay between AI and humans: systems are typically designed and tested in contexts free of nefarious actors, where the other players are well intentioned and follow the rules. After all, the first step is to get something working.
How does design for the “well-intentioned” manifest itself here? Consider how the situation unfolded: The shuttle seems to have accurately recognized the situation and predicted an imminent collision. This is a current strength of AI — processing input from many signals quickly to build an accurate short-term estimate of what will happen.
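To make the prediction step concrete, here is a minimal sketch of short-horizon collision forecasting under simplified one-dimensional kinematics. The function names, the gap and closing-speed inputs, and the two-second planning horizon are illustrative assumptions, not details of the shuttle’s actual perception stack.

```python
# A minimal, illustrative sketch of short-horizon collision prediction.
# Assumes simplified 1-D kinematics; a real perception stack fuses lidar,
# radar, and camera tracks. All names and thresholds here are hypothetical.

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until contact, or infinity if the object is not closing."""
    if closing_speed_mps <= 0.0:  # object is stationary or moving away
        return float("inf")
    return gap_m / closing_speed_mps

def collision_imminent(gap_m: float, closing_speed_mps: float,
                       horizon_s: float = 2.0) -> bool:
    """Flag a predicted collision within the planning horizon."""
    return time_to_collision(gap_m, closing_speed_mps) < horizon_s

# Example: a truck 3 m away, backing toward the shuttle at 2 m/s.
print(collision_imminent(3.0, 2.0))  # True: contact predicted in 1.5 s
```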
Given the prediction, the next step was more difficult. Should the shuttle have honked? That seems fairly risk-free. Reversed and backed away from the approaching truck? That seems more difficult and riskier than a honk. In this case, the shuttle stopped and did nothing — when in doubt, first do no harm. For imperfect AI, faced with uncertainty, a reasonable default is to stop and do nothing.
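That default can be sketched as a simple fallback policy: estimate the risk of each candidate maneuver, act only on estimates the planner trusts, and otherwise do nothing. Everything below (the action set, risk scores, and confidence threshold) is a hypothetical illustration, not the shuttle’s real control logic.

```python
# A minimal sketch of the "when in doubt, do no harm" fallback described
# above. Actions, risk scores, and the confidence threshold are invented
# for illustration, not taken from any actual shuttle software.

from dataclasses import dataclass

@dataclass
class Candidate:
    action: str        # e.g., "honk", "reverse"
    risk: float        # estimated risk of taking the action, 0..1
    confidence: float  # how much the planner trusts that estimate, 0..1

def choose_action(candidates: list[Candidate],
                  min_confidence: float = 0.9) -> str:
    # Consider only actions whose risk estimate the planner trusts.
    trusted = [c for c in candidates if c.confidence >= min_confidence]
    if not trusted:
        return "stop"  # faced with uncertainty, default to doing nothing
    return min(trusted, key=lambda c: c.risk).action

# With trustworthy estimates, the cheapest safe action (a honk) wins.
options = [
    Candidate("honk", risk=0.05, confidence=0.95),
    Candidate("reverse", risk=0.40, confidence=0.95),
]
print(choose_action(options))  # "honk"

# With shaky estimates, no action clears the bar and the shuttle stops.
options = [
    Candidate("honk", risk=0.05, confidence=0.60),
    Candidate("reverse", risk=0.40, confidence=0.50),
]
print(choose_action(options))  # "stop": nothing is trusted enough to try
```

The threshold captures the article’s point: the lowest-risk action is only the safest choice if the planner’s risk estimates can be trusted, and below that bar inaction becomes the conservative move.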
But this incident should show businesses that designing only for well-intentioned actors won’t be enough.