If AI algorithms are to respond effectively in real-world situations, developers need to consider humanity’s darker impulses.
A fender bender heard around the AI world happened last week in Las Vegas when a self-driving shuttle was involved in a minor collision during its first hour of service. It is ironic that this happened in Vegas, a city built on games. How would you score this match between humans and machines? Is it 1-0 in favor of the humans, a victory for the home team?
Not so fast.
In the aftermath of the “calamity,” sensational headlines played to our default thinking that the machine was to blame. Perhaps we humans reveled a tiny bit in the schadenfreude of seeing our emerging computer overlords beaten so quickly when practice was over and the real game started.
But in this incident, the autonomous electric vehicle was shuttling eight passengers around Las Vegas’ Fremont East entertainment district when a human-operated delivery truck backed into the front bumper of the shuttle. Recognizing the oncoming truck, the shuttle stopped to avoid an accident. The human driving the truck, however, did not stop. We instead need to score this matchup as 0-1 in favor of AI.
Worse, this accident illustrates a crucial challenge in the interplay between AI and humans. Systems are typically configured in contexts without nefarious actors, where players are instead well-intentioned and follow the rules. After all, the first step is to get something working.
How does design for the “well-intentioned” manifest itself here? Consider how the situation unfolded: The shuttle seems to have accurately recognized the situation and predicted an imminent collision. This is a current strength of AI — processing input from many signals quickly to build an accurate short-term estimate of what will happen.
Given the prediction, the next step was more difficult. Should the shuttle have honked? That seems fairly risk-free. Reversed and backed away from the approaching truck? That seems more difficult and riskier than a honk. In this case, the shuttle stopped and did nothing — when in doubt, first do no harm. For imperfect AI, faced with uncertainty, a reasonable default is to stop and do nothing.
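The "when in doubt, first do no harm" default can be made concrete with a minimal sketch. This is not the shuttle's actual software; the function name, thresholds, and actions are all hypothetical, chosen only to mirror the reasoning above.

```python
def respond_to_prediction(p_collision: float, confidence: float) -> str:
    """Choose a response to a predicted collision.

    p_collision: estimated probability of an imminent impact (0.0-1.0)
    confidence: how much the system trusts its own estimate (0.0-1.0)
    """
    if p_collision < 0.1:
        return "proceed"  # no imminent threat predicted
    if confidence >= 0.9:
        # Honking is nearly risk-free; a riskier maneuver like
        # reversing would demand even more certainty than this.
        return "honk"
    # Faced with uncertainty, the safe default is to stop and do nothing.
    return "stop"

print(respond_to_prediction(0.05, 0.50))  # proceed
print(respond_to_prediction(0.90, 0.95))  # honk
print(respond_to_prediction(0.90, 0.60))  # stop
```

Note that the riskier the action, the more certainty this policy demands before taking it, which is exactly why a low-confidence system converges on stopping.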
But this incident should show businesses that thinking about well-intentioned actors won’t be enough. The first law of robotics doesn’t stop with “a robot may not injure a human being”; it continues with, “or, through inaction, allow a human being to come to harm.”
Now that we know that the shuttle will stop, we have to think nefariously. For example, I live near a busy street, so there is no way that I would step out in front of traffic; there are way too many distracted drivers focused on their mobile devices and not on me. But, now that I know that vehicles will behave like the shuttle, why not step out whenever I feel like crossing the road? I can rely on excellent sensors, prediction, and lightning-fast braking to protect me. Going further, could I create traffic chaos on demand by jumping out unexpectedly? This scenario is not unlike a denial-of-service attack on computer systems, in which attackers exploit a system's predictable responses to shut it down and, in some cases, hold it for ransom.
The core of the problem is transparency — perfect information versus imperfect information. When thinking about the interplay between humans and machines from a game-theory perspective, information changes games radically. The prisoner’s dilemma is only interesting if neither prisoner knows what the other will do — that is, both have imperfect information. If one prisoner does have perfect information — that is, knows what the other prisoner (with imperfect information) will do — then the dilemma no longer exists (at least for the one with perfect information).
Similarly, if humans know what AI will do, but AI systems have imperfect information, then we are creating a scenario that plays to AI’s weaknesses.
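The pedestrian scenario above can be sketched as a small game. The payoff numbers and function names here are purely illustrative assumptions: the point is that once the vehicle's policy is fixed and known, the pedestrian's dangerous outcome can never occur, so stepping out becomes the best move.

```python
# pedestrian_payoff[(pedestrian_move, vehicle_move)] -> payoff to pedestrian
# (hypothetical values; higher is better for the pedestrian)
pedestrian_payoff = {
    ("wait", "stop"): 0,        # nothing happens
    ("wait", "go"): 0,
    ("step_out", "stop"): 1,    # crosses immediately, no risk
    ("step_out", "go"): -100,   # injury
}

def vehicle_policy(pedestrian_detected: bool) -> str:
    # The imperfect-information AI's safe default: when in doubt, stop.
    return "stop" if pedestrian_detected else "go"

def pedestrian_best_move() -> str:
    # The pedestrian has perfect information about the vehicle's policy:
    # stepping out always triggers "stop", so the -100 outcome is
    # unreachable and the risky move dominates.
    vehicle_move = vehicle_policy(pedestrian_detected=True)
    return max(["wait", "step_out"],
               key=lambda m: pedestrian_payoff[(m, vehicle_move)])

print(pedestrian_best_move())  # step_out
```

The asymmetry is the whole game: the human can condition on the machine's known policy, while the machine cannot condition on the human's intent.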
Consider human resources analytics. Once job applicants figured out that automated systems were looking at keywords, they got creative and included every possible keyword in their resume — but in white font and tiny letters. Awareness of the algorithm meant applicants could suddenly win the first round of screening for any job.
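A minimal sketch shows why the white-font trick works: a naive keyword matcher operates on extracted text and cannot tell visible words from invisible ones. The required keywords and resume text here are hypothetical.

```python
REQUIRED_KEYWORDS = {"python", "sql", "leadership"}

def passes_screen(resume_text: str) -> bool:
    """Naive first-round screen: pass if every required keyword appears."""
    words = set(resume_text.lower().split())
    return REQUIRED_KEYWORDS <= words  # subset test

honest = "Five years of Python development"
# The applicant appends every keyword in white 1-point font: a human
# reviewer never sees it, but the text parser extracts it all the same.
stuffed = honest + " python sql leadership"

print(passes_screen(honest))   # False (missing sql, leadership)
print(passes_screen(stuffed))  # True
```

Once applicants know the screen's rule, passing it no longer signals anything about the applicant, which is the general pattern this article describes.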
Or consider customer service. An equivalent to the first law of robotics in the customer service context might be the bromide that the “customer is always right.” For example, once information leaked that instead of waiting for a repair, customers could get a newer computer as a replacement for an older model MacBook Pro with a defunct battery — in essence, a free upgrade — Apple Inc. quickly became backlogged. As AI becomes more prevalent in customer service, consumers will game the algorithm and acquire replacement products when a repair would be appropriate, or full refunds when partial credit would be fair.
I’ve been concerned that the preoccupation with autonomous driving and humanoid likenesses is distracting us from the more mundane, but potentially more transformational, effects of artificial intelligence in business. While nothing is wrong with these fascinating applications of AI, managers need to learn from them, not just gawk. The shuttle example illustrates how immoral actors in a game can make the other side always lose.
AI might be great at defeating human game champions. In games like Go, chess, Jeopardy!, and poker, Las Vegas is probably betting on the machine now. But outside of these games — in real-life situations — it’s the AI that can be gamed. And we, as the designers of these AI systems, need to think through how ill-intentioned actors can play these machines. This round of the match between AI and humans will last much longer than the time it takes to make a simple repair to a fender.