What to Read Next
Smart machines need ethics, too: Remember that movie in which a computer asked an impossibly young Matthew Broderick, “Shall we play a game?” Four decades later, it turns out that global thermonuclear war may be the least likely of a slew of ethical dilemmas associated with smart machines — dilemmas with which we are only just beginning to grapple.
The worrisome lack of a code of ethics for smart machines has not been lost on Alphabet, Amazon, Facebook, IBM, and Microsoft, according to a report by John Markoff in The New York Times. The five tech giants (if you buy Mark Zuckerberg’s contention that he isn’t running a media company) have formed an industry partnership to develop and adopt ethical standards for artificial intelligence — an effort that Markoff suggests is motivated as much by a desire to head off government regulation as to safeguard the world from black-hearted machines.
On the other hand, the first of a century’s worth of quinquennial reports from Stanford’s One Hundred Year Study on Artificial Intelligence (AI100) throws the ethical ball into the government’s court. “American law represents a mixture of common law, federal, state, and local statutes and ordinances, and — perhaps of greatest relevance to AI — regulations,” its authors declare. “Depending on its instantiation, AI could implicate each of these sources of law.” But they don’t offer much concrete guidance to lawmakers or regulators — they say it’s too early in the game to do much more than noodle about where ethical (and legal) issues might emerge.
In the meantime, if you’d like a taste of the kinds of ethical decisions that smart machines — like self-driving cars — are already facing, visit MIT’s Moral Machine project. Run through the scenarios and decide for yourself who or what the self-driving car should kill. Aside from the fun of deciding whether to run over two dogs and a pregnant lady or drive two old guys into the concrete barrier, it’ll help the research team build a crowd-sourced picture of how humans expect ethical machines to act. This essay from UVA’s Bobby Parmar and Ed Freeman will also help fuel your thinking.