A Code of Ethics for Smart Machines

What’s happening this week at the intersection of management and technology.


Tech Savvy was a weekly column focused on new developments at the intersection of management and technology. For more weekly roundups for managers, see our Best of This Week series.

Smart machines need ethics, too: Remember that movie in which a computer asked an impossibly young Matthew Broderick, “Shall we play a game?” More than three decades later, it turns out that global thermonuclear war may be the least likely of a slew of ethical dilemmas associated with smart machines — dilemmas with which we are only just beginning to grapple.

The worrisome lack of a code of ethics for smart machines has not been lost on Alphabet, Amazon, Facebook, IBM, and Microsoft, according to a report by John Markoff in The New York Times. The five tech giants (if you buy Mark Zuckerberg’s contention that he isn’t running a media company) have formed an industry partnership to develop and adopt ethical standards for artificial intelligence — an effort that Markoff infers is motivated as much by a desire to head off government regulation as by a wish to safeguard the world from black-hearted machines.

On the other hand, the first of a century’s worth of quinquennial reports from Stanford’s One Hundred Year Study on Artificial Intelligence (AI100) throws the ethical ball into the government’s court. “American law represents a mixture of common law, federal, state, and local statutes and ordinances, and — perhaps of greatest relevance to AI — regulations,” its authors declare. “Depending on its instantiation, AI could implicate each of these sources of law.” But they don’t offer much concrete guidance to lawmakers or regulators — they say it’s too early in the game to do much more than noodle about where ethical (and legal) issues might emerge.

In the meantime, if you’d like to get a taste of the kinds of ethical decisions that smart machines — like self-driving cars — are already facing, visit MIT’s Moral Machine project. Run through the scenarios and decide for yourself who or what the self-driving car should kill. Aside from the fun of deciding whether to run over two dogs and a pregnant lady or drive two old guys into the concrete barrier, your choices will help the research team build a crowd-sourced view of how humans expect ethical machines to act. This essay from UVA’s Bobby Parmar and Ed Freeman will also help fuel your thinking.



Comment (1)
Michael Zeldich
Smart machines need ethics?
Believing that human ethics will have any meaning for “smarter than us” machines is misleading and dangerous.
On one hand, AI does not yet provide a basis for developing such machines, but it is possible to create them, and we should be prepared.
On the other hand, for obvious reasons, “smarter than us” machines will not perceive us humans as members of their society.
Therefore, human moral rules, which differ from one culture to another, will have no governing value for them.