A Code of Ethics for Smart Machines

What’s happening this week at the intersection of management and technology.



Tech Savvy

Tech Savvy was a weekly column focused on new developments at the intersection of management and technology. For more weekly roundups for managers, see our Best of This Week series.

Smart machines need ethics, too: Remember that movie in which a computer asked an impossibly young Matthew Broderick, “Shall we play a game?” Three decades later, it turns out that global thermonuclear war may be the least likely of a slew of ethical dilemmas associated with smart machines — dilemmas with which we are only just beginning to grapple.

The worrisome lack of a code of ethics for smart machines has not been lost on Alphabet, Amazon, Facebook, IBM, and Microsoft, according to a report by John Markoff in The New York Times. The five tech giants (if you buy Mark Zuckerberg’s contention that he isn’t running a media company) have formed an industry partnership to develop and adopt ethical standards for artificial intelligence — an effort that Markoff infers is motivated as much to head off government regulation as to safeguard the world from black-hearted machines.

On the other hand, the first of a century’s worth of quinquennial reports from Stanford’s One Hundred Year Study on Artificial Intelligence (AI100) throws the ethical ball into the government’s court. “American law represents a mixture of common law, federal, state, and local statutes and ordinances, and — perhaps of greatest relevance to AI — regulations,” its authors declare. “Depending on its instantiation, AI could implicate each of these sources of law.” But they don’t offer much concrete guidance to lawmakers or regulators — they say it’s too early in the game to do much more than noodle about where ethical (and legal) issues might emerge.

In the meantime, if you’d like to get a taste of the kinds of ethical decisions that smart machines — like self-driving cars — are already facing, visit MIT’s Moral Machine project. Run through the scenarios and decide for yourself who or what the self-driving car should kill. Aside from the fun of deciding whether to run over two dogs and a pregnant lady or drive two old guys into the concrete barrier, it’ll help the research team create a crowdsourced view of how humans expect ethical machines to act. This essay from UVA’s Bobby Parmar and Ed Freeman will also help fuel your thinking.

Shrugging off blockchain hacks: Speaking of ethics, can anyone tell me how to rip off $100 million or so in bitcoins? It seems like a surefire way to top off my retirement account. Heck, according to a new Reuters article by Gertrude Chavez-Dreyfuss, “a third of bitcoin trading platforms have been hacked, and nearly half have closed in the half dozen years since they burst on the scene.”

This seems like a pretty abysmal reflection on blockchain — the distributed ledger technology behind bitcoins that is supposed to secure just about every kind of asset transaction known to humankind. But it doesn’t seem to be slowing adoption. One of the latest examples was just announced by UBS, which, reports Jemima Kelly in Reuters, has teamed up with three other major banks to make payments and settle transactions using blockchain technology.

“Blockchain projects such as this have the potential to shake up the settlement system used by banks, under which transactions can take several days to finalize and which costs the financial industry $65-$80 billion a year,” writes Kelly, who adds that an estimated 80% of the world’s commercial banks will have launched blockchain projects by next year.

In case you’re not entirely clear on what blockchain is or why it’s so popular these days, Don Tapscott’s newly posted TED talk is worth a listen. “For the first time now in human history, people everywhere can trust each other and transact peer-to-peer,” says Tapscott of blockchain technology. “And trust is established not by some big institution, but by collaboration, by cryptography, and by some clever code.”
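The cryptographic core of Tapscott’s claim is simpler than it sounds: each block records the hash of the block before it, so changing any past entry breaks every later link. Here’s a minimal sketch in Python of that hash-chaining idea — the function names (`block_hash`, `verify_chain`) are illustrative, and this is not Bitcoin’s actual protocol, which adds proof-of-work, signatures, and peer-to-peer consensus on top:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically with SHA-256."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    return {"data": data, "prev_hash": prev_hash}

def verify_chain(chain: list) -> bool:
    """Valid only if each block records the hash of the block before it."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

# Build a three-block chain of toy transactions.
genesis = make_block("genesis", "0" * 64)
b1 = make_block("Alice pays Bob 5", block_hash(genesis))
b2 = make_block("Bob pays Carol 2", block_hash(b1))
chain = [genesis, b1, b2]

print(verify_chain(chain))   # True
genesis["data"] = "genesis (tampered)"
print(verify_chain(chain))   # False: editing history breaks the links
```

Tampering with any block changes its hash, so the next block’s recorded `prev_hash` no longer matches — which is why the exchange hacks in the Reuters story targeted trading platforms’ wallets and key management, not the ledger itself.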

Sounds promising, but given all of those bitcoin hacks, does that clever code need an update?

Goal-setting on steroids: It’s been a long time since Peter Drucker wove together the threads that became management by objectives (MBO) — the systematic articulation and collective pursuit of business goals. Over the years, many companies embraced the concept, and it evolved into a host of goal-setting systems. And now, with a boost from digital technologies, it’s being supercharged.

One example comes from a company named BetterWorks, which is the subject of a new article in First Round Review. BetterWorks has taken the Objectives and Key Results (OKRs) system developed at Intel and Oracle in the 1980s and adapted by Google, added goal-science principles to it, and embedded it all in software.

The software, reports First Round Review, connects everyone in a company to corporate goals, enhances engagement and cross-functional collaboration, and continually tracks progress. “The quantified self movement — all these activity trackers like FitBit and Jawbone — have proven that people want to get frequent, measurable, visual and — this is key — graphical feedback,” explains BetterWorks CEO Kris Duggan. “When individuals get this kind of positive, visual feedback, it literally shapes their behavior to take a more measured and consistent approach toward their goals.”

A digitized goal-setting system also allows for a very granular approach to goal setting and attainment: In 2015, for example, a 1,000-employee Internet company adopted BetterWorks software and set 20,000 objectives with 80,000 key results.
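The mechanics behind numbers like those are straightforward: each objective rolls up the progress of its key results. Here’s a minimal sketch of that OKR roll-up in Python — the class and field names are hypothetical illustrations, not BetterWorks’ actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    """A measurable result with a numeric target and current value."""
    name: str
    target: float
    current: float = 0.0

    def progress(self) -> float:
        # Progress is capped at 100% even if the target is exceeded.
        return min(self.current / self.target, 1.0) if self.target else 0.0

@dataclass
class Objective:
    """An objective's progress is the average of its key results."""
    name: str
    key_results: list = field(default_factory=list)

    def progress(self) -> float:
        if not self.key_results:
            return 0.0
        return sum(kr.progress() for kr in self.key_results) / len(self.key_results)

obj = Objective("Grow self-serve signups")
obj.key_results.append(KeyResult("New trials per week", target=200, current=160))
obj.key_results.append(KeyResult("Trial-to-paid conversion %", target=10, current=5))
print(f"{obj.progress():.0%}")  # 65% (average of 80% and 50%)
```

Averaging key-result completion is one common convention; a real system would also track owners, time periods, and cross-team goal alignment — which is where the “20,000 objectives with 80,000 key results” scale comes from.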




Comment (1)
Michael Zeldich
Smart machines need ethics?
The belief that human ethics will have any meaning for “smarter than us” machines is misleading and dangerous.
On the one hand, AI does not yet have a basis for developing such machines, but it is possible to create them, and we should be prepared.
On the other hand, for obvious reasons, “smarter than us” machines will not perceive us humans as members of their society.
Therefore, human moral rules, which differ from culture to culture, will have no governing value for them.