Two questions that managers of intelligent machines should ask: It’s been a couple of years since Stephen Hawking warned that artificial intelligence could “spell the end of the human race.” The terminators aren’t here yet, and unless they arrive very soon, managers of AI-based technology have two more immediate issues to address, according to Vasant Dhar of NYU’s Stern School of Business and Center for Data Science.
The first, which Dhar takes up in a new article on TechCrunch, is how to “design intelligent learning machines that minimize undesirable behavior.” Pointing to two high-profile juvenile delinquents, Microsoft’s Tay chatbot and Google’s self-driving Lexus, he reminds us that it’s very hard to control AI machines in complex settings. “There is no clear answer to this vexing issue,” says Dhar. But he does offer some guidance: analyze the machine’s training errors; use an “adversary” — through means such as crowdsourcing — to try to trip up the machine; and estimate the cost of error scenarios to better manage risks.
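Dhar's last piece of guidance, estimating the cost of error scenarios, amounts to classic expected-cost risk analysis. A minimal sketch of the idea (the scenarios, probabilities, and dollar figures below are hypothetical, not from Dhar's article):

```python
# Hypothetical sketch: probability-weighted cost of a machine's error scenarios.
# Rare but catastrophic errors can dominate the total, which is why Dhar
# urges managers to estimate them explicitly rather than count errors equally.

error_scenarios = [
    # (scenario, estimated probability per decision, cost per occurrence in $)
    ("harmless misclassification", 0.05, 10),
    ("offensive or brand-damaging output", 0.001, 50_000),
    ("safety-critical failure", 0.0001, 5_000_000),
]

def expected_cost(scenarios):
    """Sum the probability-weighted costs across all error scenarios."""
    return sum(prob * cost for _, prob, cost in scenarios)

total = expected_cost(error_scenarios)
print(f"Expected cost per decision: ${total:.2f}")
```

Note that in this toy example the rarest scenario contributes the most to the expected cost, which is exactly the kind of risk a raw error rate would hide.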
The second question, which Dhar explores in an article for HBR.org, is when and when not to allow AI machines to make decisions. “We don’t have any framework for evaluating which decisions we should be comfortable delegating to algorithms and which ones humans should retain,” he writes. “That’s surprising, given the high stakes involved.” Dhar suggests addressing this issue with a risk-oriented framework that he calls a Decision Automation Map. The map plots decisions along two independent dimensions — predictability and cost per error — and suggests whether a given decision would be better made by a human or a machine.
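The logic of the Decision Automation Map can be sketched as a simple rule: automate where outcomes are highly predictable and mistakes are cheap; keep humans in the loop everywhere else. The thresholds and example decisions below are hypothetical illustrations, not values from Dhar's framework:

```python
# Illustrative sketch of a Decision Automation Map, assuming hypothetical
# thresholds on its two axes: predictability and cost per error.

def suggest_decision_maker(predictability, cost_per_error,
                           predictability_threshold=0.8,
                           cost_threshold=10_000):
    """Suggest 'machine' or 'human' based on where a decision falls on the map."""
    if predictability >= predictability_threshold and cost_per_error < cost_threshold:
        return "machine"  # highly predictable, cheap mistakes: safe to delegate
    return "human"        # unpredictable or costly mistakes: a person should decide

# Hypothetical placements on the map:
print(suggest_decision_maker(0.95, 50))         # e.g., ad targeting
print(suggest_decision_maker(0.60, 1_000_000))  # e.g., a medical diagnosis
```

The point of the sketch is that the two dimensions are independent: a decision can be very predictable and still belong to a human if each error is expensive enough.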
The DAO of blockchain business: Midway through its month-long crowdfunding campaign, a new start-up named The DAO had raised the equivalent of $150 million in ether (ETH) digital tokens. As yet, The DAO has no products, no services, and thus no sales — except for bits (bytes?) of itself. So what are investors getting for their money? An ownership stake in a business structure built on blockchain technology.