Can We Solve AI’s ‘Trust Problem’?

To address users’ wariness, makers of AI applications should stop overpromising, become more transparent, and consider third-party certification.

The sad fact is that many people don’t trust decisions, answers, or recommendations from artificial intelligence. In one survey of U.S. consumers, when presented with a list of popular AI services (for example, home assistants, financial planning, medical diagnosis, and hiring), 41.5% of respondents said they didn’t trust any of these services. Only 9% of respondents said they trusted AI with their financials, and only 4% trusted AI in the employee hiring process.1 In another survey, 2,000 U.S. consumers were asked, “When you think about AI, which feelings best describe your emotions?” “Interested” was the most common response (45%), but it was closely followed by “concerned” (40.5%), “skeptical” (40.1%), “unsure” (39.1%), and “suspicious” (29.8%).2

What’s the problem here? And can it be overcome? I believe several issues need to be addressed if AI is to be trusted in businesses and in society.

Rein in the Promises

The IT research firm Gartner places technologies such as cognitive computing, machine learning, deep learning, and cognitive expert advisers at the “peak of inflated expectations” in its hype cycle, headed toward the “trough of disillusionment.”3

Vendors may be largely to blame for this problem. Consider IBM’s very large advertising budget for Watson and its extravagant claims about Watson’s abilities. One prominent AI researcher, Oren Etzioni, has called Watson “the Donald Trump of the AI industry — [making] outlandish claims that aren’t backed by credible data.”4

Tesla’s Elon Musk is another frequent contributor to AI hype, particularly about the ability of Tesla cars to drive autonomously. The company uses the term autopilot to describe its cars’ capabilities, a label that suggests full autonomy and has generated controversy.5 Tesla cars have impressive semiautonomous driving capabilities and excel in many other respects, but they are clearly not yet fully autonomous.

Fortunately, not all companies are overselling their AI capabilities. Take, for instance, the Nordic bank SEB and its use of Aida, an intelligent agent derived from IPsoft’s Amelia.6

References

1. K. Krogue, “Artificial Intelligence Is Here to Stay, but Consumer Trust Is a Must for AI in Business,” Forbes, Sept. 11, 2017.

2. SYZYGY, “Sex, Lies, and AI,” SYZYGY Digital Insight Report 2017 (U.S. version).

3. K. Panetta, “Top Trends in the Gartner Hype Cycle for Emerging Technologies, 2017,” Smarter With Gartner, Aug. 15, 2017.

4. J. Brown, “Why Everyone Is Hating on IBM Watson, Including the People Who Helped Make It,” Gizmodo, Aug. 10, 2017.

5. R. Mitchell, “Controversy Over Tesla ‘Autopilot’ Name Keeps Growing,” Los Angeles Times, July 21, 2016.

6. E. Lundin, interview with the author, February 2018; and SEB, “Burning Passion to Use AI for World-Class Service,” press release, Aug. 21, 2017, https://sebgroup.com.

7. SYZYGY, “Sex, Lies, and AI.”

8. A. Schneider, personal communication with the author, Jan. 22, 2018.

9. Deloitte, “Robo-Advising Platforms Carry New Risks in Asset Management,” Perspectives (n.d.).
