When People Don’t Trust Algorithms

University of Chicago professor Berkeley Dietvorst explains why we can’t let go of human judgment — to our own detriment.

Frontiers

An MIT SMR initiative exploring how technology is reshaping the practice of management.

Even when faced with evidence that an algorithm will deliver better results than human judgment, we consistently choose to follow our own minds.

Why?

MIT Sloan Management Review editor in chief Paul Michelman sat down with Berkeley Dietvorst, assistant professor of marketing at the University of Chicago Booth School of Business, to discuss a phenomenon Dietvorst has studied in great detail. (See “Related Research.”) What follows is an edited and condensed version of their conversation.

MIT Sloan Management Review: What prompted you to investigate people’s acceptance, or lack thereof, of algorithms in decision-making?

Dietvorst: When I was a Ph.D. student, some of my favorite papers were old works by [the late psychology scholar and behavioral decision research expert] Robyn Dawes showing that algorithms outperform human experts at making certain types of predictions. The algorithms that Dawes was using were very simple and oftentimes not even calibrated properly.

A lot of others followed up Dawes’s work and showed that algorithms beat humans in many domains — in fact, in most of the domains that have been tested. There’s all this empirical work showing algorithms are the best alternative, but people still aren’t using them.

So we have this disconnect between what the evidence says people should do and what people are doing, and no one was researching why.

What’s an example of these simple algorithms that were already proving to be superior?

Dietvorst: One of the areas was predicting student performance during an admission review. Dawes built a simple model: Take four or five variables — GPA, test scores, etc. — assign them equal weight, average them on a numerical scale, and use that result as your prediction of how students will rank against each other in actual performance. That model — which doesn’t even try to determine the relative value of the different variables — significantly outperforms admissions experts in predicting a student’s performance.
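To make that concrete, here is a minimal sketch in Python of an equal-weight model of the kind Dietvorst describes: standardize a few predictors, average them with equal weights, and rank applicants by the result. The predictors, data, and function name are hypothetical illustrations, not Dawes’s actual model or data.

```python
import numpy as np

def equal_weight_score(applicants):
    """
    applicants: one row per applicant, one column per predictor
    (e.g., GPA, test percentile). Returns an equal-weight composite
    score for each applicant.
    """
    X = np.asarray(applicants, dtype=float)
    # Put every predictor on the same scale (z-scores) so no variable
    # dominates simply because of its units.
    z = (X - X.mean(axis=0)) / X.std(axis=0)
    # Equal weights: simply average the standardized predictors.
    return z.mean(axis=1)

# Hypothetical applicant pool: [GPA, test percentile, essay rating]
pool = [
    [3.9, 92, 4],
    [3.4, 85, 5],
    [3.7, 70, 3],
]
scores = equal_weight_score(pool)
# Rank applicants from highest to lowest predicted performance.
ranking = np.argsort(scores)[::-1]
print(ranking)
```

The point of the sketch is that nothing in it estimates how much each variable matters; the model’s only job is to combine the predictors consistently, which is where it beats expert judgment.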

Reprint #: 59106

Comments (3)
siswanto.gatot549
Algorithms are just tools with weaknesses and other imperfections, so good judgment still plays a key role.
Abhijit Bhattacharya
It's probably human nature to believe in our ability to beat the odds. That is why many people keep buying lottery tickets (in fact, lotteries are a good source of revenue for governments in many parts of the world), even though, by the numbers, it never makes sense.
Stacy Shamberger
I've seen so many examples, especially in business, where an algorithm is not correct and, when unpacked, shows a variety of weaknesses, some intentional, from the calculations to the actual data. I've also witnessed instances where the data in the algorithm is skewed to put a spin on the results.

It's not the algorithm that is questioned but the data and the structure. Questioning data seems to have become part of human nature in this day and age, and rightly so, as data can be spun in so many ways.

I love the innocence of academia around these types of things; it is refreshing, and I hope it maintains its perceived “clean” standards. But what goes on inside the halls of our educational institutions may not be what is happening with data in the outside world.

Cheers!