When People Don’t Trust Algorithms
University of Chicago professor Berkeley Dietvorst explains why we can’t let go of human judgment — to our own detriment.
Even when faced with evidence that an algorithm will deliver better results than human judgment, we consistently choose to follow our own minds.
MIT Sloan Management Review editor in chief Paul Michelman sat down with Berkeley Dietvorst, assistant professor of marketing at the University of Chicago Booth School of Business, to discuss a phenomenon Dietvorst has studied in great detail. (See “Related Research.”) What follows is an edited and condensed version of their conversation.
MIT Sloan Management Review: What prompted you to investigate people's acceptance, or lack thereof, of algorithms in decision-making?
Dietvorst: When I was a Ph.D. student, some of my favorite papers were old works by [the late psychology scholar and behavioral decision research expert] Robyn Dawes showing that algorithms outperform human experts at making certain types of predictions. The algorithms that Dawes was using were very simple and oftentimes not even calibrated properly.
Many other researchers followed up on Dawes's work and showed that algorithms beat humans in many domains — in fact, in most of the domains that have been tested. There's all this empirical work showing algorithms are the best alternative, but people still aren't using them.
So we have this disconnect between what the evidence says people should do and what people are doing, and no one was researching why.
What’s an example of these simple algorithms that were already proving to be superior?
Dietvorst: One of the areas was predicting student performance during an admission review. Dawes built a simple model: Take four or five variables — GPA, test scores, etc. — assign them equal weight, average them on a numerical scale, and use that result as your prediction of how students will rank against each other in actual performance. That model — which doesn’t even try to determine the relative value of the different variables — significantly outperforms admissions experts in predicting a student’s performance.
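The equal-weight model Dietvorst describes is simple enough to sketch in a few lines of Python. This is an illustration of the general technique (standardize each predictor, then average with equal weights), not Dawes's actual model or data; the variables and numbers below are hypothetical.

```python
# Sketch of an equal-weight ("improper") linear model in the spirit of
# Dawes: put each predictor on a common scale, then average with equal
# weights. All names and numbers here are illustrative, not real data.

def zscores(values):
    """Standardize a list of numbers to mean 0 and (population) std 1."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5 or 1.0  # guard against a constant column
    return [(v - mean) / std for v in values]

def equal_weight_scores(predictors):
    """predictors: one list per variable, all the same length.
    Returns one equal-weight composite score per applicant."""
    standardized = [zscores(col) for col in predictors]
    return [sum(row) / len(row) for row in zip(*standardized)]

# Hypothetical applicants, described by GPA and a test score.
gpa = [3.9, 3.2, 3.6]
test = [710, 640, 690]

scores = equal_weight_scores([gpa, test])
# Predicted rank order (best first) — no fitted weights involved.
ranking = sorted(range(len(scores)), key=lambda i: -scores[i])
```

The point of the sketch is that nothing is estimated from outcome data: every variable gets the same weight, yet this kind of composite is what Dawes showed could outperform expert judgment.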