What Humans Lose When We Let AI Decide

Why you should start worrying about artificial intelligence now.


Frontiers

An MIT SMR initiative exploring how technology is reshaping the practice of management.

It’s been more than 50 years since HAL, the malevolent computer in the movie 2001: A Space Odyssey, first terrified audiences by turning against the astronauts he was supposed to protect. That cinematic moment captures what many of us still fear in AI: that it may gain superhuman powers and subjugate us. But instead of worrying about futuristic sci-fi nightmares, we should wake up to an equally alarming scenario that is unfolding before our eyes: We are increasingly, unsuspectingly yet willingly, abdicating our power to make decisions based on our own judgment, including our moral convictions. What we believe is “right” risks becoming no longer a question of ethics but simply a matter of what a mathematical calculation deems “correct.”

Day to day, computers already make many decisions for us, and on the surface, they seem to be doing a good job. In business, AI systems execute financial transactions and help HR departments assess job applicants. In our private lives, we rely on personalized recommendations when shopping online, monitor our physical health with wearable devices, and live in homes equipped with “smart” technologies that control our lighting, climate, entertainment systems, and appliances.

Unfortunately, a closer look at how we are using AI systems today suggests that we may be wrong in assuming that their growing power is mostly for the good. While much of the current critique of AI is still framed by science fiction dystopias, the way it is being used now is increasingly dangerous. That’s not because Google and Alexa are breaking bad but because we now rely on machines to make decisions for us and thereby increasingly substitute data-driven calculations for human judgment. This risks changing our morality in fundamental, perhaps irreversible, ways, as we argued in our recent essay in Academy of Management Learning & Education (which we’ve drawn on for this article).1

When we employ judgment, our decisions take into account the social and historical context and different possible outcomes, with the aim, as philosopher John Dewey wrote, “to carry an incomplete situation to its fulfilment.”2 Judgment relies not only on reasoning but also, and importantly so, on capacities such as imagination, reflection, examination, valuation, and empathy. Therefore, it has an intrinsic moral dimension.


References

1. C. Moser, F. den Hond, and D. Lindebaum, “Morality in the Age of Artificially Intelligent Algorithms,” Academy of Management Learning & Education, April 7, 2021, https://journals.aom.org.

2. J. Dewey, “Essays in Experimental Logic” (Chicago: University of Chicago Press, 1916), 362.

3. B.C. Smith, “The Promise of Artificial Intelligence: Reckoning and Judgment” (Cambridge, Massachusetts: MIT Press, 2019).

4. K.B. Forrest, “When Machines Can Be Judge, Jury, and Executioner: Justice in the Age of Artificial Intelligence” (Singapore: World Scientific Publishing, 2021).

5. J. MacCormick, “Nine Algorithms That Changed the Future” (Princeton, New Jersey: Princeton University Press, 2012), 3.

6. E. Morozov, “To Save Everything, Click Here: The Folly of Technological Solutionism” (New York: PublicAffairs, 2013).

Reprint #: 63307



Comments (3)
Adam Wasserman
I very much enjoyed this article for its realism. Too many people are given completely unrealistic expectations of AI because of sensationalistic journalism. A point that I would like to make in addition to what the authors point out is this: There is absolutely no evidence that AI — no matter how powerful it may become — will ever be "human-like". There is a great deal more to human consciousness than computational ability. In fact, even the very best science understands very little at all about consciousness. This means that AI is no more likely to result in either utopia or hell than any other human technology, the printing press or the Internet being two relevant examples.
Milind Nadkarni
Many organizations are treating AI endeavours as IT projects. These are often not properly justified by business users either.
Vinod Nair
Unfortunately, organisations are eager to make a big story of using AI in their business, and doing so also typically tends to have a positive impact on the share price.