As I search online for a present for my mother, weighing the throw pillows with sewn-in sayings, plush bathrobes, and other options before settling on one, who exactly has done the deciding? Me? Or the algorithm designed to serve up the most “thoughtful” options based on a wealth of data I could never process myself? And if Mom ends up hating the embroidered floral weekender bag I “chose,” is it my fault? It’s becoming increasingly difficult to tell, because letting AI think for us saves us the trouble of doing it ourselves and owning the consequences.
AI is an immensely powerful tool that can help us live and work better by summoning vast amounts of information. It spares us from having to undergo many mundane, time-consuming, nerve-wracking annoyances. The problem is that such annoyances also play a key adaptive function: They help us learn to adjust our conduct in relation to one another and the world around us. Engaging directly with a grocery bagger, for instance, forces us to confront his or her humanity, and the interaction (ideally) reminds us not to get testy just because the line isn’t moving as quickly as we’d like. Through the give and take of such encounters, we learn to temper our impulses by exercising compassion and self-control. Our interactions serve as a constantly evolving moral-checking mechanism.
Similarly, our interactions with the wider world of physical objects force us to adapt to new environments. Walking, bicycling, or driving in a crowded city teaches us how to compensate for unforeseen obstacles such as varying road and weather conditions. On countless occasions every day, each of us seeks out an optimal compromise between shaping ourselves to fit the world and shaping the world to fit ourselves.1 This kind of adaptation has led us to become self-reflective, capable of ethical considerations and aspirations.
Our rapidly increasing reliance on AI takes such interactions out of our days. The frictionless communication AI tends to propagate may increase cognitive and emotional distance, thereby letting our adaptive resilience slacken and our ethical virtues atrophy from disuse.
1. R. Wollheim, The Thread of Life (New Haven, CT: Yale University Press, 1984).
2. M.J. Sandel, “The Case Against Perfection: What’s Wrong With Designer Children, Bionic Athletes, and Genetic Engineering,” Atlantic Monthly, April 2004, 1-11; and S. Vallor, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford, U.K.: Oxford University Press, 2016).
3. B. Frischmann and E. Selinger, Re-Engineering Humanity (Cambridge, U.K.: Cambridge University Press, 2018).
4. G. Lukianoff and J. Haidt, The Coddling of the American Mind: How Good Intentions and Bad Ideas Are Setting Up a Generation for Failure (New York: Penguin Random House, 2018).
5. D. Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011).
6. J. Friedland and B.M. Cole, “From Homo-Economicus to Homo-Virtus: A System-Theoretic Model for Raising Moral Self-Awareness,” Journal of Business Ethics 155, no. 1 (March 2019): 191-205.
7. Brett Frischmann and Evan Selinger describe the six forms of disengagement in Re-Engineering Humanity.
8. N. Carr, The Glass Cage: How Our Computers Are Changing Us (New York: W.W. Norton & Co., 2015).
9. W. Knight, “Socially Sensitive AI Software Coaches Call-Center Workers,” MIT Technology Review, 2017, www.technologyreview.com.
10. P. Lin, “Why Ethics Matters for Autonomous Cars,” in M. Maurer, J.C. Gerdes, B. Lenz, et al., eds., Autonomous Driving: Technical, Legal, and Social Aspects (Berlin: Springer Open, 2016), 69-85.
11. Sandel, “The Case Against Perfection.”
12. Carr, The Glass Cage.
13. R.H. Thaler and C.R. Sunstein, Nudge: Improving Decisions About Health, Wealth, and Happiness (New Haven, CT: Yale University Press, 2008).
14. K.G. Volpp, A.B. Troxel, S.J. Mehta, et al., “Effect of Electronic Reminders, Financial Incentives, and Social Support on Outcomes After Myocardial Infarction: The HeartStrong Randomized Clinical Trial,” JAMA Internal Medicine 177, no. 8 (August 2017): 1093-1101.
15. Vallor, Technology and the Virtues, 161.
16. B. Lamm, H. Keller, J. Teiser, et al., “Waiting for the Second Treat: Developing Culture-Specific Modes of Self-Regulation,” Child Development 89, no. 3 (June 2018): e261-e277.
17. T.W. Watts, G.J. Duncan, and H. Quan, “Revisiting the Marshmallow Test: A Conceptual Replication Investigating Links Between Early Delay of Gratification and Later Outcomes,” Psychological Science 29, no. 7 (May 2018): 1159-1177.
18. A. Duckworth and J.J. Gross, “Self-Control and Grit: Related but Separable Determinants of Success,” Current Directions in Psychological Science 23, no. 5 (October 2014): 319-325.
19. Vallor, Technology and the Virtues, 163.
20. Friedland and Cole, “From Homo-Economicus to Homo-Virtus.”
21. K. Aquino and A. Reed II, “The Self-Importance of Moral Identity,” Journal of Personality and Social Psychology 83, no. 6 (December 2002): 1423-1440.
22. S. Bowles, The Moral Economy: Why Good Incentives Are No Substitute for Good Citizens (New Haven, CT: Yale University Press, 2016).
23. K. Hwang and H. Kim, “Are Ethical Consumers Happy? Effects of Ethical Consumers’ Motivations Based on Empathy Versus Self-Orientation on Their Happiness,” Journal of Business Ethics 151, no. 2 (2018): 579-598.
24. J. Sadowski, S.G. Spierre, E. Selinger, et al., “Intergroup Cooperation in Common Pool Resource Dilemmas,” Science and Engineering Ethics 21, no. 5 (October 2015): 1197-1215.
25. R.B. Cialdini, C.A. Kallgren, and R.R. Reno, “A Focus Theory of Normative Conduct: A Theoretical Refinement and Re-Evaluation of the Role of Norms in Human Behavior,” Advances in Experimental Social Psychology 24 (December 1991): 201-234.
26. J. Sadowski, T.P. Seager, E. Selinger, et al., “An Experiential, Game-Theoretic Pedagogy for Sustainability Ethics,” Science and Engineering Ethics 19, no. 3 (September 2013): 1323-1339.
27. A. Gopaldas, “Marketplace Sentiments,” Journal of Consumer Research 41, no. 4 (Dec. 1, 2014): 995-1014.
28. Hwang and Kim, “Are Ethical Consumers Happy?”
29. L. Columbus, “2017 Roundup of Internet of Things Forecasts,” Forbes, Dec. 10, 2017, www.forbes.com.
i. Kahneman, Thinking, Fast and Slow.
ii. M.R. Calo, “Against Notice Skepticism in Privacy (and Elsewhere),” Notre Dame Law Review 87, no. 3 (October 2013): 1027-1072.