The Hazards of Putting Ethics on Autopilot
Research shows that employees who are steered by digital nudges may lose some ethical competency. That has implications for how we use the new generation of AI assistants.
The generative AI boom is unleashing its minions. Enterprise software vendors have rolled out legions of automated assistants that use large language model (LLM) technology, like that behind ChatGPT, to offer users helpful suggestions or to execute simple tasks. These so-called copilots and chatbots can increase productivity and automate tedious manual work. But if they are not thoughtfully implemented, they risk diminishing employees’ decision-making competency, especially when ethics are at stake.
Our examination of the consequences of “nudging” techniques, which companies use to influence employees or customers to take certain actions, has implications for organizations adopting the new generation of chatbots and automated assistants. Vendors encourage companies implementing generative AI agents to tailor them in ways that increase managerial control. Microsoft, which has made copilots available across its suite of productivity software, offers a tool that enterprises can customize, allowing them to more precisely steer employee behavior. Such tools will make it much easier for companies to essentially put nudging on steroids. Based on our research into the effects of nudging, that may over time diminish individuals’ own willingness and capacity to reflect on the ethical dimension of their decisions.
AI-based nudges may be particularly persuasive, given the emerging inclination among individuals to discount their own judgments in favor of what the technology suggests. At its most pronounced, this abdication of critical thinking can become a kind of techno-chauvinistic hubris that discounts human cognition in favor of AI’s more powerful computational capacities. That’s why it will be particularly important to encourage employees to maintain a constructively critical perspective on AI output, and for managers to watch for opportunities for what we call ethical boosting: behavioral interventions that rely on mindful reflection rather than mindless reaction. Such boosts can help individuals grow in ethical competence rather than letting those cognitive skills calcify.
Digital nudges, especially in the form of salient incentives and targets, can lead to subtle motivational displacement: they obscure the ultimate aims of the team or organization and shift attention toward proximal goals. When a performance measure becomes the main objective, it ceases to function as an effective measure, a phenomenon known as Goodhart’s law. For example, copilots might be designed to nudge customer-facing workers to maintain five-star ratings by offering bonus points or financial rewards. Over time, workers may come to optimize for the rating itself, perhaps by pressuring customers for positive reviews, rather than for the underlying goal of serving customers well.