GenAI Tools and Decision-Making: Beware a New Control Trap

Generative AI tools promise to help leaders make better decisions. But they may also cause trouble by nudging leaders toward a control-based style, research shows.

As artificial intelligence technologies develop, managers are striving to reap the benefits. Today’s generative AI tools can aid managers in strategic decision-making and assist with problem-solving in a variety of contexts, ranging from product development to employee conflicts. ChatGPT — a common GenAI tool — is even being used as a debating partner for managerial decision-making processes.

At the same time, interacting with technology as part of a decision-making or problem-solving process is fundamentally different from consulting with humans. AI systems, by design, are focused on efficiency, predictability, and data-driven solutions. This emphasis is where leaders can get into unintended trouble.

Our latest research suggests that when managers interact with GenAI tools to help make decisions, the tools may inadvertently nudge them toward a more rigid and mechanistic approach. Specifically, our study reveals that when managers used ChatGPT to assist with solving a problem related to employee behavior and working conditions, they were more likely to propose control-oriented rather than people-oriented solutions.

For decades, researchers and managerial practitioners have detailed the benefits of human-centric management approaches. Our findings caution that without proper consideration, the use of generative AI tools may risk an unintended return to a more mechanistic and control-based management style. That’s a problem, since research has established that the old command-and-control style of management doesn’t breed employee engagement or trust.

A Management Dilemma: ChatGPT Versus Human Experience

In our study, we employed an experimental design in which participants (MBA students) read a vignette and assumed the role of managers working for Amazon’s delivery division. (See “The Research.”) In the case scenario, the manager is told that drivers are not complying with the mandatory use of a phone application that monitors their driving performance and tracks behaviors such as hard braking, speeding, and phone use.

As outlined in the vignette, the drivers say they feel that the app is too invasive and sometimes unfair because it doesn’t allow them to explain the context around what the app is tracking. They express their increased stress and anxiety to the manager, citing the app’s inaccuracies and the pressure to meet strict delivery deadlines. The manager learns that because of the stress and pressure, drivers have been turning off the app and not complying with usage mandates; this noncompliance is problematic because the data from the app is required for organizational safety and efficacy reporting.

After assessing the case, study participants were asked to propose managerial solutions for this challenge. In one group, participants were instructed to consult ChatGPT to obtain insights on how to address the problem before providing solutions. In the control group, participants were asked to provide solutions to the issue based only on their own reflections and experience — and not to use ChatGPT.

Participants in both groups were asked to elaborate on three solutions to the organizational problem. Importantly, even participants who interacted with ChatGPT for insights had to provide three of their own “free-write” solutions to the managerial problem (though we, as researchers, could still see the output from their ChatGPT exchanges).

The findings were clear: When managers consulted with ChatGPT before proposing solutions, the tool intensified their focus on control and surveillance-based solutions, often at the expense of driver autonomy and well-being. Specifically, managers who engaged with ChatGPT were about two times more likely to propose control-based solutions, such as punishing drivers for not using the tracking app, adding monitoring cameras, hiring external auditors to ensure compliance, and encouraging peer reporting of transgressions.

The control group participants, on the other hand, were more likely to propose solutions related to listening to and negotiating with drivers about better working conditions and/or reassessing drivers’ workloads.

Further, when we reviewed the ChatGPT interactions, we found that even when the tool suggested more humanistic solutions during the back-and-forth with participants, those participants still proposed more solutions focused on control, monitoring, and efficiency than the control group did. This suggests that the mere use of the technology itself — and not necessarily the content of its suggestions — primes people to give control-based rather than people-centric responses.

Avoiding the ‘Control Trap’

Generative AI tools can play a beneficial role in managerial decision-making. But our findings caution that if organizations are not careful about how such tools are incorporated into managerial problem-solving, they risk harking back to the days of “scientific management” — focusing more on efficiency and control than on practices that take the people and the context into consideration.

Our research suggests that managers should take extra care with generative AI when the problem-solving issue has implications for working conditions and employee well-being. In such cases, interacting with GenAI might create psychological distance between managers and the employees involved.

Consider these key managerial takeaways from our work:

1. Beware the priming effect of generative AI tools.

Consulting with the GenAI tool shifted managers’ attention away from direct experience and human-centric thinking and primed them toward a more detached, analytical mode of problem-solving.

Thus, the use of generative AI may create an expectation that solutions to problems should be “advanced” or technology-driven, leading to a preference for control and monitoring solutions that leverage similar technological capabilities. Managers must become more conscious of their hidden biases and mindful of not only the content provided by AI tools but also the fact that using GenAI tools might shape a person’s approach to problem-solving.

Takeaway: Leaders should actively acknowledge the priming that could occur when enlisting generative AI as a problem-solving “partner” in human-centric challenges. When using GenAI for decision support in people management, they should adopt a decision framework that actively incorporates employee welfare and humanistic factors.

2. Understand that generative AI tools can breed moral disengagement.

Our results suggest that GenAI’s data-driven nature may lead managers to view employees more as data points than as individuals with unique needs and circumstances.

Indeed, for managers using GenAI tools, control-based solutions might be perceived as more straightforward to implement and manage compared with more nuanced, human-centric alternatives. This situation can foster moral disengagement — the process wherein individuals disconnect from their moral compass and values and displace responsibility for decisions onto a third-party entity rather than holding themselves accountable.

To avoid this outcome, managers must intentionally seek out employees’ firsthand perspectives on problems. Whenever possible, they should contextualize the situation from an employee’s perspective to gain insights that numbers and data alone can’t provide.

Takeaway: Keep the humans in the loop: Managers should regularly engage with employees and seek their input in order to balance GenAI-driven suggestions with insights from employees’ real-world experiences.

3. When using generative AI tools, overemphasize transparency.

Many employees, like the Amazon delivery drivers in our case study, are unaware of how AI tools influence their company’s decisions about their performance and work schedules. This lack of transparency can lead people to try to game the system. When organizations leave employees in the dark about how AI systems judge their performance, determine their schedules, or adjust their working conditions, leaders may foster an environment of mistrust and thus experience unintended consequences.

This type of communication gap can also lead to heightened stress among workers, who may feel that they are being judged by an inscrutable and potentially unfair system. Without a clear understanding of the metrics and reasoning behind AI-driven evaluations, employees will likely feel unmotivated to improve their performance. Organizations must prioritize transparency and clear communication channels about AI usage, disclose leaders’ rationales for decisions, and create mechanisms for employee feedback and appeals.

Takeaway: Leaders should implement clear communication and disclosure policies regarding when and how GenAI is being used and what data is being considered in managerial decisions.


In the years since the Industrial Revolution, organizations have expanded their thinking beyond the efficiency-minded scientific management principles that Frederick Winslow Taylor espoused. Leaders have progressively refined management approaches to balance productivity with employee well-being and satisfaction. Research has demonstrated the many performance and productivity benefits that come from human-centric management practices such as giving people increased autonomy, establishing self-managed work groups, and valuing workers’ input regarding motivation and working conditions.1 Our research shows that when managers use generative AI for decision-making, the tools can lead them down an undesired path to a control-oriented management style.

The challenge for leaders will be to harness the power of GenAI tools while staying true to what we have learned about the importance of employee autonomy, engagement, and well-being. This balance will require ongoing dialogue between managers and employees (and even GenAI tool developers) to ensure that technological advancements serve to enhance, rather than undermine, the human element in organizations.

References

1. P.E. Spector, “Perceived Control by Employees: A Meta-Analysis of Studies Concerning Autonomy and Participation at Work,” Human Relations 39, no. 11 (November 1986): 1005-1016; and T.W.H. Ng and D.C. Feldman, “Employee Voice Behavior: A Meta-Analytic Test of the Conservation of Resources Framework,” Journal of Organizational Behavior 33, no. 2 (February 2012): 216-234.
