The Three Obstacles Slowing Responsible AI
Many organizations commit to principles of AI ethics but struggle to incorporate them into practice. Here’s how to bridge that gap.

Many organizations intend to check their AI systems for fairness, accountability, and transparency but struggle to implement responsible AI (RAI) processes due to structural and cultural obstacles or a lack of commitment. The authors recommend five strategies to get RAI efforts on track: structure ownership at the project level, hardwire ethics into everyday procedures, align ethical risk with business risk, reward responsible behavior, and practice ethical judgment, not just compliance.
In October 2023, New York City released its AI action plan, publicly committing to the responsible and transparent use of artificial intelligence. The plan included guiding principles — accountability, fairness, transparency — and the creation of a new role to oversee their responsible implementation: the algorithm management and policy officer.
But by early 2024, New York’s AI ambitions were under scrutiny. It turned out that a chatbot deployed to provide regulatory guidance to small-business owners was prone to giving misleading advice. Reports revealed that the system was misinforming users about labor laws and licensing requirements and occasionally suggesting actions that could lead to regulatory violations.1 Observers questioned not only the technical accuracy of the system but also the city’s governance protocols, oversight mechanisms, and deployment processes. The episode became a cautionary tale, not only for public institutions but for any organization deploying AI tools at scale.
This case represents just one example of a broader pattern. Across industries, companies have embraced the language of responsible AI (RAI), emphasizing fairness, accountability, and transparency. Yet implementation often lags far behind ambition as AI systems continue to produce biased outcomes, defy interpretability and explainability requirements, and trigger backlash among users.
In response, regulators have introduced a wave of new policies, including the European Union’s AI Act, Canada’s Artificial Intelligence and Data Act, and South Korea’s updated AI regulations — all placing new pressures on organizations to operationalize transparency, safety, and human oversight.
Still, even among companies that understand the potential hazards, progress remains uneven, leaving them at risk of embedding errors or bias into their processes or of committing unexpected, and possibly serious, ethical violations at scale.
Mind the RAI Gaps
Through interviews with more than 20 AI leaders, ethics officers, and senior executives across several industries, including technology, financial services, health care, and the public sector, we explored the internal dynamics that shape RAI initiatives. We found that in some cases, RAI frameworks serve as nothing more than reputational window dressing, and organizations simply lack commitment to operationalizing recommended practices. But we also uncovered structural and cultural obstacles that can prevent organizations from translating their principles into sustainable practices. In particular, we identified three recurring gaps; below, we explore each of them and propose a set of practical strategies for bridging them.
1. The accountability gap.
References
1. C. Lecher, “NYC’s AI Chatbot Tells Businesses to Break the Law,” The Markup, March 29, 2024, https://themarkup.org.
2. R. Titah, “How AI Skews Our Sense of Responsibility,” MIT Sloan Management Review 65, no. 4 (summer 2024): 18-19.
3. C. O’Neil, J. Appel, and S. Tyner-Monroe, “Auditing Algorithmic Risk,” MIT Sloan Management Review 65, no. 4 (summer 2024): 30-37.
4. J. Friedland, D.B. Balkin, and K.O.R. Myrseth, “The Hazards of Putting Ethics on Autopilot,” MIT Sloan Management Review 65, no. 4 (summer 2024): 9-11.