Organizations Face Challenges in Timely Compliance With the EU AI Act

A panel of experts weighs in on whether organizations are positioned to meet the requirements of the EU AI Act.


For the third year in a row, MIT Sloan Management Review and Boston Consulting Group have assembled an international panel of AI experts that includes academics and practitioners to help us gain insights into how responsible artificial intelligence (RAI) is being implemented in organizations worldwide. Last year, we published a report titled “Building Robust RAI Programs as Third-Party AI Tools Proliferate.” This year, we continue to examine organizational capacity to address AI-related risks in a landscape that includes the first comprehensive AI law on the books — the European Union’s AI Act.

In our previous post, we asked our experts about organizational risk management. This month, we asked them to react to the following provocation: Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months. Our experts are divided: While nearly half (47%) disagree or strongly disagree with the statement, a third (33%) neither agree nor disagree, and only a fifth (20%) agree or strongly agree.

Below, we share insights from our panelists and draw on our own observations and experience working on RAI initiatives to offer recommendations on how organizations might approach compliance with the AI Act’s requirements as they’re phased in over the next year.

Organizations May Struggle With the Timeline for Compliance

Although the AI Act’s requirements will be phased in over time, the timeline for compliance is nevertheless aggressive. As OdiseIA’s Idoia Salazar explains, “The first phase of the AI Act corresponding to prohibited AI systems comes into force in six months, generative AI systems in 12 months, and requirements for most high-risk systems in two years,” giving organizations up to two years to comply. But Richard Benjamins of OdiseIA observes that “two years is just about the minimum an organization needs to prepare for the AI Act, and many will struggle to achieve this.” And EasyJet’s Ben Dias points out that “some basic obligations will start to apply in 2024, which some organizations may not be prepared for.” Automation Anywhere’s Yan Chow concurs, asserting that “the six-month timeline for high-risk product compliance will pose challenges for developers.”

Some panelists anticipate that organizations of all sizes will face challenges. Yasodara Cordova of Unico IDtech contends that, “given the complexity of the compliance process and the intricacies involved in navigating the regulatory landscape, a time frame of 12 months may seem insufficient for many organizations to fully prepare and implement the necessary measures … particularly those organizations of medium to smaller sizes.” At the same time, Rainer Hoffmann of EnBW argues that “full compliance with the AI Act’s requirements within a single year seems impossible, notably for large organizations with extensive AI deployments.” Such companies may struggle with “achieving transparency across myriad AI use cases organizationwide, discerning which systems will be under the act’s purview, interpreting and adapting to still-ambiguous requirements, and creating an oversight mechanism to consistently evaluate every new AI introduction for conformity,” he adds.

Disagree

“The EU AI Act represents a groundbreaking initiative to create a detailed regulatory framework for artificial intelligence systems. Yet, the ambitious timeline for compliance within the next 12 months presents considerable challenges for organizations of all sizes.”

Rohan Rajput
Headspace

Although there is an aggressive timeline for compliance, there will be a grace period for enforcement, several experts note. The Ada Lovelace Institute’s Andrew Strait observes that “the European AI Office has been clear that it will take an approach akin to GDPR [General Data Protection Regulation] compliance, where it provided a grace period before enacting fines and taking enforcement action.” And Harman’s Tom Mooney agrees: “It took years for GDPR to ramp up its enforcement mechanism, which, if something similar plays out with the AI Act, could buy companies more time to navigate a compliance approach.” That’s why “it is important for organizations to use the two-year grace period between the AI Act’s entry into force and its applicability, and search for concretizing information on the interpretation of the AI Act relevant to their operations,” argues Johann Laux of the Oxford Internet Institute.

Putting the Act Into Practice Will Require Considerable Expertise

Several experts cite interpretation of the act’s requirements as a hurdle to timely compliance. Aboitiz Data Innovation’s David R. Hardoon observes that “the readiness of organizations to meet the requirements of the EU AI Act will depend on the clarity of the requirements as well as definitions and penalties involved.” Cordova adds, “The AI Act introduces novel concepts. … Interpreting and translating these requirements into actionable engineering features is likely to be a significant challenge.” Laux agrees that organizations will face “significant uncertainty about what the AI Act demands of them [and] struggle to translate the AI Act’s legal requirements into executable calls to action for providers and deployers of AI systems.” Harvard Business School’s Katia Walsh contends that “one of the hardest requirements to meet would be that for transparency of AI algorithms.” Harvard Business School professor Ayelet Israeli cites risk classification as another example: “There may be systems we consider limited/minimal risk because we do not yet understand their consequences, which may make them potentially high risk.”

Disagree

“The EU AI Act calls on us to do something very useful — create guardrails for risky uses of AI. But we still have a long way to go to figure out what that means in practice, in real products, in actual systems. My guess is it takes the better part of a decade to sort that through on the ground.”

Mark Surman
Mozilla Foundation

AI competence and expertise will be critical to translating the act’s ambiguous requirements into practice. As GovLab’s Stefaan Verhulst asserts, “Understanding and complying with such a framework that is massively complicated requires not only legal and ethical expertise but also the ability to integrate these considerations into the AI systems themselves.” Teddy Bekele of Land O’Lakes adds that “organizations currently exhibit varying levels of maturity not only in their technological capabilities but also in their employees’ understanding of AI, which will influence their readiness to meet the requirements of the EU AI Act.” And ForHumanity’s Ryan Carrier agrees that implementation requires “substantial expertise,” arguing there are “not enough qualified individuals in the world” who are up to the task. Bekele predicts that, as a result, “the journey toward full compliance is likely to be iterative and complex, evolving as organizations better understand both the technology and the legal landscape.”

A Foundation of Governance Can Help With Compliance

Organizations with a foundation of governance in place will likely fare better in meeting the AI Act’s timeline for compliance. Zeiss’ Simone Oldekop argues that the readiness of an organization depends on “how well [it] is already prepared in the underlying compliance fields, such as privacy and cybersecurity, and is able to build upon existing governance structures.” For that reason, Verhulst cautions that “companies that are already behind in their AI journeys may find it daunting to navigate these new regulations.” In contrast, Var Shankar of the Responsible AI Institute contends that “leading organizations already have the foundations in place to meet the compliance requirements of the EU AI Act — such as governance, definitions, and tooling,” particularly “in highly regulated industries.” Simon Chesterman of the National University of Singapore agrees that “established players with compliance teams should be fine.”

Neither agree nor disagree

“Organizations that have already invested in responsible AI programs will likely have an advantage for compliance, having spent previous years grappling with ethical implications and quandaries.”

David Polgar
All Tech Is Human

Still, that doesn’t mean it will be easy for any organization. Headspace’s Rohan Rajput anticipates that even “larger entities, while benefiting from more robust governance structures, will still need to undertake significant modifications to comply with the detailed stipulations of the act, such as improving data quality, conducting algorithmic audits, and implementing continuous monitoring.” And Chesterman predicts that implementation “will be significantly messier than the entry into force of the GDPR, which at least related to a reasonably well-defined set of activities.” But Triveni Gandhi of Dataiku offers cause for hope, noting that “more and more organizations are putting the right structures, processes, and tooling into place to support adherence to the upcoming regulation.”

Recommendations

In sum, for organizations seeking to meet the EU AI Act’s requirements as they’re phased in, we recommend the following:

1. Determine your pace of compliance. Organizations will need to weigh the urgency of complying with the forthcoming regulations against the transition period before regulatory enforcement ramps up and the amount of work necessary to reach compliance. Stakeholders will likely hold different views on the right pace, based on how they perceive the two-year transition period, during which the implementation requirements are staggered. Building consensus and a shared sense of purpose among stakeholders is therefore crucial to aligning on compliance deadlines and sustaining the pace of implementation needed to meet them. Organizations whose steering committees extend beyond the compliance function may be more effective at striking this balance than those in which the compliance function is the sole arbiter, handing down edicts to the rest of the organization.

2. Implement a responsible AI program. The aggressive timeline for compliance, coupled with the uncertainty of the AI Act’s requirements and the expertise needed to translate them, underscores the benefits of having an RAI foundation in place. Governance, in this case, does not mean the compliance function alone: Government liaisons, technologists, data privacy experts, legal professionals, and responsible AI stakeholders all need to be involved to ensure timely and effective responses to the regulations. Create a process to determine the governance structures, processes, and tools that are appropriate for your organization, whether that means expanding existing governance mechanisms, developing new ones, or both. Whatever its form, this process will likely produce valuable insights about how your AI strategy reflects and advances your company’s overall purpose and values.

3. Educate AI teams. Complying with AI regulations requires AI expertise. But AI expertise is already in short supply, expensive, and typically focused on value-creating opportunities, so reallocating it to regulatory compliance may be viewed as a costly diversion of resources. The trade-off may be smaller than it seems, however: Educating AI teams about the law and its implications enables AI experts to build regulatory compliance into their AI solutions by design, which will advance the company’s RAI agenda and enrich its ethical approach. Adopting a mindset that recognizes that AI regulations are both externally imposed and internally valuable can expand how organizations view their investments in regulatory compliance. A sketch of what “compliance by design” might look like in practice follows below.
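For illustration only, here is a minimal sketch (in Python) of one way a team might gate model releases on the presence of the kind of documentation the act’s transparency themes contemplate. The ModelRecord fields, the gating function, and the example values are our own illustrative assumptions, not language or requirements drawn from the act itself:

```python
from dataclasses import dataclass, fields

@dataclass
class ModelRecord:
    """Hypothetical release checklist; field names are illustrative,
    not taken from the AI Act's text."""
    intended_purpose: str       # what the system is for
    risk_tier: str              # e.g., "minimal", "limited", "high"
    training_data_summary: str  # provenance and known limitations of the data
    evaluation_report: str      # link to accuracy/robustness test results
    human_oversight_plan: str   # how humans can intervene or override

def ready_for_release(record: ModelRecord) -> bool:
    """Block release if any required documentation field is empty."""
    missing = [f.name for f in fields(record)
               if not getattr(record, f.name).strip()]
    if missing:
        print(f"Release blocked; missing documentation: {missing}")
        return False
    return True

# Example: an incomplete record fails the gate.
draft = ModelRecord(
    intended_purpose="CV screening assistant",
    risk_tier="high",
    training_data_summary="",  # not yet written, so release is blocked
    evaluation_report="reports/eval-2024-06.md",
    human_oversight_plan="Recruiter reviews every recommendation",
)
assert not ready_for_release(draft)
```

In practice, such a checklist would be derived from counsel’s reading of the act’s final text and the forthcoming harmonized standards rather than from hardcoded field names.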

4. Build testing and evaluation capabilities. A key aspect of the AI Act’s requirements is ensuring that AI solutions are adequately tested and evaluated. This includes evaluating underlying data and testing AI systems for accuracy, robustness, harms, and cybersecurity vulnerabilities. AI testing and evaluation, particularly of generative AI systems, remains a nascent area, and many organizations will lack the needed expertise, tools, and processes. Organizations should begin taking steps to upskill AI teams, engage additional expertise, and identify appropriate tools and processes as part of their broader RAI program implementations; the sketch below suggests a simple starting point.
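As one concrete starting point, teams can automate basic accuracy and robustness checks in their build pipelines. The sketch below (Python with scikit-learn and NumPy) evaluates a classifier’s accuracy and its prediction stability under small input perturbations; the data set, thresholds, and noise-based perturbation scheme are illustrative assumptions on our part, not prescriptions from the act:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in data and model; in practice, substitute the system under test.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# Accuracy check against an illustrative (not regulatory) threshold.
acc = accuracy_score(y_test, model.predict(X_test))

# Robustness check: predictions should stay stable when inputs are
# perturbed with small Gaussian noise (1% of each feature's std dev).
rng = np.random.default_rng(0)
noise = rng.normal(scale=0.01 * X_test.std(axis=0), size=X_test.shape)
stability = np.mean(model.predict(X_test) == model.predict(X_test + noise))

print(f"accuracy={acc:.3f}, stability under perturbation={stability:.3f}")
assert acc >= 0.90, "accuracy below illustrative threshold"
assert stability >= 0.95, "predictions unstable under small perturbations"
```

A real evaluation suite would extend this pattern to harm- and bias-specific metrics, adversarial testing, and cybersecurity checks, and would version the results as evidence for conformity assessments.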
