Responsible AI / Panelist

Rainer Hoffmann

EnBW

Germany

Rainer Hoffmann is the chief data officer at EnBW, where he leads the scaling of data and AI initiatives to help shape a sustainable energy future. Concurrently, he is an adjunct lecturer at the Karlsruhe Institute of Technology (KIT), where he teaches the course Responsible Artificial Intelligence. Previously at EnBW, he was accountable for the Data & Analytics Excellence program, worked as a data scientist in energy trading, and established algorithmic trading. Hoffmann is an industrial engineer and holds a doctorate in stochastic optimization from KIT and a professional certificate in AI from MIT.

Voting History

Statement: Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months.

Response: Disagree

“Should the AI Act be enacted this year, only select provisions will become effective over the following 12 months. Notably, Article 5, which outlines prohibited AI practices, will warrant particular attention in the coming year. I expect that many organizations will undertake reviews to determine whether they are currently utilizing any AI systems that fall under these prohibitions or whether they have plans to implement such systems. Moreover, I expect that these organizations will implement safeguards or compliance mechanisms to prevent the deployment of any AI systems that violate these restrictions in the future.

However, full compliance with the AI Act’s requirements within a single year seems impossible, particularly for large organizations with extensive AI deployments. Such entities face multiple challenges under the act: achieving transparency across myriad AI use cases organizationwide, discerning which systems fall under the act’s purview, interpreting and adapting to still-ambiguous requirements, and creating an oversight mechanism to consistently evaluate every new AI introduction for conformity.”
Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.

Response: Disagree

“My insights into the European AI ecosystem show that companies have yet to prioritize risk management specifically for AI systems. Traditionally, risk management has been viewed as a subset of the responsibilities held by product managers and developers within their daily tasks. Additionally, while information security risks are considered, these represent merely a fraction of the potential risks associated with AI.

However, with the introduction of the European AI Act, which mandates risk management for high-risk applications, organizations are beginning to acknowledge the importance of AI-related risk considerations. Despite this progress, the act’s focus on high-risk applications alone leaves me skeptical that AI-related risks will be comprehensively addressed across all applications.”