Responsible AI / Panelist

Johann Laux

Oxford Internet Institute

United Kingdom

Johann Laux is a British Academy postdoctoral fellow at the Oxford Internet Institute, studying the legal, social, and governmental implications of emerging technologies like artificial intelligence. He is currently researching how to implement meaningful human oversight of AI. Previously, Laux was an Emile Noël fellow at New York University’s School of Law and a program affiliate with the school’s Digital Welfare State and Human Rights Project. He has a master’s degree in governance from the London School of Economics and Political Science and a master’s and doctorate in law from the University of Hamburg. He has also studied philosophy at King’s College London and was a visiting researcher at the University of California, Berkeley School of Law.

Voting History

Statement Response
Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months. Disagree “Organizations will find themselves facing significant uncertainty about what the AI Act demands of them. They will struggle to translate the AI Act’s legal requirements into executable calls to action for providers and deployers of AI systems. For example, which AI-enabled techniques count as “manipulative” or “deceptive” is left vague in the legal text. As standardization and the proliferation of best practices under the AI Act progress, such difficulties will recede. Until then, it is important for organizations to use the two-year grace period between the AI Act’s entry into force and its applicability to seek concrete guidance on how the provisions of the AI Act relevant to their operations will be interpreted.

Organizations will, however, face uncertainty not only about the interpretation of the law but also about how the AI systems they develop or use will behave in the future. Anticipating which risks may materialize will therefore be difficult and will further impede their ability to meet the requirements of the AI Act.”