Johann Laux

Oxford Internet Institute

United Kingdom

Johann Laux is a British Academy postdoctoral fellow at the Oxford Internet Institute, studying the legal, social, and governmental implications of emerging technologies such as artificial intelligence. His current research focuses on how to implement meaningful human oversight of AI. Previously, Laux was an Emile Noël fellow at New York University’s School of Law and a program affiliate with the school’s Digital Welfare State and Human Rights Project. He holds a master’s degree in governance from the London School of Economics and Political Science and a master’s and doctorate in law from the University of Hamburg. He has also studied philosophy at King’s College London and was a visiting researcher at the University of California, Berkeley, School of Law.

Voting History

Statement: There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization.
Response: Disagree

“For global companies, uncertainty about what RAI requires in different regions of the world is still rather high. Companies already know quite well which issues are relevant internationally, whether it is fairness, safety, or the potential replacement of human labor. However, the current standardization process in the EU shows just how difficult alignment between EU and international standards can be when it comes down to choosing concrete technical and managerial solutions.”
Statement: Companies should be required to make disclosures about the use of AI in their products and offerings to customers.
Response: Agree

“Companies should most certainly have to disclose whenever a consumer is interacting with an AI system, such as a generative AI-powered chatbot. Consumers should not be misled into believing that they are talking to or chatting with a person when in fact they are not. Otherwise, consumers may trust the interaction more than they would if they knew they were interacting with an AI.

Such disclosures should be made in plain English, not hidden away in terms and conditions, and should include whether the data collected in the interaction will be used to further train the AI system.”
Statement: Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months.
Response: Disagree

“Organizations will find themselves facing significant uncertainty about what the AI Act demands of them. They will struggle to translate the AI Act’s legal requirements into actionable steps for providers and deployers of AI systems. For example, which AI-enabled techniques count as ‘manipulative’ or ‘deceptive’ is left vague in the legal text. As standardization and the proliferation of best practices under the AI Act progress, such difficulties will recede. Until then, it is important for organizations to use the two-year grace period between the AI Act’s entry into force and its applicability to seek concrete guidance on how the provisions relevant to their operations will be interpreted.

However, organizations will face uncertainty not only about the interpretation of the law but also about how the AI systems they develop or use will behave in the future. Anticipating which risks may materialize will therefore be difficult, further impeding their ability to meet the requirements of the AI Act.”