Responsible AI / Panelist

Stefaan Verhulst

GovLab

United States

Stefaan G. Verhulst, Ph.D., focuses on using advances in science and technology, including data and AI, to improve decision-making and problem-solving. He has cofounded several research organizations, including the Governance Laboratory (GovLab) at New York University and The Data Tank in Brussels. He is also editor of the open-access journal Data & Policy and has served as a member of several expert groups on data and technology, including the European Commission’s expert group on business-to-government data sharing and its high-level expert group on using private-sector data for official statistics. He is the author of several books and has been invited to speak at international conferences, including TED and the United Nations World Data Forum.

Voting History

Statement Response
Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months. Disagree “Most companies within the European Union face significant challenges in preparing to apply artificial intelligence effectively, in a manner that adds true value to their organization and to society and is fit for purpose. While sectors such as finance and health care have made more progress, many other sectors lag due to varying capabilities and resource availability. Small and medium-size enterprises, in particular, struggle with the computational and data demands and the high costs associated with implementing AI technologies. This uneven readiness is exacerbated by a lack of in-house expertise and difficulties in attracting skilled data and AI talent, which are crucial for developing and managing AI-driven projects. Consequently, these companies may find themselves unprepared not just in adopting AI but also in complying with the AI Act.

Those companies that are already behind in their AI journeys may find it daunting to navigate these new regulations. Understanding and complying with such a massively complicated framework requires not only legal and ethical expertise but also the ability to integrate these considerations into the AI systems themselves.”
Organizations are sufficiently expanding risk management capabilities to address AI-related risks. Disagree “To mitigate AI risks, organizations have started to enhance cybersecurity, update AI systems, and adopt adversarial training. Some are now also conducting bias audits, utilizing diverse data sets, and engaging multidisciplinary teams for more ethical AI. Additionally, the development of explainable AI and the integration of human oversight into AI systems are increasingly being considered to ensure transparency and dependability.

However, despite these efforts, a critical AI risk that is mostly overlooked or neglected is the need to secure a social license for AI usage. This involves aligning AI applications with societal values and expectations, particularly when repurposing for AI uses data that was initially collected for different purposes. The absence of such consideration can lead to public discontent and erode trust, undermining an organization’s broader social license to operate. Increasing participatory engagement with various stakeholders, including employees, customers, and experts, is essential to navigating these challenges effectively.”