Responsible AI / Panelist

Stefaan Verhulst
United States

Stefaan G. Verhulst, Ph.D., focuses on using advances in science and technology, including data and AI, to improve decision-making and problem-solving. He has cofounded several research organizations, including the Governance Laboratory (GovLab) at New York University and The Data Tank in Brussels. He is also editor of the open-access journal Data & Policy and has served as a member of several expert groups on data and technology, including the European Commission’s expert group on business-to-government data sharing and its high-level expert group on using private-sector data for official statistics. He is the author of several books and has been invited to speak at international conferences, including TED and the United Nations World Data Forum.

Voting History

Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.
Response: Disagree

“To mitigate AI risks, organizations have started to enhance cybersecurity, update AI systems, and adopt adversarial training. Some are now also conducting bias audits, utilizing diverse data sets, and engaging multidisciplinary teams for more ethical AI. Additionally, the development of explainable AI and the integration of human oversight into AI systems are increasingly being considered to ensure transparency and dependability.

However, despite these efforts, a critical AI risk that is mostly overlooked or neglected is the need to secure a social license for AI usage. This involves aligning AI applications with societal values and expectations, particularly when repurposing data initially collected for different AI uses. The absence of such consideration can lead to public discontent and erode trust, undermining an organization’s broader social license to operate. Increasing participatory engagement with various stakeholders, including employees, customers, and experts, is essential to navigating these challenges effectively.”