Responsible AI / Panelist

Idoia Salazar

OdiseIA

Spain

Idoia Salazar, Ph.D., is founder and president of the Observatory of the Social and Ethical Impact of Artificial Intelligence (OdiseIA) and a professor at CEU San Pablo University. She is currently working on Spain’s AI certification initiative and on the design and development of its AI Regulatory Sandbox. She is principal investigator of CEU’s Social Impact of Artificial Intelligence and Robotics research group, which focuses on a multicultural approach to AI ethics. Salazar has written four books and numerous articles about the impact of AI. She is a founding member of the journal AI and Ethics, a member of the Global AI Ethics Consortium, and a member of the global advisory board of the International Group of Artificial Intelligence.

Voting History

Statement: Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months.
Response: Neither agree nor disagree

“The first phase of the AI Act, corresponding to prohibited AI systems, comes into force in six months; generative AI systems follow in 12 months; and requirements for most high-risk systems apply in two years. I agree that companies that want to operate in Europe have to start thinking about aligning their policies with the AI Act, but they have two years to do so.”
Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.
Response: Disagree

“I disagree with the statement that organizations are adequately expanding their risk management capabilities to address AI-related risks. While it’s true that some leading technology companies and highly regulated sectors have begun to implement AI-specific risk management frameworks and practices, the reality is that most organizations are still in the early stages of understanding and adapting to the unique challenges presented by AI.

The rapid expansion of this technology, along with its increasing integration into various business and social operations, often surpasses the current risk management capabilities of organizations. This is due to several factors, including the lack of widely accepted standards for AI risk assessment, a shortage of experts knowledgeable in both AI and risk management, and the underestimation of the complexity and potential impact of the risks associated with implementing AI systems.

In conclusion, although there are advancements heading in the right direction, there is still much to be done for organizations to expand their risk management capabilities to a level that effectively addresses the risks associated with AI.”