Responsible AI / Panelist

Carolina Aguerre

Universidad Católica del Uruguay

Uruguay

Carolina Aguerre is an associate professor at Universidad Católica del Uruguay, co-director of the Center for Studies in Technology and Society (CETyS) at Universidad de San Andrés (Argentina), and director of DiGI, a program focused on internet governance in Latin America. Her research examines the intersection of digital technology governance and global, regional, and national policies. She previously led CETyS's GuIA.ai initiative on AI ethics, governance, and policies in Latin America and was an associate researcher at the University of Duisburg-Essen's Centre for Global Cooperation. She holds a doctorate in social science from the University of Buenos Aires and a master's degree from the University of London's Goldsmiths College.

Voting History

Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.

Response: Disagree

"Organizations are not yet sufficiently expanding risk management capabilities to address AI-related risks, particularly in emerging economies. First, there is not enough awareness within organizations of the risks involved in deploying some AI systems; that is, demand for these systems is not yet articulated comprehensively enough to include a risk assessment that covers legal and ethical risks as well as technical ones. Second, when the demand side lacks the capability or awareness to address these risks, it becomes essential that suppliers of AI systems comprehensively address the risks in the systems they are developing.

A more comprehensive approach to risk management also incorporates a systemic view: When an organization deploys an AI system, is it considering only efficacy and efficiency, or can it also address the wider societal concerns these practices raise (such as automation and job displacement, climate-related consequences, and surveillance)?"