Carolina Aguerre is an associate professor at Universidad Católica del Uruguay, co-director of the Center for Studies in Technology and Society (CETyS) at Universidad de San Andrés (Argentina), and director of DiGI, a program focused on internet governance in Latin America. Her research examines the intersection of digital technology governance and global, regional, and national policies. She previously led CETyS's GuIA.ai initiative on AI ethics, governance, and policies in Latin America and was an associate researcher at the University of Duisburg-Essen's Centre for Global Cooperation. She holds a doctorate in social science from the University of Buenos Aires and a master's degree from Goldsmiths, University of London.
Voting History
| Statement | Response |
| --- | --- |
| There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization. **Disagree** | “Though there are relevant RAI codes and standards, there is not global consensus on which are the most safe, robust, and applicable ones. This is part of the ongoing negotiations, tensions, and differences between different countries and regions, including the view of the U.S. and its standards based on NIST’s AI Risk Management Framework, and those that are being discussed in the EU, to name but two.” |
| Companies should be required to make disclosures about the use of AI in their products and offerings to customers. **Strongly agree** | “Transparency and accountability are two of the most widely required principles of AI systems, particularly when they are embedded into different services and products. Citizens, users, and consumers need to be able to upgrade their understanding and knowledge of AI as it becomes more widespread. They also have a right to know about the components and processes of their purchases. Many products and services are labeled for sustainability, climate, fair trade, or nutritional value, and, similarly, as AI becomes involved in the decisions of corporations, users need to be aware of how these technologies are involved in the production of a good or service that relies on the classification of data — sometimes personal and/or sensitive, but always constructed from a certain viewpoint. Companies need to include transparency requirements on the use of AI systems as part of their social responsibility and ethics mandates, as including AI has widespread effects on society and consumers. The adherence to international norms, such as ISO/IEC 42001:2023, and including this certification is one way in which companies should consider implementing this.” |
| Organizations are sufficiently expanding risk management capabilities to address AI-related risks. **Disagree** | “Organizations are not yet sufficiently expanding risk management capabilities to address risks coming from AI, particularly in emerging economies. Firstly, there is not enough awareness of the risks involved in deploying some AI systems within organizations — that is, the demand for these AI systems is not yet comprehensively articulated to contemplate a risk assessment that not only takes into account technical risks but also legal and ethical risks. Secondly, how suppliers of AI systems comprehensively address the risks involved in the systems that they are developing is essential when the demand side is not sufficiently capable or aware of how to address this.<br><br>A more comprehensive approach to risk management is one that is also able to take into account a risk assessment that incorporates the systemic view: In other words, when an organization deploys an AI system, is it only considering efficacy and efficiency concerns, or is it also able to address wider societal concerns from these practices (such as automation and job replacement, climate-related consequences, and surveillance)?” |