Responsible AI / Panelist

Yasodara Cordova

Unico IDtech

Brazil

Yasodara (Yaso) Cordova is chief of privacy and digital identity research at Unico IDtech and a distinguished member of the investments committee of the Co-Develop fund. She has served as a senior fellow at the Belfer Center at the Harvard Kennedy School, a fellow at the Berkman Klein Center for Internet & Society, and a Mason fellow at the Ash Center for Democratic Governance, all at Harvard University. She has also contributed to the World Bank’s governance sector as an innovation consultant and was CEO of Operation Serenata de Amor, an anti-corruption AI platform in Brazil. Cordova is a two-time recipient of the Vladimir Herzog Award in Journalism and Human Rights.

Voting History

Statement Response
Statement: There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization.
Response: Disagree

“I do not fully agree with the statement that there is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement responsible AI requirements across organizations. Although global companies may be subject to various international standards, local regulations often differ significantly. This inconsistency can lead companies to prioritize compliance with local laws over adopting global standards. As a result, companies might react to regulatory requirements rather than proactively implementing RAI practices. Furthermore, in regions where local laws or discussions about RAI are lacking, global companies may operate in ways that are misaligned with global standards until regulations are established.

Implementing RAI requires considerable effort and resources, and without a unified international framework that actually impacts revenue, achieving effective RAI implementation across diverse jurisdictions remains challenging.”
Statement: Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months.
Response: Disagree

“The AI Act introduces novel concepts within its regulatory framework. Interpreting and translating these requirements into actionable engineering features is likely to be a significant challenge. Finding individuals who possess a deep understanding of both AI technology and regulatory compliance may prove difficult in the current job market.

Given the complexity of the compliance process and the intricacies involved in navigating the regulatory landscape, a time frame of 12 months may seem insufficient for many organizations to fully prepare and implement the necessary measures. The scale and complexity of the task ahead suggest that organizations will likely require more time and resources to effectively adapt to the requirements of the EU AI Act.

If incentives are provided to new companies or startups that specialize in navigating the regulatory challenges posed by the EU AI Act, the market may gradually adapt. However, even with such incentives, the process of adaptation is likely to take longer than 12 months. It will pose a significant challenge for many organizations, particularly medium-size and smaller ones.”
Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.
Response: Disagree

“Risk management is a domain expanding not only in capacity but also in scope. Organizations are maturing due to evolving regulations in various countries that bring financial and reputational risks closer to liability. Cybersecurity is adapting to privacy regulations — for example, giving rise to new disciplines such as privacy engineering. AI-related risks are likely to take a while to be absorbed, as happened with privacy. Progress in addressing AI-related risks is not keeping up with the required pace.

I mention privacy because many organizations, particularly in Latin America, are inadequately implementing proper data governance to comply with related regulations, despite this being a global issue. While some organizations excel in data integrity and quality, which is essential for mitigating AI-related harms, most are comfortable with extensive, unstructured data collection and sharing. Currently, minimal responsibility and diffuse liability prevail in this complex data flow ecosystem. If it required nearly a decade of regulations for organizations to begin enhancing their risk management capabilities for privacy, I am not optimistic that the situation will be different with AI.”