Responsible AI / Panelist

Renato Leite Monteiro

e&; Oxford Internet Institute

United Arab Emirates

Renato Leite Monteiro is the vice president for privacy, data protection, and AI at e&, formerly known as Etisalat, a global technology company headquartered in the United Arab Emirates. He was previously Twitter/X’s global data protection officer and global head of privacy. Monteiro is also a visiting fellow at the Oxford Internet Institute (OII), where he works to foster collaboration among industry, academia, government, and civil society on accountability, transparency obligations, and the right to explanation in AI systems, the focus of his Ph.D. research.

Monteiro has provided several testimonies to the Brazilian Federal Senate regarding the Brazilian bill on AI, focusing mainly on transparency obligations and the right to explanation. He also actively contributed to the discussions and drafting of Brazil’s General Data Protection Law (LGPD), as well as to privacy, data protection, and AI regulation in the EU, Asia, Latin America, and the U.S.

Monteiro cofounded Data Privacy Brasil, Brazil’s leading privacy and data protection research center. He has been recognized as a global thought leader in privacy and was selected for Global Data Review’s 40 under 40.

Voting History

Statement: General-purpose AI producers (e.g., companies like DeepSeek, OpenAI, Anthropic) can be held accountable for how their products are developed.

Response: Agree

“General-purpose AI producers should be held accountable for how their products are developed, but the extent of this accountability varies across jurisdictions. Some regions have implemented AI regulations, privacy laws, or product liability frameworks that impose obligations on developers, ensuring that they mitigate risks and address potential harms. However, in the absence of such laws, accountability mechanisms are often fragmented or nonexistent.

However, responsible AI development demands a balanced approach where regulation supports both innovation and safety. Companies should implement robust safeguards and risk-mitigation strategies during development, regardless of current legal requirements. This includes thorough testing, bias detection, safety measures, and transparency about system capabilities and limitations. The regulatory framework must evolve to establish clear accountability standards while maintaining sufficient flexibility to accommodate technological advancement. The goal should be fostering an environment where AI innovation can flourish within boundaries that ensure public safety, ethical development, and responsible deployment of these transformative technologies.”