
Bruno Bioni

Data Privacy Brasil

Brazil

Bruno Bioni is the founding director of Data Privacy Brasil, a research and education organization focused on privacy and data protection issues. He is also a member of the National Data Protection Council (CNPD), co-chair of the Inclusive Digital Transformation Task Force at T20, and a professor at ESPM and IDP-SP.

Bioni is a full member of the Advisory Group on AI, Data Protection and Misinformation of the Brazilian Superior Electoral Court (TSE). In 2022-23, he served on the Federal Senate Commission of Jurists responsible for proposing AI regulations for Brazil. He holds a doctorate in commercial law and a master’s degree in civil law from the University of São Paulo Law School.

Bioni has studied at the European Data Protection Board (EDPB) and the Department of Personal Data Protection of the Council of Europe (CoE), and he has been a visiting researcher at the Centre for Law, Technology and Society at the University of Ottawa’s Faculty of Law.

Bioni is the author of the books “Personal Data Protection: The Role and Limits of Consent” and “Regulation and Data Protection: The Principle of Accountability.” He was also a member of the Committee on Digital Integrity and Transparency Studies on Internet Platforms of the Superior Electoral Court (TSE).

Voting History

Statement: Effective human oversight reduces the need for explainability in AI systems.
Response: Strongly disagree

“Explainability and human oversight constitute complementary and intersecting safeguards within AI governance frameworks. Their interrelation, however, does not render them mutually exclusive, nor does the presence of one negate or diminish the relevance of the other. On the contrary, particularly in high-risk contexts, these mechanisms are intended to be integrated and mutually reinforcing. In this regard, the explainability of a system and its decision-making processes frequently serves as an enabler of effective human oversight. In fact, lacking explainability by design can severely compromise both meaningful human intervention and what we increasingly refer to as informational due process in automated decision-making.”
Statement: General-purpose AI producers (e.g., companies like DeepSeek, OpenAI, Anthropic) can be held accountable for how their products are developed.
Response: Agree

“Artificial intelligence is not a static tool but rather a dynamic and adaptable system that, throughout its life cycle, can take on different characteristics in response to interventions by various actors. The so-called foundational models, which undergo goal-oriented refinement processes (fine-tuning), illustrate this complexity. In this context, assigning civil liability to AI developers cannot be reduced to a binary or simplistic approach, especially since different agents may exert varying degrees of influence over the system’s design and behavior. Liability should instead be analyzed in light of principles such as the duty of care, the foreseeability of risks, and the implementation of appropriate technical and legal safeguards — a precautionary framework proportional to the degree of control and influence each actor has over the system. Measures such as algorithmic auditing and transparency in development processes are essential to strengthening public scrutiny and chiefly ensuring prevention rather than compensation for the damages caused by AI.”
Statement: There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization.
Response: Disagree

“While it is true that there are noteworthy international initiatives regarding AI codes of conduct, such as those promoted by the G7, there is also a noticeable fragmentation in the global governance conversation. From the OECD to UNESCO and the UN (with its AI high-level panel and Global Digital Compact), and extending to the G20, these multilateral policy forums play a critical role in setting agendas, implementing principles, and facilitating information sharing. However, this fragmented landscape impacts both the content and the movement toward global convergence in governance, particularly in framing and enforcing AI codes of conduct. To address this, the T20 has proposed the creation of a D20 — a pivotal coordination point for discussions on data governance, a fundamental component of AI governance.”
Statement: Companies should be required to make disclosures about the use of AI in their products and offerings to customers.
Response: Strongly agree

“As a rule, transparency must be seen as a key and transversal obligation, regardless of the level of AI risk. By combining already existing laws (e.g., data protection, consumer protection, labor laws, and so forth) with what has been proposed in AI hard and soft laws, transparency is a basic component to trigger effective governance and thereby avoid opaqueness in the development and adoption of such technology. Even so, the level of information and how it is communicated must always be contextual, considering the relationship at stake. AI could serve for automated stock management or for assessing eligibility for welfare programs, but the risks involved and those who will be impacted are very different. Consequently, different transparency strategies must be taken into account. In this sense, there is already a substantive level of enforcement in which regulators are demanding effective transparency. Such a qualifier is key; otherwise, there is no accountability, since there is no oversight.”
Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.
Response: Disagree

“From environmental to food safety concerns, risk management is necessarily a collective effort driven by the level of public scrutiny. Despite some important initiatives by a few companies and governments, our society remains immature in this regard. There is still a huge gap in terms of information asymmetry that is impeding more efficient risk management in the AI field.

“We are combining soft and hard laws to foster accountable mechanisms for AI risk management. From UNESCO’s guidelines to President Biden’s executive order and the Brazilian draft AI bill that’s under consideration, these efforts advocate, respectively, for international cooperation to publicly map AI incidents, mandatory algorithmic impact assessments, and the establishment of an open database cataloging high-risk AIs and how they are evaluated in terms of effective risk mitigation. The most pressing issue is whether we, as a society, are democratically expanding our risk management capabilities. Therefore, it is crucial to qualify what kind of risk management we desire as a society; otherwise, this technology may lead to technocracy.”