Bruno Bioni is the founding director of Data Privacy Brasil, a research and education organization focused on privacy and data protection issues. He is also co-lead chair of the Think Tank 20, a G20 task force focused on inclusive digital transformation. He is a full member of the Brazilian National Data Protection Council and of the Advisory Group on AI, Data Protection, and Misinformation of the Brazilian Superior Electoral Court. In 2022-23, he served on a federal senate commission of jurists responsible for proposing AI regulations for Brazil. He holds a doctorate in commercial law and a master’s degree in civil law from the University of São Paulo Law School.
Voting History
| Statement | Response |
|---|---|
| Companies should be required to make disclosures about the use of AI in their products and offerings to customers. **Strongly agree** | “As a rule, transparency must be seen as a key and transversal obligation, regardless of the level of AI risk. By combining already existing laws (e.g., data protection, consumer protection, and labor laws) with what has been proposed in AI hard and soft laws, transparency is a basic component for triggering effective governance and thereby avoiding opaqueness in the development and adoption of such technology. Even so, the level of information and how it is communicated must always be contextual, considering the relationship at stake. AI could serve for automated stock management or for assessing eligibility for welfare programs, but the risks involved and those who will be impacted are very different. Consequently, different transparency strategies must be taken into account. In this sense, there is already a substantive level of enforcement in which regulators are demanding effective transparency. Such a qualifier is key; otherwise, there is no accountability, since there is no oversight.” |
| Organizations are sufficiently expanding risk management capabilities to address AI-related risks. **Disagree** | “From environmental to food safety concerns, risk management is necessarily a collective effort driven by the level of public scrutiny. Despite some important initiatives by a few companies and governments, our society remains immature in this regard. There is still a huge gap in terms of information asymmetry that is impeding more efficient risk management in the AI field.<br><br>We are combining soft and hard laws to foster accountability mechanisms for AI risk management. From UNESCO’s guidelines to President Biden’s executive order and the Brazilian draft AI bill under consideration, these efforts advocate, respectively, for international cooperation to publicly map AI incidents, mandatory algorithmic impact assessments, and an open database cataloging high-risk AIs and how they are evaluated in terms of effective risk mitigation. The most pressing issue is whether we, as a society, are democratically expanding our risk management capabilities. It is therefore crucial to qualify what kind of risk management we desire as a society; otherwise, this technology may lead to technocracy.” |