Responsible AI / Panelist

Belona Sonna

Bel’s AI Initiative; AfroLeadership

Australia

Belona Sonna is a Ph.D. candidate in the Humanising Machine Intelligence program at the Australian National University. She earned a bachelor's degree in software engineering and a master's degree in computer science from the University of Ngaoundere, Cameroon, before joining the African Master's in Machine Intelligence scholarship program at the African Institute for Mathematical Sciences in Rwanda. Her current research focuses on explainability and privacy preservation in AI-based solutions for health care applications. Sonna was named one of 2022's 100 Brilliant Women in AI Ethics.

Voting History

Statement: Companies should be required to make disclosures about the use of AI in their products and offerings to customers.
Response: Strongly agree

“Customers are the end users of AI products, and their input is needed in one way or another to make AI products work effectively in the real world. What's more, the trust they place in the product and the feedback they provide are of paramount importance to the company for commercial purposes.

I strongly agree that customers need to be aware of the use of AI in the products they use. More to the point, we are in the era of promoting trustworthy AI. This means that everyone involved in product development, including end users, must at least be aware of what they are using and how their contribution could improve the product. Again, I think that in this situation, one way for companies to respect the ethical principles of AI is to be fully transparent with customers.”

Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.
Response: Neither agree nor disagree

“The discussion of AI-related risks in organizations is a global trend, but the impact of this conversation varies with the organization's socioeconomic environment, including the CEO's AI literacy, how much AI contributes to the company's income, the involvement of the organization's host government in AI regulation and data protection, and the AI readiness of the country in which the organization is located. Organizations expected to comply with established policies and/or make a profit with AI develop their risk management capabilities to protect their customers. Others are still in the process of making AI work for them. This last point opens the door to two possibilities: It may be an opportunity for them to adopt AI risk mitigation at an early stage, or it may be another reason to fear the use of AI, as the damage that could result is enormous.

Plus, AI systems are among the fastest-growing applications in the world, so the associated risks are equally dynamic. Even organizations keen to maintain strong AI risk-handling capabilities can easily miss something in the process. This means that organizations should always be enhancing their capabilities to handle AI-related risks.”

Statement: As the business community becomes more aware of AI's risks, companies are making adequate investments in RAI.
Response: Agree

“Many factors can help us recognize the efforts of companies investing in responsible AI programs. Recently, most of them have been striving to follow RAI principles as part of their AI development processes. However, what most reinforces my sense of their real investment is seeing business leaders seeking knowledge for RAI deployment (for example, the TRAIL [Trustworthy and Responsible AI Learning] certificate program for industry, led by Mila). As we have already pointed out in this series of articles, RAI will be implemented in companies when top management becomes involved in the process.”

Statement: The management of RAI should be centralized in a specific function (versus decentralized across multiple functions and business units).
Response: Strongly disagree

“Although centralized management of responsible AI can guarantee that all projects follow the same control circuit, this can quickly become a disadvantage when the project under examination is complex. Moreover, centralized management runs counter to the vision of responsible AI development, which should preferably involve all players in the development chain. This is why, in my opinion, decentralized management makes it possible to distribute roles to each unit according to its expertise, ensuring not only positive interaction but also the involvement of all.”

Statement: Most RAI programs are unprepared to address the risks of new generative AI tools.
Response: Strongly agree

“Generative AI tools are distinctive in being able to produce content without direct human intervention, so there is a huge debate about who is responsible for the results, which in many cases are unexpected. In addition, generative AI tools can sometimes be misleading, as they do not have a humanlike understanding of the subject matter. Another concern is intellectual property rights and privacy. Before adopting generative AI tool services, it is important to mitigate these risks.

Overall, I believe that most current RAI programs are not prepared to deal with the risks associated with new generative AI tools.”

Statement: RAI programs effectively address the risks of third-party AI tools.
Response: Disagree

“RAI programs alone cannot effectively address the risks associated with the use or integration of third-party AI tools. This is because RAI programs are built within a framework in which a number of principles are required to ensure that the results have only a positive impact on end users. The same assurance might not exist for third-party AI tools, which can cause many problems, including a lack of transparency and security and privacy issues. Using them without any prior verification phase could then corrupt the primary product.

With this context in mind, it is necessary to develop, alongside RAI programs, a dedicated risk management system for third-party tools that ensures the two work together in line with RAI principles.”

Statement: Executives usually think of RAI as a technology issue.
Response: Neither agree nor disagree

“Executives' views on RAI are strongly related to their backgrounds. While those with a technical background think that RAI is about building an efficient and robust model, those with a social background see it more as a way to build a model consistent with societal values. The good news is that the requirements of RAI are both technical and social. Hence, the real question for effective RAI in organizations is how to establish an adequate management program that addresses both the technical and social aspects. A suitable answer requires an organization's executives to be open-minded about the following: the needs of society, the AI literacy of human users, the choice of technology tools, and respect for AI ethics principles during the design of AI solutions.

Overall, RAI should not be considered only a technology issue. Instead, executives, regardless of their backgrounds, should treat it as a risk management plan established by taking into account the technology, the reality of society, and the end users exposed to the final AI solution.”

Statement: Mature RAI programs minimize AI system failures.
Response: Strongly agree

“RAI enables the design of AI solutions that address both the technical and societal failures of AI systems. Although some principles of AI ethics are difficult to implement at this time, the logic behind the design of RAI programs is to minimize the risk of errors in the resulting solutions. Therefore, I strongly believe that mature RAI programs minimize the failures of AI systems, as they are meant to build solutions that are robust while also preserving human dignity.”

Statement: RAI constrains AI-related innovation.
Response: Strongly disagree

“If we consider an innovation to be a technical or scientific change in a process aimed at improving the use of a service, then AI is undoubtedly the innovative means of our era and of the future. Every day, new models are produced with ever greater predictive capabilities. However, most of them are complex and difficult to explain to the average user, which makes their adoption in society difficult. Of course, it is legitimate for people to want to understand why and how decisions affecting their daily lives are made. So is it necessary to create increasingly powerful models if they are never ultimately put to use in our societies?

RAI is the way to couple the computational power of AI with the social dimension necessary to build and keep a trustworthy relationship with end users. Thus, rather than being a constraint, RAI aims to move AI-related innovation from the purely technical to the social dimension needed to improve people's lives through bias-free solutions. Of course, observing RAI rules may reduce the accuracy of models slightly, but then again, what really matters: demonstrating the computational power of models, or putting that computational power to work for the benefit of humans?”

Statement: Organizations should tie their responsible AI efforts to their corporate social responsibility efforts.
Response: Strongly agree

“Corporate social responsibility comprises four responsibilities: environmental, ethical, philanthropic, and economic. Simply put, social responsibility efforts are about companies maximizing profits while respecting society. Therefore, I am all for organizations combining their responsible AI efforts with their social responsibility efforts, as both have the same goal. When building an AI model, the goal is to use the power of AI algorithms to make profits; on top of that, responsible AI aims to produce solutions that are ethical, environmentally friendly, trustworthy, and explainable.

Beyond designing AI-based solutions, organizations involved in responsible AI establish a close relationship between their solutions and society by incorporating values that place humans at the center of development. In such a context, people are more likely to use, trust, and recommend the products: The organization is accountable and can win the lion's share of the market. Furthermore, through this relationship, the organization is more aware of society's needs and concerns and is therefore able to produce solutions that matter to it. This is a key to business success and innovation.”

Statement: Responsible AI should be a part of the top management agenda.
Response: Strongly agree

“Like a communication strategy or a business strategy, responsible AI is a decision-making strategy that every company's management team should develop and implement within its business. To this end, it is essential that this strategy be present in the daily life of the company through its development activities. Several reasons justify this view: If subscribing to responsible AI is a commitment that all of a company's AI-based solutions must respect certain standards, then systematic monitoring must take place during the development process to ensure that those standards are met. In addition, the company must keep itself up to date on the evolution of these standards, which are very often set by external agencies, in order to avoid external censure or harm.

Furthermore, the decision to subscribe to responsible AI must be carried out by the management team for its implementation to be effective. As the decision makers for the company's general policy, it is up to them to make clear what should be done, reminding those who carry out the work of the company's expectations and goals.”