Responsible AI / Panelist

Oarabile Mudongo

Center for AI and Digital Policy

South Africa

Oarabile Mudongo is an AI policy researcher and a regional lead at the Center for AI and Digital Policy. He formerly worked as a policy researcher with Research ICT Africa, focusing on digital governance, policy, and regulation. Mudongo served as a public interest technologist through a Technology Exchange fellowship sponsored by the Ford Foundation and the Media Democracy Fund. His work sits at the intersection of information and communication technology policy, governance, and regulation. He is currently pursuing a master's degree at the University of the Witwatersrand.

Voting History

Statement: Organizations should tie their responsible AI efforts to their corporate social responsibility efforts.
Response: Strongly agree

“This is a critical question in light of the digital transformation that is reshaping businesses today. With ever-increasing digitalization and datafication, artificial intelligence technologies are data dependent, and issues of user privacy and data tracking must be considered. Self-regulation by industry is unlikely to adequately protect the public interest when it comes to advanced general-purpose technologies such as artificial intelligence, particularly in the business sector.

Corporations seeking to develop fair and accurate AI systems must prioritize privacy in their investment plans, a necessary step toward building more trustworthy AI. Similarly, large technology enterprises seeking to acquire AI exert enormous influence. By establishing corporate social responsibility efforts, these businesses can demonstrate that effectively developing and embedding ethical AI is not just a bonus, and that failing to do so may be a significant liability to business operations.”
Statement: Responsible AI should be a part of the top management agenda.
Response: Strongly agree

“Scaling AI deployment will remain challenging until businesses grasp the critical nature of undergoing a fundamental transformation to become responsible AI-driven organizations. The United Nations Guiding Principles on Business and Human Rights do not address this issue, even though the principles state that businesses should incorporate the findings of their human rights due diligence processes into relevant policies and procedures, with adequate resources and authority provided. Unfortunately, these principles omit an ethical AI agenda, despite being intended to serve as a set of guidelines ‘to prevent, address, and remedy human rights abuses committed in business operations.’

Certainly, it stands to reason that businesses should embrace this shift, as their relationships with clients will be defined by trust in AI systems. Lofred Madzou and Danny Lange’s 2021 article “A 5-Step Guide to Scale Responsible AI” addresses the chronic problem of AI’s distinct regulatory issues in the corporate sector, illuminating the ongoing tensions associated with the humanist approach to AI, which is founded on values and human rights. To address these challenges further, we must review the applicability of present policies, legal systems, commercial due diligence practices, and rights protection measures. While the promise of AI and the ethical questions surrounding it are compelling, it is clear that additional effort is required to address these problems, bring about the world we need, and reap the benefits of AI technology in our society.”