Responsible AI / Panelist

Oarabile Mudongo

Center for AI and Digital Policy

South Africa

Oarabile Mudongo is an AI policy researcher and a regional lead at the Center for AI and Digital Policy. He formerly worked as a policy researcher with Research ICT Africa, focusing on digital governance, policy, and regulation. Mudongo served as a public interest technologist through a Technology Exchange fellowship sponsored by the Ford Foundation and Media Democracy Fund. His work sits at the intersection of information and communication technology policy, governance, and regulation. He is currently pursuing a master's degree at the University of the Witwatersrand.

Voting History

Statement Response
Executives usually think of RAI as a technology issue. Neither agree nor disagree “C-suite attitudes about AI and its application are changing. Investing in AI development has long been considered a cost of entry to doing business in the digital age rather than a long-term profit-making investment. This view, however, has evolved over time, as AI technology is increasingly regarded as a business driver crucial to an organization’s capacity to perform key responsibilities.

Responsible AI is becoming increasingly important in commercial decision-making, but arguably many executives still struggle to quantify the ROI needed to justify investments in RAI. Realizing the full potential of RAI demands a transformation in organizational thinking. By viewing responsible AI as a business driver rather than as overhead, companies may come to see it as a valuable asset that helps business executives make more informed decisions and carries the value of responsible AI ethics and principles through to the bottom line.”
Mature RAI programs minimize AI system failures. Agree “Despite the increasing adoption of AI by businesses and consumers, many companies are still in the early phases of their responsible AI programs. RAI is intended for, and developed on the premise of, ethical, secure, open, and accountable use of AI technology in accordance with fair human, social, and ecosystem values. One of the major challenges for many companies today is the failure to achieve true AI development at scale due to a lack of responsible AI systems and of enterprisewide adoption of RAI policy frameworks.

Lately, companies have become more involved in shaping AI-related legislation and engaging with regulators at the country level. Regulators are taking notice as well, advocating AI regulatory frameworks that include enhanced data safeguards, governance, and accountability mechanisms. Mature RAI programs and regulations based on these standards not only assure the safe, resilient, and ethical use of AI but also help minimize AI system failures.”
RAI constrains AI-related innovation. Neither agree nor disagree “In practice, the two are difficult to disentangle. Perhaps we should begin by examining the sociohistorical context of technological innovation in relation to the Industrial Revolution. The Fourth Industrial Revolution suggests that AI developments will cause far-reaching disruptions in society, with uncertain socioeconomic consequences.

With this context in mind, responsible AI in practice is critical yet difficult to factor into strategic priorities; because RAI is still in its embryonic phase, many organizations lack a systematic internal plan for implementing its principles. This demonstrates how often companies overlook the complexity of the human and process adjustments required alongside the technology.

Companies must solve these fundamental issues in order to balance future AI innovation with responsible AI guidelines, which includes translating ethical principles and analytical concepts such as algorithmic fairness into realistic, quantifiable metrics and baselines.

The value of RAI is undeniable; nevertheless, it may still constrain AI-related innovation, affecting revenue models. This might also expose companies to financial, legal, and reputational damage.”
Organizations should tie their responsible AI efforts to their corporate social responsibility efforts. Strongly agree “This is a critical question in light of the digital transformation that is reshaping businesses today. With ever-increasing digitalization and datafication, artificial intelligence technologies are data dependent, and issues of user privacy and data tracking must be considered. Industry self-regulation is unlikely to adequately protect the public interest when it comes to advanced general-purpose technologies such as artificial intelligence, particularly in the business sector.

Corporations seeking to develop fair and accurate AI systems must prioritize privacy in their investment plans, a necessary step toward building more trustworthy AI. Similarly, large technology enterprises interested in acquiring AI exert enormous influence. By establishing corporate social responsibility efforts, these businesses can demonstrate that effectively developing and embedding ethical AI is not just a bonus, and that failing to do so may be a significant liability to business operations.”
Responsible AI should be a part of the top management agenda. Strongly agree “Scaling AI deployment will remain challenging until businesses grasp the critical nature of undergoing a fundamental transformation to become responsible AI-driven organizations. The United Nations Guiding Principles on Business and Human Rights do not address this issue, even though they state that businesses should incorporate the findings of their human rights due diligence processes into relevant policies and procedures, with adequate resources and authority provided. Unfortunately, the principles do not include an ethical AI agenda, despite being intended to serve as a set of guidelines ‘to prevent, address, and remedy human rights abuses committed in business operations.’

Certainly, it stands to reason that businesses should embrace this shift, as their relationships with clients will be defined by trust in AI systems. Lofred Madzou and Danny Lange’s 2021 article “A 5-Step Guide to Scale Responsible AI” addresses the chronic problem of AI’s distinct regulatory issues in the corporate sector, illuminating the ongoing tensions associated with the humanist approach to AI, which is founded on values and human rights. To address these challenges further, we must review the applicability of present policies, legal systems, commercial due diligence practices, and rights protection measures. While the promise of AI and the ethical questions that surround it are compelling, it is clear that additional effort is required to address these problems in order to build the world we need and reap the benefits of AI technology in our society.”