Oarabile Mudongo is a policy specialist at the African Observatory on Responsible AI (AORAI). He formerly worked as a policy researcher and regional lead at the Center for AI and Digital Policy and as a researcher with Research ICT Africa, focusing on digital governance, policy, and regulation. Mudongo served as a public interest technologist through a Technology Exchange fellowship sponsored by the Ford Foundation and Media Democracy Fund. His work sits at the intersection of information and communication technology policy, governance, and regulation. He is currently pursuing a master's degree at the University of the Witwatersrand.
Voting History
Statement: As the business community becomes more aware of AI’s risks, companies are making adequate investments in RAI.
Response: Neither agree nor disagree
“The business world is increasingly recognizing the risks tied to artificial intelligence, such as bias, job displacement, and privacy breaches. Yet companies seem to be falling short in investing adequately in responsible AI initiatives to address these concerns.

A recent World Economic Forum survey found that only 77% of companies see AI as a key strategic asset. Despite differing views on AI regulations, many businesses struggle with data-sharing rules between jurisdictions and the uncertainty they bring. This suggests widespread uncertainty about effective approaches to AI risks.

Several factors contribute to the lack of investment in RAI programs. AI’s novelty creates uncertainty about its benefits and drawbacks. AI development can also be costly, and some companies may underestimate the extent of AI risks.

It is crucial for companies to prioritize RAI investments. Doing so ensures the ethical and safe use of AI, preventing harm to society. With collective effort, AI can be harnessed effectively while minimizing its potential downsides.”
Statement: Most RAI programs are unprepared to address the risks of new generative AI tools.
Response: Agree
“It is possible that some RAI programs are unprepared to address the risks of new generative AI tools. The risks associated with generative AI tools may differ from those associated with other AI tools, such as prebuilt machine learning models or data analysis tools. For example, generative AI tools may produce biased or harmful content, violate copyright laws, or raise ethical concerns related to the creation of realistic but fake content. RAI programs that have not specifically addressed these risks may be unprepared to mitigate them.

However, it is also possible that some RAI programs have already taken the risks of generative AI tools into account and have established policies and procedures to address them. As with any AI tool, it is crucial to thoroughly assess and monitor the risks associated with generative AI tools and to adapt RAI programs accordingly. As new AI technologies continue to emerge and evolve, it is important for RAI programs to stay up to date and proactively manage risks to ensure responsible AI use.”
Statement: RAI programs effectively address the risks of third-party AI tools.
Response: Agree
“RAI programs can be effective in addressing the risks of using or integrating third-party AI tools. These programs can provide a framework for identifying, assessing, and mitigating risks, as well as establishing clear guidelines for the responsible use of AI. Third-party AI tools refer to AI solutions or services that are created by a company or developer outside of the organization that is using or integrating them.
To effectively address the risks associated with third-party AI tools, RAI programs should include a comprehensive set of policies and procedures, such as guidelines for ethical AI development, risk assessment frameworks, and monitoring and auditing protocols. By carefully vetting third-party providers and ensuring that they adhere to ethical standards and best practices in AI development, organizations can reduce the risk of using unreliable or unethical AI solutions. A proactive approach to managing the risks of third-party AI tools could include ongoing monitoring of AI solutions and regular updates to RAI programs as new risks emerge.”
Statement: Executives usually think of RAI as a technology issue.
Response: Neither agree nor disagree
“C-suite attitudes about AI and its application are changing. Investing in AI development has long been considered a cost of entry for doing business in the digital age rather than a long-term profit-making investment. This view, however, has evolved over time, as AI technology is increasingly regarded as a business driver crucial to an organization’s capacity to perform key responsibilities.
Responsible AI is becoming increasingly important in commercial decision-making, but arguably many executives still struggle to quantify the ROI needed to justify investments in RAI. Realizing the full potential of RAI demands a transformation in organizational thinking. By viewing responsible AI as a business driver rather than as overhead, companies can treat it as a valuable asset that helps executives make more informed decisions and delivers new value to the bottom line through responsible AI ethics and principles.”
Statement: Mature RAI programs minimize AI system failures.
Response: Agree
“Despite the increasing adoption of AI by businesses and consumers, many companies are still in the early phases of their responsible AI programs. RAI is premised on the ethical, secure, open, and accountable use of AI technology, in accordance with fair human, social, and ecosystem values. One of the major challenges for many companies today is the failure to achieve true AI development at scale, owing to a lack of responsible AI systems and of enterprisewide adoption of RAI policy frameworks.
Lately, companies have become more involved in shaping AI-related legislation and engaging with regulators at the country level. Regulators are taking notice as well, pushing for AI regulatory frameworks that include enhanced data safeguards, governance, and accountability mechanisms. Mature RAI programs and regulations based on these standards not only ensure the safe, resilient, and ethical use of AI but also help minimize AI system failures.”
Statement: RAI constrains AI-related innovation.
Response: Neither agree nor disagree
“In practice, the two are difficult to disentangle. Perhaps we should begin by examining the sociohistorical context of technological innovation in relation to the Industrial Revolution. The Fourth Industrial Revolution suggests that AI developments will cause far-reaching disruptions in society, with uncertain socioeconomic consequences.
With this context in mind, responsible AI in practice is critical yet difficult to factor into strategic priorities; because RAI is still in its embryonic phase, many organizations lack a systematic internal plan for implementing its principles. Companies often overlook the complexity of the human and process adjustments required alongside the technology. They must address these fundamental issues in order to balance future AI innovation with responsible AI guidelines, which includes translating ethical principles and analytical concepts such as algorithmic fairness into realistic, quantifiable metrics and baselines. The value of RAI is undeniable; nevertheless, it may constrain AI-related innovation, affecting revenue models, and it might also expose companies to financial, legal, and reputational damage.”
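To make the idea of translating algorithmic fairness into a quantifiable metric concrete, here is a minimal Python sketch of one such baseline metric, the demographic parity gap. It is an illustration only: the function name, sample predictions, group labels, and the 0.10 flag threshold are hypothetical and are not drawn from Mudongo's comments.

```python
# A minimal sketch of turning one fairness principle (demographic parity)
# into a quantifiable metric. All data below is hypothetical.

def demographic_parity_gap(predictions, groups):
    """Gap between the highest and lowest positive-prediction rates
    across groups; 0.0 means all groups are treated identically."""
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical binary predictions (1 = favorable outcome) for two groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # e.g., flag for review if gap > 0.10
```

In practice, an organization would compute metrics like this across many protected attributes and track them against agreed baselines over time, which is what makes fairness auditable rather than aspirational.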
Statement: Organizations should tie their responsible AI efforts to their corporate social responsibility efforts.
Response: Strongly agree
“This is a critical question in light of the digital transformation that is reshaping businesses today. With ever-increasing digitalization and datafication, artificial intelligence technologies are data dependent, and issues of user privacy and data tracking must be considered. Self-regulation by industry is unlikely to adequately protect the public interest when it comes to advanced general-purpose technologies such as artificial intelligence, particularly in the business sector.
Corporations seeking to develop fair and accurate AI systems must prioritize privacy in their investment plans, a necessary step toward building more trustworthy AI. Similarly, large technology enterprises acquiring AI capabilities exert enormous influence. By establishing corporate social responsibility efforts, these businesses can demonstrate that effectively developing and embedding ethical AI is not just a bonus, and that failing to do so may be a significant liability to business operations.”
Statement: Responsible AI should be a part of the top management agenda.
Response: Strongly agree
“Scaling AI deployment will remain challenging until businesses grasp the critical nature of undergoing a fundamental transformation to become responsible AI-driven organizations. The United Nations Guiding Principles on Business and Human Rights do not address this issue, even though they state that businesses should incorporate the findings of their human rights due diligence processes into relevant policies and procedures, with adequate resources and authority provided. Unfortunately, the principles do not include an ethical AI agenda, despite being intended to serve as a set of guidelines “to prevent, address, and remedy human rights abuses committed in business operations.”
Certainly, it stands to reason that businesses should embrace this shift, as their business relationships with clients will be defined by their trust in AI systems. Lofred Madzou and Danny Lange’s 2021 article “A 5-Step Guide to Scale Responsible AI” speaks to the chronic problem of AI’s distinct regulatory issues in the corporate sector, illuminating the ongoing tensions associated with the AI humanist approach, which is founded on values and human rights. To address these challenges further, we must review the applicability of present policies, legal systems, commercial due diligence practices, and rights protection measures. While the promise of AI and the ethical questions that surround it are compelling, it is clear that additional effort is required to address these problems in order to shape the world we want and reap the benefits of AI technology in our society.”