Riyanka Roy Choudhury is an experienced legal tech strategist and legal designer who advises startups globally on implementing legal tech in their organizations. She is a CodeX fellow at Stanford Law School’s Computational Law Center, where she is developing legal automation applications to simplify law while also speaking, writing, and building a legal tech community. She also coleads the RegTrax and Machine-Generated Legal Documents projects at Stanford. Choudhury received an award as part of Facebook’s (now Meta’s) Ethics in AI Research Initiative for the Asia Pacific in 2020 and was recognized as one of the 100 Brilliant Women in AI Ethics in 2022.
|Most RAI programs are unprepared to address the risks of new generative AI tools. Strongly agree||
“New generative AI tools, like ChatGPT, Bing Chat, Bard, and GPT-4, create original images, sounds, and text using machine learning algorithms trained on large amounts of data. Since existing RAI frameworks were not written to deal with the sudden, unprecedented range of risks that generative AI tools are introducing into society, the companies developing these tools need to take responsibility by adopting new AI ethics principles. A new AI governance system would help manage the growth of these tools and mitigate potential risks right at the beginning of this new era of generative AI.
AI is the most powerful system of this generation for uncovering complex data and processes across industrial sectors, and this will bring about a knowledge revolution that simplifies how existing operations function. Responsible AI will, by design, help companies implement AI policies at the core, while they’re building the technologies, which in turn will help control the spread of stereotypes. RAI will also help them make a deliberate effort to correctly anticipate the potential repercussions of technological innovations. Building AI ethically and responsibly should be the priority of all companies developing, adapting, and using these new generative AI tools.”|
|RAI programs effectively address the risks of third-party AI tools. Strongly agree||
“Third-party AI tools are frameworks offered by companies or open-source communities that abstract away the core inner workings of an AI model and provide APIs that developers can use to build AI applications aimed at end users.
A core principle and focus of RAI programs is mitigating the risks of integrating third-party AI tools. Since the majority of end users rely on products, services, or applications that have AI at their core, it is the primary responsibility of RAI programs to ensure that this AI is deployed responsibly. Building neural networks from scratch is a lengthy process, so integrating third-party AI tools supports both development and the optimization of networks. RAI principles are built systematically to remove AI bias, to make products more inclusive, ethical, and accountable, and to maintain trust so that the final AI product works well for all end users.”|
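The abstraction described above can be sketched in miniature. This is a hypothetical illustration, not any real provider’s API: the `SentimentClient` class and its `predict` method are invented names standing in for the kind of interface a third-party AI tool exposes while hiding the model’s inner workings from the developer.

```python
class SentimentClient:
    """Toy stand-in for a third-party AI API client (hypothetical).

    A real provider's model internals (weights, architecture,
    training data) stay hidden; developers see only this interface.
    """

    _POSITIVE = {"good", "great", "excellent", "love"}
    _NEGATIVE = {"bad", "terrible", "awful", "hate"}

    def predict(self, text: str) -> str:
        # A real service would run a trained model behind the API;
        # this keyword lookup only mimics the request/response shape.
        words = set(text.lower().split())
        score = len(words & self._POSITIVE) - len(words & self._NEGATIVE)
        if score > 0:
            return "positive"
        if score < 0:
            return "negative"
        return "neutral"

client = SentimentClient()
result = client.predict("This product is great")
```

From an RAI perspective, the point of the sketch is that the calling developer never sees how the prediction is made, which is precisely why responsibility for auditing bias and failure modes cannot rest with the end user alone.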
|Executives usually think of RAI as a technology issue. Agree||“Responsible AI is an ongoing process, not a one-time technical issue. Executives have an obligation to take ownership of creating trustworthy AI for their companies. When RAI principles are implemented by top-level management, executives can set the right tone from the beginning, since engineers will then have to ethically embed RAI principles as they build and design AI technologies. Many company executives do consider RAI a strategic and key priority, but it has to be owned by all executives across industries. Leaders should be given a solid working knowledge of AI’s development and use so that they can prevent potential ethical issues. Executives should understand that, along with big data, knowledge of RAI principles can help brands avoid reputational harm and also prevent damage to society, especially when AI will be making ethical judgments in high-stakes applications. It’s important to go beyond written RAI principles and implement them in the real world. Executives need to recognize that, moving forward, implementing RAI principles will create a competitive advantage for companies with strong AI ethics.”|
|Mature RAI programs minimize AI system failures. Agree||“Companies are investing in and relying on AI to increase efficiency, so they need to understand that AI failures can affect not just individuals but millions of people. RAI programs have advanced, reliable policies for the efficient use of AI systems, which gives users trust and confidence in those systems. However, it is also important to understand that AI algorithms, and the programmers who write them, have a bigger role to play in mitigating the risks of system failures. Often, AI-related problems are unpredictable, since the neural networks that are crucial to higher-level pattern recognition and decision-making support can also break down. When accounting for such system failures, mature RAI programs and AI explainability provide a path to detecting and preventing these issues in both current and future systems.”|
|RAI constrains AI-related innovation. Strongly disagree||“Responsible AI creates guidelines to mitigate AI risks. It drives innovations toward a paradigm shift in AI algorithms, reducing the replication of flawed human decision-making at the core of machine learning. It results in growth and brings social responsibility into organizations’ AI-related innovations. In my experience, conventional AI methods overlook the need for complete information when dealing with complexity, something responsible AI accounts for in its guidelines. Because it plays a constructive role in helping AI reach its full potential, responsible AI frameworks and systematic approaches will support machine learning in future innovations. Backed by empirical studies, research organizations have been able to identify and describe key areas of responsible AI as a set of fundamental principles that are important when developing AI innovations. I strongly believe that when adopted universally, responsible AI can open up new opportunities leading to fairer and more sustainable AI.”|
|Responsible AI should be a part of the top management agenda. Strongly agree||“Responsible AI is an economic and social imperative. It’s vital that AI be able to explain the decisions it makes, so it is important for companies to include responsible AI on the top management agenda. Having it as a regular agenda item for leadership meetings will help ensure fair AI throughout the supply chain. This will make it easier for businesses creating or working with AI technologies to tackle the unique governance challenges that AI creates: privacy issues, for example, and managing the data and complexity that ML and computational systems can introduce, such as bias and discrimination.”|