Philip Dawson is a lawyer and public policy adviser specializing in the governance of digital technologies and AI. Currently serving as head of AI policy at Armilla AI, he has held senior roles at a United Nations agency, in government, and at an AI software firm and has worked with international organizations, academic research institutes, nonprofits, and private companies as an independent adviser on a range of AI policy issues. Dawson is a leader in national data and AI standards efforts in Canada and a member of the U.N. Global Pulse Expert Group on the Governance of Data and AI. He has degrees from the London School of Economics and McGill.
Most RAI programs are unprepared to address the risks of new generative AI tools: Agree
“RAI programs that require the implementation of AI risk management frameworks are well positioned to help companies manage the risks associated with new generative AI tools from a process standpoint. However, two blind spots exist.
First, most enterprises have yet to adapt their third-party risk management (TPRM) programs to the AI context and do not subject AI vendors or their products to risk assessments. Accordingly, enterprises are largely blind to the risks they are taking on when procuring third-party AI applications. In the context of generative AI, these risks include bias amplification, misogynistic language, vulnerability to adversarial attacks, and poor overall performance, all of which carry significant risk of legal and reputational damages.
Second, companies often lack the technical expertise or resources to assess how well applications built on large language models are aligned and customized to the specific use case, context, and stakeholders affected. Given the range of potential inputs, this type of assessment requires new tools and methods that can help evaluate the reliability of a generative AI application across a large number of real-world scenarios.
Adapting TPRM to require independent, tech-based risk assessments of generative AI products can help close critical gaps in RAI programs.”
RAI programs effectively address the risks of third-party AI tools: Agree

“From what I have seen, RAI programs today do help enterprises assess and manage the risks associated with third-party AI tools, but often only after those tools have been integrated and deployed — for instance, through a combination of internal policies, procedures, and technical controls. An important gap remains at the procurement phase, however: whether due to resource and expertise constraints or the absence of enabling RAI policies, many companies simply do not conduct tailored assessments of the third-party AI solutions they are buying, including the models themselves. In general, this means that companies today do not adequately measure the quality and reliability of externally sourced AI products, or the risks and liabilities they take on when procuring third-party AI. As AI innovation accelerates — particularly with the adoption of complex models like large language models and generative AI, which remain challenging to evaluate from a technical standpoint — the need for RAI programs to develop robust third-party procurement policies that include AI model assessments is more critical than ever.”
Mature RAI programs minimize AI system failures: Neither agree nor disagree

“RAI programs have the potential to minimize AI system failures. While organizations have begun investing in RAI policies, procedures, and training, a large gap persists around the testing frameworks and tools needed to provide deeper insights into model quality, performance, and risk. A superficial approach to AI testing has meant that a large proportion of today’s AI projects either fail in development or risk contributing to real-world harms after they are released. To reduce failures, mature RAI programs must take a comprehensive approach to AI testing and validation.”
RAI constrains AI-related innovation: Strongly disagree

“Probably the greatest evidence we have that RAI does not stifle but rather unlocks innovation is the emergence of a large and rapidly expanding market of RAI SaaS providers developing everything from AI-enhanced de-identification tools to data quality solutions, automated quality assurance platforms, model testing and validation toolkits, and continuous monitoring tools — all of which will be leveraged to help operationalize compliance and emerging certification programs. RAI is accelerating time to market for organizations seeking to realize the benefits of AI for their businesses, and, as a result, it has given rise to an entirely new industry to support this demand.”
Organizations should tie their responsible AI efforts to their corporate social responsibility efforts: Neither agree nor disagree
“Organizations may wish to tie their responsible AI efforts to their corporate social responsibility efforts, but this is a secondary priority. Establishing RAI policies and practices should be understood first and foremost as a proactive response to emerging legal, governance, and technical standards and an authentic expression of corporate values. Without this critical step, embedding RAI principles or themes into CSR programming will lack legitimacy and may ultimately undermine an organization’s credibility.
Organizations that take a holistic approach to generalizing RAI across their operations, however, including adopting related CSR programs or meeting recognized ESG standards, have a better chance of getting off on the right foot. In this context, applying RAI to traditional CSR efforts can help translate commitments to shared principles of equity, fairness, and inclusion into broader-based social and environmental impact — which investors, boards, employees, business partners, clients, and consumers alike are scrutinizing ever more closely.”
Responsible AI should be a part of the top management agenda: Strongly agree
“Achieving responsible AI in practice requires translating emerging legal obligations and ethical principles into corporate policies and guidelines that engage a cross-functional team of legal, ethics, policy, risk and compliance, data science, and research professionals. In many cases, the effort will involve significant investment in new resources, such as personnel with sociotechnical expertise, the procurement of new technical tools — for instance, to monitor the quality and performance of AI systems — or industry certifications. In short, implementing responsible AI demands significant organizational change and strategic direction.
As such, top management teams seeking to realize the long-term opportunity of artificial intelligence for their organizations will benefit from a holistic corporate strategy under their direct and regular supervision. Failing to set such a strategy will result in a patchwork of initiatives and expenditures, longer time to production, preventable harms, reputational damage, and, ultimately, opportunity costs in an increasingly competitive marketplace that views responsible AI as both a critical enabler and an expression of corporate values.”