Ashley Casovan is an expert on the intersection of responsible AI governance, international quality standards, and safe technology and data use. She is currently the executive director of the Responsible AI Institute, a nonprofit dedicated to mitigating harm and unintended consequences of AI systems. Previously, Casovan was director of data and digital for the Canadian government. She chairs the World Economic Forum’s Responsible AI Certification Working Group and is a responsible AI adviser to the U.S. Department of Defense in addition to her advisory work with other global organizations.
Most RAI programs are unprepared to address the risks of new generative AI tools. Strongly agree

“Generative AI has helped the world see and experience the potential and power of AI. It has captivated imaginations and made AI more real than ever before. While the profound impact of all AI technologies is yet to be seen, governance and oversight of their operation remain the utmost priority. I strongly agree with this statement, not just because I believe that generative AI tools will be used more ubiquitously than other types of AI systems, but because I don’t believe that the majority of organizations currently have strong RAI programs to deal with any type of AI technology. The application of AI will remain context specific. Therefore, RAI programs can’t be one-size-fits-all, and organizations seeking to leverage generative AI will have to understand the implications of these systems before adopting them. However, the same basic rules and principles apply: Understand the impact or harm of the systems you are seeking to deploy; identify appropriate mitigation measures, such as the implementation of standards; and set up governance and oversight to continuously monitor these systems throughout their life cycles.”
RAI programs effectively address the risks of third-party AI tools. Disagree
“I would say that due to a lack of maturity in what RAI programming actually means across organizations and industries, there is not yet an industry best practice or standard for integrating third-party tools.
In organizations where there are greater dependencies on purchasing external AI systems to either augment or replace their own development, there needs to be, first and foremost, an understanding of the risk and liability between the AI developer and deployer. We strongly recommend addressing these risks through internal RAI policies and strong procurement practices that incorporate the challenges of acquiring AI tools, systems, and solutions.
Key things to think about are: What types of evaluation need to be done for AI vendors? What documentation, including contractual agreements, needs to be in place between the developer and deployer? What type of documentation, including ongoing monitoring, will be necessary throughout the life cycle of the AI? We strongly encourage answering these questions as proactively as possible.”
Executives usually think of RAI as a technology issue. Neither agree nor disagree
“I don’t think that we can put executives into one category. I would say that this is sometimes a challenge; however, it really depends on the executive, their role, the culture in their organization, their experience with the oversight of other types of technologies, and competing priorities.
The biggest challenge is when a responsible AI executive doesn’t have an understanding of both how AI technologies work and the social and human rights implications of building and deploying these systems. But even if executives don’t have firsthand experience, are they consulted by those who do?
I’ve experienced both ends of the spectrum. Some executives see RAI as just a technology issue that can be resolved with statistical tests or good-quality data. Similarly, I’ve seen executives who are responsible for leading RAI programs and don’t have oversight of the AI technologies their organization is deploying, leading to generic requirements that leave more questions for AI practitioners than the RAI team has answers for.
The ideal scenario is to have shared responsibility through a comprehensive governance board representing business, technology, policy, legal, and other stakeholders.”
Mature RAI programs minimize AI system failures. Strongly agree
“While I strongly agree in theory, it is important to note that in the absence of a globally adopted definition or standard framework for what responsible AI involves, the answer should really be ‘It depends.’ At the Responsible AI Institute, we have developed a comprehensive framework grounded in the OECD’s AI principles. Our framework evaluates AI systems against bias and fairness, explainability and interpretability, robustness, data and system operations, consumer protection, and accountability.
By working to translate these principles into practical use cases, we look at the responsible use of AI from a technology perspective and a social or ethical perspective, and we look at the context in which the system is being operated, including the inputs used to train and run the model as well as any configuration changes.
If a company’s RAI program takes all of these factors into account and works to support or augment existing technology and business governance processes, then yes, an RAI program would certainly help to identify various points of failure within an AI system, including the data, model, or context in which that system is being used.”
Organizations should tie their responsible AI efforts to their corporate social responsibility efforts. Agree

“In an ideal world, I’d strongly agree. However, if the option existed, I would have said, ‘It depends.’ While I believe that AI practices should align with an organization’s corporate social responsibility efforts, it really depends on how seriously CSR is taken within an organization and whether it is used to help set the organization’s priorities. I have seen many examples where CSR is not integrated strongly into corporate decision-making, so for that reason, I don’t think that responsible AI should be tied to CSR to the exclusion of other practices, like technology architecture and governance practices, regulatory compliance, ongoing monitoring, and communication and training. Responsible AI is not just about raising awareness of the potential harms that can come from an AI system, or a company putting out a statement on how important the issue is. It is a practice akin to other forms of technology and business governance efforts. Ideally, responsible AI is tied to an organization’s established environmental, social, and governance objectives, with regular and transparent reporting against these objectives to a strong CSR function.”
Responsible AI should be a part of the top management agenda. Strongly agree

“With the increased use of AI throughout all industries, it’s important that companies understand the potential impacts, both positive and adverse, related to the design and use of AI. Having a responsible AI governance program in place that includes corporately adopted principles, policies, and guidelines will ensure that the company, its leadership, and employees are protected from unintended risks and corporate exposure. Additionally, creating training opportunities for the company’s leadership team will help to raise necessary awareness of the potential opportunities and challenges, ensuring that AI is used as a force for good within the organization.”