Responsible AI / Panelist

Ashley Casovan

Responsible AI Institute

United States

Ashley Casovan is an expert on the intersection of responsible AI governance, international quality standards, and safe technology and data use. She is currently the executive director of the Responsible AI Institute, a nonprofit dedicated to mitigating harm and unintended consequences of AI systems. Previously, Casovan was director of data and digital for the Canadian government. She chairs the World Economic Forum’s Responsible AI Certification Working Group and is a responsible AI adviser to the U.S. Department of Defense in addition to her advisory work with other global organizations.

Voting History

Statement: Executives usually think of RAI as a technology issue.
Response: Neither agree nor disagree

“I don’t think that we can put executives into one category. I would say that this is sometimes a challenge; however, it really depends on the executive, their role, the culture in their organization, their experience with the oversight of other types of technologies, and competing priorities.

The biggest challenge arises when a responsible AI executive doesn’t understand both how AI technologies work and the social and human rights implications of building and deploying these systems. And if they lack firsthand experience, do they consult those who do?

I’ve experienced both ends of the spectrum. Some executives see RAI as just a technology issue that can be resolved with statistical tests or good-quality data. At the other end, I’ve seen executives who are responsible for leading RAI programs but don’t have oversight of the AI technologies their organization is deploying, which leads to generic requirements that leave AI practitioners with more questions than the RAI team has answers for.

The ideal scenario is to have shared responsibility through a comprehensive governance board representing business, technology, policy, legal, and other stakeholders.”
Statement: Mature RAI programs minimize AI system failures.
Response: Strongly agree

“While I strongly agree in theory, it is important to note that in the absence of a globally adopted definition or standard framework for what responsible AI involves, the answer should really be ‘It depends.’ At the Responsible AI Institute, we have developed a comprehensive framework grounded in the OECD’s AI principles. Our framework evaluates AI systems against bias and fairness, explainability and interpretability, robustness, data and system operations, consumer protection, and accountability.

By working to translate these principles into practical use cases, we look at the responsible use of AI from a technology perspective and a social or ethical perspective, and we look at the context in which the system is being operated, including the inputs used to train and run the model as well as any configuration changes.

If a company’s RAI program takes all of these factors into account and works to support or augment existing technology and business governance processes, then yes, an RAI program would certainly help to identify various points of failure within an AI system, including the data, model, or context in which that system is being used.”
Statement: Organizations should tie their responsible AI efforts to their corporate social responsibility efforts.
Response: Agree

“In an ideal world, I’d strongly agree. However, if the option existed, I would have said, ‘It depends.’ While I believe that AI practices should align with an organization’s corporate social responsibility efforts, it really depends on how seriously CSR is taken within an organization and whether it is used to help set the organization’s priorities.

I have seen many examples where CSR is not integrated strongly into corporate decision-making, so for that reason, I don’t think that responsible AI should be tied to CSR to the exclusion of other practices, like technology architecture and governance, regulatory compliance, ongoing monitoring, and communication and training.

Responsible AI is not just about raising awareness of the potential harms that can come from an AI system, or a company putting out a statement on how important the issue is. It is a practice akin to other forms of technology and business governance. Ideally, responsible AI is tied to an organization’s established environmental, social, and governance objectives, with regular and transparent reporting on those objectives to a strong CSR function.”
Statement: Responsible AI should be a part of the top management agenda.
Response: Strongly agree

“With the increased use of AI across all industries, it’s important that companies understand the potential impacts, both positive and adverse, related to the design and use of AI. Having a responsible AI governance program in place that includes corporately adopted principles, policies, and guidelines will help protect the company, its leadership, and its employees from unintended risks and corporate exposure. Additionally, creating training opportunities for the company’s leadership team will help raise necessary awareness of the potential opportunities and challenges, ensuring that AI is used as a force for good within the organization.”