Responsible AI / Panelist

Ashley Casovan

Responsible AI Institute

United States

Ashley Casovan is an expert on the intersection of responsible AI governance, international quality standards, and safe technology and data use. She is currently the executive director of the Responsible AI Institute, a nonprofit dedicated to mitigating harm and unintended consequences of AI systems. Previously, Casovan was director of data and digital for the Canadian government. She chairs the World Economic Forum’s Responsible AI Certification Working Group and is a responsible AI adviser to the U.S. Department of Defense in addition to her advisory work with other global organizations.

Voting History

Statement: Organizations should tie their responsible AI efforts to their corporate social responsibility efforts.
Response: Agree
“In an ideal world, I’d strongly agree. However, if the option existed, I would have said, ‘It depends.’ While I believe that AI practices should align with an organization’s corporate social responsibility efforts, it really depends on how seriously CSR is taken within an organization and whether it is used to help set the organization’s priorities. I have seen many examples where CSR is not integrated strongly into corporate decision-making, so I don’t think that responsible AI should be tied to CSR to the exclusion of other practices, such as technology architecture and governance, regulatory compliance, ongoing monitoring, and communication and training. Responsible AI is not just about raising awareness of the potential harms that can come from an AI system, or about a company putting out a statement on how important the issue is. It is a practice akin to other forms of technology and business governance. Ideally, responsible AI is tied to an organization’s established environmental, social, and governance objectives, with regular and transparent reporting against those objectives to a strong CSR function.”
Statement: Responsible AI should be a part of the top management agenda.
Response: Strongly agree
“With the increased use of AI throughout all industries, it’s important that companies understand the potential impacts, both positive and adverse, related to the design and use of AI. Having a responsible AI governance program in place that includes corporately adopted principles, policies, and guidelines will ensure that the company, its leadership, and its employees are protected from unintended risks and corporate exposure. Additionally, creating training opportunities for the company’s leadership team will help to raise necessary awareness of the potential opportunities and challenges, ensuring that AI is used as a force for good within the organization.”