
Alyssa Lefaivre Škopac

Alberta Machine Intelligence Institute (Amii)

Canada

Alyssa Lefaivre Škopac serves as the Director of AI Trust & Safety at the Alberta Machine Intelligence Institute (Amii), one of Canada’s foremost hubs for AI and machine learning research. In this role, she works to advance the ethical and responsible deployment of AI technologies, fostering public trust and advocating for robust AI governance.

Lefaivre Škopac plays a key role in Amii’s contributions to the Canadian Artificial Intelligence Safety Institute (CAISI), a national initiative focused on mitigating the risks of advanced AI systems and promoting trustworthy AI development. She provides strategic leadership and engages with global stakeholders to enhance Amii’s impact on international AI safety efforts.

With over 15 years of experience in partnership development in the emerging tech sector, Lefaivre Škopac is recognized for her ability to build high-impact collaborations. She previously led global partnerships at the Responsible AI Institute and currently serves as a senior policy adviser at the Institute for Security and Technology (IST), where she advises on AI governance and security policy.

Voting History

Statement: General-purpose AI producers (e.g., companies like DeepSeek, OpenAI, Anthropic) can be held accountable for how their products are developed.

Response: Neither agree nor disagree

Comment: “General-purpose AI producers can be held accountable — at least in theory. Existing laws on data protection, antidiscrimination, and product liability apply in some cases, and legal challenges (like copyright disputes) are testing these boundaries. But broader accountability remains unclear. Even the EU AI Act isn’t being fully enforced yet, and no major jurisdiction has a comprehensive regulatory framework. The legal system is lagging behind AI’s rapid development, leaving companies operating in a gray area.

That said, the fact that accountability isn’t fully settled means there’s room to shape it in a fair and effective way. Right now, much of the focus is on AI’s risks and harms, but accountability should also recognize where AI is built responsibly and delivering value. The challenge is that incentives are out of sync — AI is treated as a race, not a shared responsibility, making reasonable and fit-for-purpose guardrails harder to implement. Even if we could agree on what companies should be accountable for, whose values? Which standards? Who enforces them? Current geopolitical discourse doesn’t incentivize enforcement. Meaningful accountability will remain ambiguous and an uphill battle.”