Rebecca Finlay is CEO at Partnership on AI (PAI), a global nonprofit that brings together a cross-sector community of over 100 partners in 17 countries to ensure that developments in artificial intelligence advance positive outcomes for people and society. Working at the intersection of technology and society, Finlay has held leadership roles in civil society, research organizations, and industry.
Before joining PAI, she founded the AI & Society program at the global research organization CIFAR, one of the first international multi-stakeholder initiatives on the impact of AI on society. Her insights have been featured in books and the media, including the Financial Times, The Guardian, Politico, and Nature Machine Intelligence. She has spoken at venues such as South by Southwest and the U.K. AI Safety Summit.
Finlay is a fellow of the American Association for the Advancement of Science and sits on advisory bodies in Canada, France, and the U.S. She holds degrees from McGill University and the University of Cambridge.
Voting History
Statement: General-purpose AI producers (e.g., DeepSeek, OpenAI, Anthropic) can be held accountable for how their products are developed.

Response: Agree
“General-purpose AI providers can be held accountable for their efforts to ensure that their models are developed safely, and this responsibility should be understood within the broader AI value chain. Safety is a team sport. Other important players, such as model adapters, hosting services, and application developers, play key roles in reducing potential harms from both open and closed foundation models. What can model providers do? A lot.
Model providers can curate and filter their training data to remove potentially harmful content. They can conduct internal evaluations of their models through red teaming and provide documentation, like model cards, to downstream developers. They can also offer safety tools, publish a responsible AI license, and implement digital signatures, steps that are currently less efficacious but still important. Once their model is deployed, model providers can monitor misuse and user feedback, and implement incident reporting and decommissioning policies. All entities in the AI value chain can work together to support informed and responsible use. As in other sectors, collaboration among providers, developers, policy makers, researchers, and civil society is a must-have for safety.”