Responsible AI / Panelist

Andrew Strait

Ada Lovelace Institute

United Kingdom

Andrew Strait is an associate director at the Ada Lovelace Institute, responsible for its work addressing emerging technologies and industry practices. He has spent the past decade working at the intersection of technology, law, and society. Before joining Ada, he was an ethics and policy researcher at Google DeepMind, where he managed internal AI ethics initiatives and oversaw the company’s network of external partnerships. Previously, he worked as a legal operations specialist at Google, where he developed and implemented platform moderation policies for areas such as data protection, hate speech, terrorist content, and child safety.

Voting History

Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.

Response: Strongly disagree

“I disagree with this question for a few reasons.

1. There are no clear and consistent AI risk management practices. Recent research into AI auditing tools, for example, shows that the proliferation of these tools has not led organizations to use them effectively to meet accountability goals. The tools fail to enable meaningful action once an audit has been completed. Other research has shown that auditors experience a range of issues that prevent audits from delivering effective accountability. We are still in an era of testing and trialing different methods, and they are not yet proven to be effective.

2. Not all organizations are adopting these practices to address AI-related risks. Adoption is still ad hoc and concentrated among organizations well resourced to do this work. The lack of regulatory requirements to adopt risk management practices creates a perverse incentive: risk management is treated as a nice-to-have cost.

3. Even in organizations that are adopting these methods, it is unclear whether they achieve meaningful risk reduction. More transparency and research are needed to determine this.”