Responsible AI / Panelist

Damini Satija

Amnesty International

United States

Damini Satija is a technology, human rights and public policy expert. She is the interim director of Amnesty Tech, the global human rights movement’s technology and human rights program, which she originally joined to set up the Algorithmic Accountability Lab (an interdisciplinary unit investigating the impact of artificial intelligence technologies on human rights). Amnesty Tech works across a range of areas, most notably spyware and cyberattacks, surveillance, state use of AI and automation, big tech and social media accountability, and children and young people’s rights in digital environments. Prior to her time at Amnesty International, Satija worked in a number of tech policy roles. Most recently, she was a senior policy adviser at the Centre for Data Ethics and Innovation, the UK government’s independent expert body on data and AI policy, and served as the UK’s policy expert on the Council of Europe’s committee on artificial intelligence and human rights.

Voting History

Statement: General-purpose AI producers (e.g., companies like DeepSeek, OpenAI, Anthropic) can be held accountable for how their products are developed.

Response: Strongly disagree

“In the current context, given no robust regulatory frameworks for the development of AI, including general-purpose AI, mechanisms for accountability are limited. While there are other legal frameworks in place, such as for data protection, consumer safety, and antitrust/competition policy, these are, by themselves, inadequate for ensuring that human rights, in particular, are respected, protected, and promoted throughout the life cycle of AI development. Accountability also relies on strong transparency standards, which are currently lacking.

The global discourse on AI governance is heavily weighted toward these producers self-governing through nonbinding principles on ethical or responsible use of AI. These can be rolled back with zero accountability, as we saw earlier in 2025, when Google lifted its internal ban on using AI for the development of weapons. In other industries where technology can have serious impacts on individual and community welfare or rights (food, pharmaceuticals, aviation), regulation and governance are not left to private-sector actors. AI should be no exception to this, nor should accountability rest on the worst-case scenario materializing, usually at the expense of marginalized groups.”