Responsible AI / Panelist

Elizabeth Anne Watkins

Intel Labs

U.S.

Elizabeth Anne Watkins is a research scientist in the Social Science of Artificial Intelligence at Intel Labs and a member of Intel’s Responsible AI Advisory Council, where she applies social science methods to amplify human potential in human-AI collaboration. Her research on the design, deployment, and governance of AI tools has been published in leading academic journals and has been featured in Wired, MIT Technology Review, and Harvard Business Review. She was previously a postdoctoral fellow at Princeton and has a doctorate from Columbia University and a master’s degree from MIT.

Voting History

Statement
Most RAI programs are unprepared to address the risks of new generative AI tools.

Response
Agree

“While generative AI tools are exciting systems that can make us more productive, they also raise concerns about impacts on the workforce and professions, toxicity, and bias, as well as about labor sourcing for data labeling and resource demands. Transparency and explainability — that is, ensuring that stakeholders understand how a system has been built, how it generates outputs, and how their inputs lead to outputs — have been shown to be top concerns for generative AI systems. We cannot trust generative AI results without understanding the processes by which these systems work. As generative AI evolves, it is critical that humans remain at the center of this work and that organizations support the humans doing it.

Responsible AI begins with the design and development of systems. Organizational leadership must build robust infrastructure for anticipating and addressing the risks of AI tools; bringing together multiple perspectives, backgrounds, and areas of expertise into spaces of shared deliberation; and ensuring close collaboration with development teams throughout the AI system development life cycle. Only then will we be equipped to build systems that can truly support and amplify human potential.”