
Elizabeth Anne Watkins is a research scientist in the Social Science of Artificial Intelligence at Intel Labs and a member of Intel’s Responsible AI Advisory Council, where she applies social science methods to amplify human potential in human-AI collaboration. Her research on the design, deployment, and governance of AI tools has been published in leading academic journals and featured in Wired, MIT Technology Review, and Harvard Business Review. She was previously a postdoctoral fellow at Princeton University and holds a doctorate from Columbia University and a master’s degree from MIT.
Voting History
Statement | Response |
---|---|
As the business community becomes more aware of AI’s risks, companies are making adequate investments in RAI. | Disagree. “Although it’s difficult to generalize across all companies, there is always room for improvement in RAI practices, so I somewhat disagree that the business community is adequately invested. With the recent advent of generative AI, the potential benefits of these systems will grow, but it will take robust RAI programs to build systems that reduce their possible risks in order to truly amplify human potential. While we’ve seen a number of our industry peers also taking meaningful steps in the right direction, this is an effort that will take the industry as a whole to truly move the needle.” |
The management of RAI should be centralized in a specific function (versus decentralized across multiple functions and business units). | Agree. “All organizations function slightly differently, and what works for one might not work for another. At Intel, our centralized, multidisciplinary Responsible AI Advisory Council is responsible for conducting a rigorous review throughout the life cycle of an AI project. The goal is to assess potential ethical risks within AI projects and mitigate those risks as early as possible. Members of our RAI Council provide training, feedback, and support to the development teams and business units to ensure consistency and compliance with our principles across Intel. To foster durable RAI cultures, it’s also helpful to complement this central team with a strong network of champions who can advocate for RAI principles within teams and business units.” |
Most RAI programs are unprepared to address the risks of new generative AI tools. | Agree. “While generative AI tools are exciting systems that can make us more productive, they also raise concerns about impacts on the workforce and professions, toxicity, and bias, as well as concerns about labor sourcing for data labeling and resource demands. Transparency and explainability — that is, ensuring that stakeholders understand how a system has been built, how it generates outputs, and how their inputs lead to outputs — have been shown to be top concerns for generative AI systems. We cannot trust generative AI results without understanding the processes by which these systems work. As generative AI evolves, it is critical that humans remain at the center of this work and that organizations support the humans doing this work. Responsible AI begins with the design and development of systems. Organizational leadership must build robust infrastructure for both anticipating and addressing the risks of AI tools; bringing together multiple perspectives, backgrounds, and areas of expertise into spaces of shared deliberation; and ensuring close collaboration with development teams throughout the AI system development life cycle. Only then will we be equipped to build systems that can truly support and amplify human potential.” |