Responsible AI / Panelist

Francesca Rossi

IBM

United States

Francesca Rossi is an IBM fellow at the T.J. Watson Research Center and the IBM AI Ethics global leader. In that role, she leads research projects to advance AI capabilities and cochairs the IBM AI Ethics board. Before joining IBM, she was a professor of computer science at the University of Padova, Italy. Rossi is a member of the board of directors of the Partnership on AI and serves on the steering committee of the Global Partnership on AI. She is a fellow of both the worldwide association of AI researchers (AAAI) and the European association (EurAI) and will serve as AAAI president for the 2022-2024 term.

Voting History

Statement Response
Executives usually think of RAI as a technology issue. Disagree “While this may have been true until a few years ago, most executives now understand that RAI means addressing sociotechnological issues that require sociotechnological solutions. These solutions need to be people-centered and include guidelines, education, training, processes, risk assessment frameworks, governance bodies, team diversity, and also technology. In two recent IBM studies on AI ethics (in 2018 and 2021), we saw a drastic increase in the percentage of companies that understand that RAI is a CEO- and board-level topic, not just a responsibility of the technical leaders in the company.”
Mature RAI programs minimize AI system failures. Strongly agree “AI systems fail when they are not deployed and used in a human-aligned way. Mature responsible AI programs, which include effective risk and impact assessment and mitigation, help minimize failure.”
RAI constrains AI-related innovation. Agree “Responsible AI development, deployment, and use allow all AI stakeholders to focus on beneficial innovation that is aligned with human values and supports human progress. Rather than imposing constraints, RAI provides proactive guidelines, direction, and purpose to advancing technology. This is the kind of AI-related innovation that we need to support and facilitate.”
Organizations should tie their responsible AI efforts to their corporate social responsibility efforts. Neither agree nor disagree “It is good to tie responsible AI efforts to corporate social responsibility, but responsible AI goes beyond that and also includes AI principles, tools, processes, governance, guidelines, education, and risk assessment. These efforts require a specific environment in which to be defined and operationalized, in coordination with all other related activities within a company.”
Responsible AI should be a part of the top management agenda. Strongly agree “Any company building, using, or deploying AI should make sure that AI systems are not only accurate but also trustworthy (fair, robust, explainable, etc.). To achieve this, a responsible AI strategy needs to be defined and used to support the whole AI ecosystem across the entire company. This strategy requires more than technical tools; it also includes governance frameworks, educational material, guidelines, best practices, methodologies, and culture change. People, not technology, should be at the center of the strategy — and not just the AI developers but everybody in their different roles.”