Responsible AI / Panelist

Vipin Gopal

Eli Lilly and Company

United States

Vipin Gopal is the chief data and analytics officer at Eli Lilly, where he leads transformational data strategy and execution and the development of advanced analytics and data science solutions. Gopal was previously senior vice president of analytics at Humana and has also led analytics at Cigna Healthcare, United Technologies, and Honeywell. He is the founding chair of the Indianapolis Chief Data Officer Forum and a member of Magellan Health’s advisory board. Gopal holds a doctorate in engineering from Carnegie Mellon University and an MBA from the New York University Stern School of Business.

Voting History

Statement: Executives usually think of RAI as a technology issue.
Response: Neither agree nor disagree

“There is increasing recognition that RAI is a broader business issue rather than a pure tech issue. Many organizations have made that transition or are in the process of doing so. Many others, primarily those in the earlier stages of AI maturation, have yet to make this journey.

It is only a matter of time before the vast majority of organizations consider RAI to be a business topic and manage it as such.”
Statement: RAI constrains AI-related innovation.
Response: Disagree

“The scope of meaningful and valuable innovation has never been unbounded, and the same applies to AI-related innovation.

Responsible AI enables responsible innovation. One can make this argument regardless of which dimension of responsible AI is being considered. For example, take bias and fairness: it would be hard to argue that a biased and unfair AI algorithm powers better innovation than the alternative. Similar observations can be made about other dimensions of responsible AI, such as security and reliability. In short, responsible AI is a key enabler in ensuring that AI-related innovation is meaningful and benefits society at large.”
Statement: Organizations should tie their responsible AI efforts to their corporate social responsibility efforts.
Response: Strongly agree

“Many dimensions of responsible AI reflect companies’ accountability to themselves and to their stakeholders and hence are fundamentally linked to CSR. For example, building and deploying AI systems that deliver fairness and diversity — fundamental components of the overall responsible AI framework — aligns deeply with CSR objectives. One can make a similar observation about other dimensions of responsible AI, such as privacy, security, and reliability.

Responsible AI is not just a technical topic for technologists to solve. By tying it to CSR, it becomes a broader business and societal issue to address and, if one takes it even further, an opportunity to make a positive impact across the board. Organizations will benefit from recognizing the underlying commonalities between responsible AI and CSR while not making one a subset of the other.”
Statement: Responsible AI should be a part of the top management agenda.
Response: Agree

“With the recent increase in the development and deployment of AI solutions, it is important that organizations adopt frameworks and principles that enable responsible AI. Democratization of data and analytics is on the rise in organizations. As this trend evolves, it is critical that adequate controls are also in place for the responsible use of data and the corresponding algorithm development. Chief data, analytics, or AI officers should lead the charge for the responsible AI framework within their respective companies, with appropriate organizational constructs to ensure compliance across the company.”