Responsible AI / Panelist

Triveni Gandhi



Triveni Gandhi is a Jill-of-all-trades data scientist, thought leader, and advocate for the responsible use of AI who likes to find simple solutions to complicated problems. As responsible AI lead at Dataiku, she builds and implements custom solutions to support the responsible and safe scaling of artificial intelligence. Previously, Gandhi worked as a data analyst at a large nonprofit dedicated to improving education outcomes in New York City. She has a doctorate in political science from Cornell University.

Voting History

Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.
Response: Agree

“In the past year, we have seen a steady increase in the number of frameworks, standards, and suggested policies around AI risk. Organizations are becoming more aware of the need to address those risks, and for larger enterprises, we see this reflected in procedural changes within the AI development life cycle. However, many organizations still struggle to effectively implement nuanced control over the different types of risk — especially when it comes to generative AI. Moving forward, companies will need to work with tools and vendors that can offer flexibility and adaptability as the technology and the associated risk management frameworks evolve.”

Statement: As the business community becomes more aware of AI’s risks, companies are making adequate investments in RAI.
Response: Neither agree nor disagree

“Though the conversation around responsible AI is growing day by day, adoption of and investment in these programs are not as widespread as expected. Among those companies that have taken the time to invest in RAI programs, there is wide variation in how these programs are actually designed and implemented. The lack of cohesive or clear expectations on how to implement or operationalize RAI values makes it difficult for organizations to start investing efficiently. As a result, there is no consistent approach to implementation across the companies that are making these investments.”

Statement: The management of RAI should be centralized in a specific function (versus decentralized across multiple functions and business units).
Response: Agree

“Goals, values, and expectations for the safe and responsible development of AI need to come from a central source in order to provide consistent and clear guidelines across the organization. This ensures that all functions and business units are aligned to a cohesive governance and RAI vision and to any relevant regulations that may exist at the industry or locality level. In addition to providing these guidelines, managing all AI through a central function creates clear accountability structures and tighter control over AI pipelines.”

Statement: Most RAI programs are unprepared to address the risks of new generative AI tools.
Response: Neither agree nor disagree

“The key concepts in responsible AI — such as trust, privacy, safe deployment, and transparency — can actually mitigate some of the risks of new generative AI tools. The general principles of any RAI program can already address baseline issues in how generative AI is used by businesses and end users, but that requires developers to think of those concerns before sharing a new tool with the public. As the pace of unregulated development increases, existing RAI programs may be unprepared in terms of the specialized tooling needed to understand and mitigate potential harm from new tools.”