Triveni Gandhi is a Jill-of-all-trades data scientist, thought leader, and advocate for the responsible use of AI who likes to find simple solutions to complicated problems. As responsible AI lead at Dataiku, she builds and implements custom solutions to support the responsible and safe scaling of artificial intelligence. Previously, Gandhi worked as a data analyst at a large nonprofit dedicated to improving education outcomes in New York City. She has a doctorate in political science from Cornell University.
| Statement | Response | Comment |
|---|---|---|
| The management of RAI should be centralized in a specific function (versus decentralized across multiple functions and business units). | Agree | "Goals, values, and expectations for the safe and responsible development of AI need to come from a central source in order to provide consistent and clear guidelines across the organization. This ensures that all functions and business units are aligned to a cohesive governance and RAI vision and to any relevant regulations that may exist at the industry or locality level. In addition to providing these guidelines, the management of all AI through a central function provides clear accountability structures and tighter control over AI pipelines." |
| Most RAI programs are unprepared to address the risks of new generative AI tools. | Neither agree nor disagree | "The key concepts in responsible AI — such as trust, privacy, safe deployment, and transparency — can actually mitigate some of the risks of new generative AI tools. The general principles of any RAI program can already address baseline issues in how generative AI is used by businesses and end users, but that requires developers to think of those concerns before sharing a new tool with the public. As the pace of unregulated development increases, existing RAI programs may be unprepared in terms of the specialized tooling needed to understand and mitigate potential harm from new tools." |