Triveni Gandhi is a Jill-of-all-trades data scientist, thought leader, and advocate for the responsible use of AI who likes to find simple solutions to complicated problems. As responsible AI lead at Dataiku, she builds and implements custom solutions to support the responsible and safe scaling of artificial intelligence. Previously, Gandhi worked as a data analyst at a large nonprofit dedicated to improving education outcomes in New York City. She has a doctorate in political science from Cornell University.
Voting History
| Statement | Response | Comment |
|---|---|---|
| Companies should be required to make disclosures about the use of AI in their products and offerings to customers. | Agree | “Disclosing the use of AI to customers is a cornerstone of transparency in an ever-evolving landscape. While the term AI can have many different meanings, organizations should let customers know when predictive modeling, generative AI, or an autonomous AI agent is impacting their ability to access goods and services. This disclosure should be evident to the customer who will be affected by the use of AI and provide a method for recourse if the customer feels the outcome is inaccurate or biased.” |
| Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months. | Neither agree nor disagree | “While doing so is a daunting task, more and more organizations are putting the right structures, processes, and tooling into place to support adherence to the upcoming regulation. One key aspect of preparation is the ability to bring diverse teams to the table to enable both technical and nontechnical teams to design, develop, and deploy AI in new ways. By involving stakeholders from IT, data/analytics, compliance, and the business early on, organizations will create new ways of working and effective change management practices. However, this work requires alignment and strategic focus that some organizations are lacking, so garnering executive buy-in will be important for those organizations that have not started these discussions yet.” |
| Organizations are sufficiently expanding risk management capabilities to address AI-related risks. | Agree | “In the past year, we have seen a steady increase in the number of frameworks, standards, and suggested policies around AI risk. Organizations are becoming more aware of the need to address those risks, and for larger enterprises, we see this reflected in procedural changes within the AI development life cycle. However, many organizations still struggle to effectively implement nuanced control over the different types of risk — especially when it comes to generative AI. Moving forward, companies will need to work with tools and vendors that can offer flexibility and adaptability as the technology and the associated risk management frameworks evolve.” |
| As the business community becomes more aware of AI’s risks, companies are making adequate investments in RAI. | Neither agree nor disagree | “Though the conversation around responsible AI is growing day by day, adoption of and investment in these programs are not as widespread as expected. Among those companies that have taken the time to invest in RAI programs, there is wide variation in how these programs are actually designed and implemented. The lack of cohesive or clear expectations on how to implement or operationalize RAI values makes it difficult for organizations to start investing efficiently. As a result, even companies that are making these investments take inconsistent approaches to implementation.” |
| The management of RAI should be centralized in a specific function (versus decentralized across multiple functions and business units). | Agree | “Goals, values, and expectations for the safe and responsible development of AI need to come from a central source in order to provide consistent and clear guidelines across the organization. This ensures that all functions and business units are aligned to a cohesive governance and RAI vision and to any relevant regulations that may exist at the industry or locality level. In addition to providing these guidelines, the management of all AI through a central function provides clear accountability structures and tighter control over AI pipelines.” |
| Most RAI programs are unprepared to address the risks of new generative AI tools. | Neither agree nor disagree | “The key concepts in responsible AI — such as trust, privacy, safe deployment, and transparency — can actually mitigate some of the risks of new generative AI tools. The general principles of any RAI program can already address baseline issues in how generative AI is used by businesses and end users, but that requires developers to think of those concerns before sharing a new tool with the public. As the pace of unregulated development increases, existing RAI programs may be unprepared in terms of the specialized tooling needed to understand and mitigate potential harm from new tools.” |