Responsible AI / Panelist

Rashmi Gopinath

B Capital Group

United States

Rashmi Gopinath is a general partner at B Capital Group, where she leads the fund’s global enterprise software practice across cloud infrastructure, cybersecurity, DevOps, and AI/machine learning. She was previously a managing director at M12, Microsoft’s venture fund, where she led investments in enterprise software and sat on several boards, including Synack, Innovaccer, Contrast Security, Frame, Unravel Data, and Incorta. Before joining M12, Gopinath was an investment director with Intel Capital and held operating roles at high-growth startups such as BlueData and Couchbase.

Voting History

Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.
Response: Disagree

“There are a number of AI-related risks stemming from a lack of model explainability, a lack of visibility into underlying training data, and a lack of transparency in the software supply chain. Moreover, regulatory changes will add compliance risks for enterprises using AI decisioning models in consumer-facing use cases. Today, most AI budgets go toward experimenting with large language models and acquiring talent rather than toward data privacy, security, and risk management solutions. It is highly likely that courts will hold enterprises responsible for damages caused to consumers by errors or risks in underlying AI systems. A case in point is the recent Air Canada incident, in which a tribunal ruled in favor of a customer after the airline’s chatbot offered a nonstandard discount that Air Canada then refused to honor, claiming its AI system had offered the discount by mistake. Penalties will be higher for consumer damages tied to biased or unethical decisions made by AI systems.”
Statement: As the business community becomes more aware of AI’s risks, companies are making adequate investments in RAI.
Response: Agree

“Two of the biggest risks related to AI and generative AI are data privacy and security concerns. Companies looking to adopt generative AI are seeking solutions that protect their data assets and do not require data to be uploaded or shared with foundation models.

Security concerns range from prompt injection attacks to foundation models being hacked and producing undesirable or inaccurate results. Addressing these security challenges in a disaggregated model environment will require a rethink of security products, driving increased investment from companies.”