Rashmi Gopinath is a general partner at B Capital Group, where she leads the fund’s global enterprise software practice across cloud infrastructure, cybersecurity, DevOps, and AI/machine learning. She was previously a managing director at M12, Microsoft’s venture fund, where she led investments in enterprise software and sat on several boards, including Synack, Innovaccer, Contrast Security, Frame, Unravel Data, and Incorta. Before joining M12, Gopinath was an investment director with Intel Capital and held operating roles at high-growth startups such as BlueData and Couchbase.
Voting History
Statement | Response
---|---
Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months. | Disagree. “The EU AI Act is the first comprehensive regulatory standard for the use of AI worldwide, created to ensure that AI systems comply with ethical principles, safety requirements, and user rights. One of the biggest challenges with current GenAI technologies is the lack of integrated safety and regulatory checks within these products to ensure that the decisions and outcomes these systems produce are free of bias and safety risks.<br><br>The lack of transparency around the training data sets used for currently available large language models, along with the use of those data sets without adequate validation of their accuracy, leads these models to produce inaccurate results and to hallucinate when training data is missing. These challenges can be addressed by creating smaller, contextual models that are rigorously trained on proprietary, comprehensive data sets relevant to a specific industry or use case. We also need to see more vendors offer integrated solutions that minimize bias, enhance explainability, and mitigate hallucinations in order to create trusted GenAI solutions that meet the requirements of AI regulations such as the EU AI Act.”
Organizations are sufficiently expanding risk management capabilities to address AI-related risks. | Disagree. “There are a number of AI-related risks stemming from the lack of model explainability, the lack of visibility into underlying training data, and the lack of transparency in the software supply chain. Moreover, regulatory changes will add compliance risks for enterprises that apply AI decisioning models to consumer use cases. Today, most AI budgets go toward experimenting with large language models and acquiring talent rather than investing in data privacy, security, and risk management solutions. It is highly likely that courts will hold enterprises liable for damages caused to consumers by errors or risks in underlying AI systems. A case in point is the recent Air Canada incident, in which a tribunal ruled in favor of a customer after the airline’s AI chatbot offered a nonstandard discount that Air Canada then refused to honor, claiming its AI system had offered the discount by mistake. Penalties will grow for consumer damages tied to biased or unethical decisions made by AI systems.”
As the business community becomes more aware of AI’s risks, companies are making adequate investments in RAI. | Agree. “Two of the biggest risks related to AI and generative AI are data privacy and security. Companies looking to adopt generative AI are seeking solutions that protect their data assets and do not require data to be uploaded to or shared with foundation models.<br><br>Security concerns range from prompt injection attacks to foundation models being hacked and producing undesirable or inaccurate results. Addressing these security challenges in a disaggregated model environment will require a rethink of security products, which is driving increased investment from companies.”