Shilpa Prasad is an experienced leader in the startup ecosystem, with a strong passion for entrepreneurship and corporate startup engagement and innovation. She brings over 15 years of global experience in corporate startup strategy and venture building to her role as entrepreneur in residence at LG Nova, where she helps the company identify market trends and build partnerships. Prasad has been instrumental in driving open innovation projects across corporations, governments, and accelerators, and in establishing new partnerships across the global startup ecosystem. An experienced founder, she has also contributed her time to mentoring and advising startups.
Learn more about Prasad’s approach to AI via the Me, Myself, and AI podcast.
Voting History
Statement | Response
---|---
Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months. | Strongly agree. "The penalties for noncompliance with the EU AI Act are significant and can have a severe impact on the provider's or deployer's business. The range is anywhere from 7.5 million euros to 35 million euros [$8 million to $37 million], or 1% to 7% of the global annual turnover, depending on the severity of the infringement. Hence, stakeholders will be sure to understand the AI Act fully and comply with its provisions. <br><br> The reality is that it may happen in phases based on the risk levels associated with whether the organization is a provider, deployer, importer, distributor, or affected person of AI systems."
Organizations are sufficiently expanding risk management capabilities to address AI-related risks. | Disagree. "AI technologies have significant potential to transform society and people's lives, from commerce and health to transportation and cybersecurity to the environment and our planet. AI technologies, however, also pose risks that can negatively impact stakeholders. Like risks associated with other technologies, AI risks can emerge in a variety of ways and can be characterized as long or short term, high or low probability, systemic or localized, and high or low impact. <br><br> While there are standards and best practices to help organizations mitigate the risks of traditional software or information-based systems, the risks posed by AI systems are in many ways unique. AI systems, for example, may be trained on data that can change over time, sometimes significantly and unexpectedly, affecting system functionality and trustworthiness in ways that are hard to understand. AI systems and the contexts in which they are deployed are frequently complex, making it difficult to detect and respond to failures when they occur. And AI systems are inherently sociotechnical, influenced by societal dynamics and human behavior."