
Amit Shah

Instalily.ai

United States

Amit Shah is the founder and CEO of Instalily.ai, a stealth autonomous AI startup. Previously, he was president of 1-800-Flowers.com. He has been on the Mobile Marketing Association’s board of directors and executive committee since 2013 and joined Blue Apron’s board of directors in 2022. In 2019, he was named CMO of the Year for North America by the Consumer Goods Institute. Shah has also spoken extensively about emerging products and technologies and has had his work featured in numerous publications, including The New York Times, The Wall Street Journal, and Forbes. He has a master’s degree in liberal arts from Harvard University and a bachelor’s degree from Bowdoin College.

Learn more about Shah’s approach to AI via the Me, Myself, and AI podcast.

Voting History

Statement: There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization.

Response: Strongly agree

“Let’s be real: There is enough international alignment on emerging RAI standards for companies to stop making excuses. With frameworks like the EU’s AI Act, the OECD AI Principles, and even China’s AI governance codes converging, there is an emerging playbook for ethical AI. If you are not implementing RAI globally, that is a choice, not the result of a regulatory gap. In fact, waiting for ‘perfect’ alignment is a cop-out. Global companies thrive in complex regulatory environments all the time; AI governance should be no different.

The narrative that there’s no alignment is simply outdated. From ISO standards to cross-border collaborations, the toolkit is there for any serious stakeholder. Sure, there will always be nuanced differences, but forward-thinking organizations are already integrating RAI into their core strategies. If you’re not ready to act now, you’ll be left behind in the global AI race.”
Statement: Companies should be required to make disclosures about the use of AI in their products and offerings to customers.

Response: Disagree

“Mandatory AI disclosures would impede innovation and overburden businesses, especially smaller ones. Rapidly evolving AI technology means that requirements could quickly become outdated, leading to high compliance costs and legal risks. Such disclosures would clutter interfaces and create unnecessary confusion, potentially overwhelming average consumers who lack the technical understanding to interpret this information meaningfully. Consider common AI applications like email spam filtering and search engines: Users care more about functionality than underlying technology. Mandating AI disclosures for these ubiquitous uses could lead to notification fatigue, diminishing the impact of more critical disclosures.

A nuanced approach focusing on specific high-risk AI applications would better balance innovation, business interests, and consumer protection. This could involve industry self-regulation, voluntary transparency, or targeted regulations for AI in sensitive domains. Ultimately, fostering AI literacy through education and promoting responsible AI development practices would be more effective than mandatory disclosures in ensuring ethical AI use and protecting consumer interests.”
Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.

Response: Neither agree nor disagree

“As a board member with deep operational experience and now as the founder of an AI startup, I’m intimately familiar with the tightrope walk between seizing AI’s opportunities and mitigating its risks. We’re in an era where AI can be a catalyst for breakthroughs, driving efficiency and innovation and offering a competitive edge in the market. Yet it’s not without its perils. Data privacy, ethical concerns, and potential biases are just the tip of the iceberg.

The key is to navigate these waters with a strategic mindset, integrating AI governance tools and systems, like those from Credo AI and Ketch, to ensure that AI initiatives align with regulatory standards and ethical norms. It’s about fostering a culture of responsible innovation, where we’re constantly evaluating the impact of our AI systems and ensuring that they serve our stakeholders’ interests and uphold the company’s values. This balance isn’t achieved overnight; it requires ongoing dialogue, education, and adaptation. It’s a dynamic equilibrium that is still in its early stages.”