Responsible AI / Panelist

Tom Mooney

Harman

United States

Tom Mooney is the global head of government affairs and geopolitics at Harman, an automotive technology and connected-audio company that is part of Samsung Electronics. He oversees full-spectrum corporate government affairs and geopolitical initiatives affecting Harman’s business interests and serves as the primary interface between Harman and Samsung on related matters. Before joining Harman, Mooney served a diverse array of government and commercial clients as a technology leader with Booz Allen Hamilton. He is a veteran of the U.S. Army’s 75th Ranger Regiment, where he served as an enlisted infantryman and completed numerous combat deployments in support of U.S. foreign policy and national security objectives.

Voting History

Statement: Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months.
Response: Neither agree nor disagree
“There are so many nuances and complexities that go into each organization’s maturity, business model, and overall risk exposure related to complying with the EU AI Act. My sense is that most, not all, organizations will prioritize taking an initial inventory of their exposure and then achieving the lowest common denominator to comply. However, as with other issues, such as privacy and sustainability, it’s likely we’ll see some companies plant their flag on AI and ‘over-comply’ to build and market a responsible AI platform as a competitive differentiator. Others, probably a fringe demographic, will play ‘wait and see’ and assess how political winds may or may not shift in an election year and, ultimately, how enforcement plays out. It took years for GDPR to ramp up its enforcement mechanism; if something similar plays out with the AI Act, that could buy companies more time to navigate a compliance approach.”

Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.
Response: Neither agree nor disagree
“Innovation always outpaces governance, but in the face of mounting regulatory, market, and social pressures, organizations are certainly paying attention to the risk environment and climate around AI. If they weren’t considering AI in their risk profiles before, they are now. Will their efforts be ‘sufficient’? Hard to say. There are many factors feeding into a company’s exposure profile that determine what is effective or sufficient. Will their efforts be compliant? Probably. More importantly, though, organizations need an AI risk management framework to avoid hype-cycle-driven decision-making extremes that create systemic shocks — in other words, an investment-paused-indefinitely Tyler Perry Studios/Sora-threat moment, or leadership overestimating and overinvesting in a belief that a simple AI injection melts away inefficiencies like an SG&A [selling, general, and administrative expenses] Ozempic.”