Responsible AI / Panelist

Jaya Kolhatkar

Hulu

United States

Jaya Kolhatkar is chief data officer at Hulu and executive vice president of data for Disney Streaming, where she oversees customer intelligence and all data- and analytics-related efforts. She previously served as the senior vice president of global data and analytics platforms at Walmart Labs and was the chief analytics officer and cofounder of the predictive analytics firm Inkiru, which Walmart Labs acquired. Kolhatkar has also held analytics roles at PayPal, eBay, and Amazon. She has an MBA from Villanova University.

Voting History

Statement Response
Most RAI programs are unprepared to address the risks of new generative AI tools. Agree “Recent developments in generative AI have produced general-purpose tools that organizations may not have considered as part of their larger responsible AI initiatives. These new AI tools are very nascent, and organizations will have to evaluate and strategize how to understand their uses and misuses on a deeper level.”
RAI programs effectively address the risks of third-party AI tools. Agree “As a larger company, we have the luxury of leveraging in-house AI efforts. We have chosen to build out that capability in lieu of using third-party tools. For companies that rely on third-party tools, these can definitely provide an advantage, as such tools are focused on RAI.

I define third-party tools in two ways:

1. A service/provider that takes an organization’s data and runs prebuilt models to operationalize tasks or insights. It typically operates as a more opaque system.

2. Tools that provide infrastructure to build custom AI solutions. I see these as platforms with no inherent ability to influence the RAI efforts other than to create ease and efficiency in developing them.”
Executives usually think of RAI as a technology issue. Strongly agree “Executives should consistently and proactively think about how they are leveraging RAI. AI and responsible AI are intertwined, and as executives think about RAI, they should continue to embed it strategically in their technology and overall company goals.”
Mature RAI programs minimize AI system failures. Strongly agree “A prerequisite to building responsible AI is a strong thought process that allows an organization to assess and prepare for potential failures. To run a responsible AI program correctly, there needs to be a mechanism that catches failures quickly and a strong QA process to minimize overall issues.”
RAI constrains AI-related innovation. Disagree “Responsible AI should be embedded within AI innovation. Scalability and progress become limited if you do not have responsibility integrated in every step of innovation.”
Organizations should tie their responsible AI efforts to their corporate social responsibility efforts. Neither agree nor disagree “Tying responsible AI efforts to a company’s corporate social responsibility efforts largely depends on whether the company’s use of AI is linked to its social responsibility. Each company needs to evaluate this relationship case by case, based on its charter.”
Responsible AI should be a part of the top management agenda. Agree “At Disney Streaming, top management should be cognizant of the need for responsible AI and have regular check-ins to make sure that business teams are aware of the need. Given that our storytelling has the power to shape views and impact society, we need to ensure that our use of AI represents both our content and the audience inclusively.”