Responsible AI / Panelist

Kartik Hosanagar

The Wharton School of the University of Pennsylvania

United States

Kartik Hosanagar is the John C. Hower Professor of Technology and Digital Business and a professor of marketing at The Wharton School of the University of Pennsylvania. His research focuses on the digital economy and the impact of analytics and algorithms on consumers and society. Hosanagar is a 10-time recipient of MBA and undergraduate teaching excellence awards at The Wharton School. He is a serial entrepreneur who most recently founded Jumpcut Media, a startup that uses data to democratize opportunities in film and TV. Hosanagar has served as a department editor at the journal Management Science and previously as a senior editor at the journals Information Systems Research and MIS Quarterly.

Learn more about Hosanagar’s approach to AI via the Me, Myself, and AI podcast.

Voting History

Statement: Companies should be required to make disclosures about the use of AI in their products and offerings to customers.
Response: Agree. “Given the potential for misuse of AI (such as using customer data to train AI without permission) and the potential for biases in AI, disclosure is important. In fact, I believe regulation should require disclosure in some settings, such as loan approvals, recruiting, and policing. In other settings, while disclosure need not be required by regulation, it is important from a consumer trust standpoint to disclose the use of AI, the kinds of data used to train it, and its purpose. The disclosures should cover the training data for models, their typical output, and their purpose. Models that use customer data are one example of the kind that should be disclosed.

A good case study on this is Zoom’s use of AI trained on customer conversations, the subsequent pushback from customers, and the company’s reevaluation of its AI disclosure policy.”
Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.
Response: Neither agree nor disagree. “Organizations are treading cautiously and expanding risk management capabilities, especially with regard to risks tied to data privacy and confidentiality. This is due to prior waves of regulation, such as GDPR, that established privacy best practices. However, there is a range of other risks, such as model collapse due to the explosion of synthetic data or prompt injection attacks. These risks and their implications are not well understood, let alone addressed by well-established best practices.”