Responsible AI / Panelist

Tae Wan Kim

Carnegie Mellon University

United States

Tae Wan Kim is an associate professor of business ethics and the Xerox Junior Faculty Chair at Carnegie Mellon's Tepper School of Business. He is also a faculty member of the Block Center for Technology and Society at Heinz College and of Carnegie Mellon's CyLab. Kim serves on multiple editorial boards and has been a committee member of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems and of the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society. He holds a doctorate in business ethics and legal studies from The Wharton School.

Voting History

Statement Response
Organizations should tie their responsible AI efforts to their corporate social responsibility efforts. Disagree “Much of CSR is strategy, but responsible AI is mandatory. It’s true that ethics can be used as a strategy for reputation management, and companies do so under the name of CSR. However, such a perspective is insufficient. Take Enron, for example: a company that engaged in several philanthropic (beyond-duty) activities while it was also recklessly infringing upon ethical duties around deception and manipulation. Of course, there are wide overlaps between what is right and what is profitable, which makes ethics a viable strategy. And yet some ethical values cannot be translated into the language of strategy as a specialty tool. If companies regard ethics purely as a strategy, they can fall into the trap of ‘If the only thing one has is a hammer, then everything starts to look like a nail.’ Relatedly, CSR efforts must be visible to customers, but much of responsible AI work is technical and difficult to explain to customers. Note that CSR has not been as viable in B2B as in B2C, because doing right and good in B2C is visible to customers, whereas that visibility does not exist in B2B.”
Responsible AI should be a part of the top management agenda. Neither agree nor disagree “How top management behaves makes a big impact on lower-ranking employees. But we need more evidence about whether having responsible AI as part of the top management agenda makes a positive or negative impact. Strengthening the independence of an internal ethics team is important, and there can be trade-offs. Consider a recent case at Google. Google, which probably has the largest ethical AI team, recently fired an African American woman, a machine learning researcher on the ethics team, because of her paper studying the ethical problems of large language models. This decision put Google into moral turmoil and provoked public outcry. The top management’s involvement with the ethics team turned out to be negative.”