Responsible AI / Panelist

Simon Chesterman

National University of Singapore

Singapore

Simon Chesterman is dean and Provost's Chair Professor at the National University of Singapore Faculty of Law, and senior director of AI governance at AI Singapore. He is also editor of the Asian Journal of International Law and co-president of the Law Schools Global League. Previously, he was global professor and director of the New York University School of Law's Singapore program, a senior associate at the International Peace Academy, and director of U.N. relations at the International Crisis Group. Chesterman has taught at several universities and is the author or editor of 21 books.

Voting History

Statement: Executives usually think of RAI as a technology issue.
Response: Agree
"Responsible AI is presently seen by many as 'nice to have.' Yet, like corporate social responsibility, sustainability, and respect for privacy, RAI is on track to move from being something for IT departments or communications teams to worry about to being a bottom-line consideration — a 'need to have.'"

Statement: Mature RAI programs minimize AI system failures.
Response: Strongly agree
"RAI focuses more on what AI should do than on what it can do. But if an organization is intentional about its use of AI systems, its adoption of human-centered design principles, and its testing to ensure that those systems do what they are supposed to do, the overall use of AI by that organization is going to be more effective as well as more legitimate."

Statement: RAI constrains AI-related innovation.
Response: Disagree
"AI is such a broad term that requirements that it be used 'responsibly' will have minimal impact on how the fundamental technology is developed. The purpose of RAI is to reap the benefits of AI while minimizing or mitigating the risks — designing, developing, and deploying AI in a manner that helps rather than harms humans. Arguments that this constrains innovation are analogous to saying that bans on cloning humans or editing their DNA constrain genetics."

Statement: Organizations should tie their responsible AI efforts to their corporate social responsibility efforts.
Response: Strongly disagree
"One of the long-standing concerns about corporate social responsibility was that it would locate questions of accountability in the marketing department rather than the legal or compliance department. Over the years, CSR has become a more serious enterprise, with meaningful reporting and targets. We now see larger ESG obligations and 'triple bottom line' reporting. But all this is distinct from responsible AI. There may be overlaps, but responsible AI involves narrower targets: to develop and deploy AI in a manner that benefits humanity. A particular challenge is the many unknown unknowns in AI, meaning that acting responsibly may sometimes involve conditions of uncertainty and self-restraint rather than adherence to externally set metrics."

Statement: Responsible AI should be a part of the top management agenda.
Response: Agree
"Not every industry will be transformed by AI. But most will be. Ensuring that the benefits of AI outweigh the costs requires a mix of formal and informal regulation, top-down as well as bottom-up. Governments will be a source of regulations with teeth. As industries have discovered in the context of data protection, however, the market can also punish failures to manage technology appropriately."