Responsible AI / Panelist

Simon Chesterman

National University of Singapore

Singapore

Simon Chesterman is David Marshall Professor and vice provost at the National University of Singapore, and senior director of AI governance at AI Singapore. He is also editor of the Asian Journal of International Law and co-president of the Law Schools Global League. Previously, he was global professor and director of the New York University School of Law’s Singapore program, a senior associate at the International Peace Academy, and director of U.N. relations at the International Crisis Group. Chesterman has taught at several universities and is the author or editor of 21 books.

Voting History

Statement: As the business community becomes more aware of AI’s risks, companies are making adequate investments in RAI.
Response: Disagree
“The gold rush around generative AI has led to a downsizing of safety and security teams in tech companies and a shortened path to market for new products. This is driven primarily by the perceived benefits of AI, but the risks are not hard to see. In the absence of certainty about who will bear the costs of those risks, fear of missing out is triumphing — in many organizations, if not all — over risk management. A key question for AI governance and ethics in the coming years will be structural: Where in the organization is AI risk assessed? If it is left to IT or, worse, marketing, it will be hard to justify investments in RAI. I suspect it will take a few major scandals to drive a realignment, analogous to some of the big data breaches that elevated data protection from ‘nice to have’ to ‘need to have.’”

Statement: The management of RAI should be centralized in a specific function (versus decentralized across multiple functions and business units).
Response: Neither agree nor disagree
“Of course, it depends. AI is increasingly going to be deployed across entire business ecosystems. Rather than being confined to an IT department, it will be more like finance: Though many organizations have chief financial officers, responsibility for financial accountability isn’t limited to him or her. Strategic direction and leadership may reside in the C-suite, but operationalizing RAI will depend on those deploying AI solutions to ensure appropriate levels of human control and transparency so that true responsibility is even possible.”

Statement: Most RAI programs are unprepared to address the risks of new generative AI tools.
Response: Strongly disagree
“Any RAI program that is unable to adapt to changing technologies wasn’t fit for purpose to begin with. The ethics and laws that underpin responsible AI should be, as far as possible, future-proof — able to accommodate changing tech and use cases. Moreover, generative AI itself isn’t the problem; it’s the purposes for which it is deployed that might cross those ethical or legal lines.”

Statement: RAI programs effectively address the risks of third-party AI tools.
Response: Strongly disagree
“We’re still at the early stages of AI adoption, but one of the biggest problems is that we don’t know what we don’t know. The opacity of machine learning systems in particular makes governance of those black boxes challenging for anyone. That can be exacerbated by the plug-and-play attitude adopted with respect to many third-party tools.”

Statement: Executives usually think of RAI as a technology issue.
Response: Agree
“Responsible AI is presently seen by many as ‘nice to have.’ Yet, like corporate social responsibility, sustainability, and respect for privacy, RAI is on track to move from being something for IT departments or communications to worry about to being a bottom-line consideration — a ‘need to have.’”

Statement: Mature RAI programs minimize AI system failures.
Response: Strongly agree
“RAI focuses more on what AI should do than on what it can do. But if an organization is intentional about its use of AI systems, its adoption of human-centered design principles, and its testing to ensure that those systems do what they are supposed to, that organization’s overall use of AI is going to be more effective as well as more legitimate.”

Statement: RAI constrains AI-related innovation.
Response: Disagree
“AI is such a broad term that requirements that it be used ‘responsibly’ will have minimal impact on how the fundamental technology is developed. The purpose of RAI is to reap the benefits of AI while minimizing or mitigating the risks — designing, developing, and deploying AI in a manner that helps rather than harms humans. Arguments that this constrains innovation are analogous to saying that bans on cloning humans or editing their DNA constrain genetics.”

Statement: Organizations should tie their responsible AI efforts to their corporate social responsibility efforts.
Response: Strongly disagree
“One of the longstanding concerns about corporate social responsibility was that it would locate questions of accountability in the marketing department rather than the legal or compliance department. Over the years, CSR has become a more serious enterprise, with meaningful reporting and targets. We now see larger ESG obligations and ‘triple bottom line’ reporting. But all of this is distinct from responsible AI. There may be overlaps, but responsible AI involves narrower targets: to develop and deploy AI in a manner that benefits humanity. A particular challenge is the many unknown unknowns in AI, meaning that acting responsibly may sometimes require self-restraint under conditions of uncertainty rather than adherence to externally set metrics.”

Statement: Responsible AI should be a part of the top management agenda.
Response: Agree
“Not every industry will be transformed by AI. But most will be. Ensuring that the benefits of AI outweigh the costs requires a mix of formal and informal regulation, top-down as well as bottom-up. Governments will be a source of regulations with teeth. As industries have discovered in the context of data protection, however, the market can also punish failures to manage technology appropriately.”