Responsible AI / Panelist

David R. Hardoon

UnionBank of the Philippines


David R. Hardoon is managing director at Aboitiz Data Innovation, chief data and innovation officer at Aboitiz Group, chief data and AI officer at UnionBank of the Philippines, and chief data officer of UnionDigital. He is concurrently an external adviser to Singapore’s Corrupt Practices Investigation Bureau and its Central Provident Fund board. Previously, Hardoon was the Monetary Authority of Singapore’s first appointed chief data officer and head of its data analytics group, as well as a special adviser on AI. Hardoon has a doctorate in computer science from the University of Southampton.

Voting History

Statement Response
Executives usually think of RAI as a technology issue. Agree “I agree that executives usually think of RAI as a technology issue. The dominant approach many organizations take toward establishing RAI is a technological one, such as implementing platforms and solutions for RAI development. Similarly, policies tend to focus on how AI technology can be used responsibly. To elevate ourselves from viewing RAI as a technology issue, it’s important to view the challenges RAI surfaces as challenges that largely exist with or without AI.”
Mature RAI programs minimize AI system failures. Agree “Responsible AI is ultimately about the establishment of governance and controls. A mature RAI program, in my opinion, should cover the breadth of AI in terms of data and modeling for both development and operationalization, thus minimizing potential AI system failures.”
RAI constrains AI-related innovation. Agree “I interpret the nature of ‘responsible’ in responsible AI as a control or oversight function mitigating the possible less-than-favorable implications of the application of AI. Everything must have balance: RAI constrains AI-related innovation as much as traffic regulations constrain the maximum speed at which a car can be driven on roads.

Thus, does RAI constrain AI-related innovation? In the context of AI’s mathematical innovation, yes, to some extent. In the context of the application of AI — it must. Otherwise, RAI would be nothing more than empty words. We constrain our development and application of AI to align with relevant sociocultural contexts and scenarios, mitigating potential harm to ourselves and others.”
Organizations should tie their responsible AI efforts to their corporate social responsibility efforts. Strongly agree “We need to remember that, despite its sophistication, AI is just a tool. While there are circumstances where tools require a framework of responsibility, it is ultimately the application and/or use of the tool that bears the responsibility. For example, it is the job of corporate social responsibility to ensure that any output, from AI or otherwise, is used in a justified manner.”
Responsible AI should be a part of the top management agenda. Neither agree nor disagree “Should responsible AI be a part of the top management agenda? The instinctive and seemingly obvious answer is ‘yes.’ After all, it is a moral and social responsibility to embed the common traits of responsible AI that are constructed from the principles of fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

I nonetheless hesitate with the instinctive and straightforward ‘yes,’ as I believe it is important to first understand the broader approach that an organization takes toward the traits of responsibility. What is the organization’s conduct and culture? Does it have existing expectations for aspects of fairness, privacy, inclusiveness, and so forth?

The agenda should not be responsible human and/or responsible AI, but simply responsible, with a focus on the methodology and governance to ensure that the underpinning traits are upheld.”