David R. Hardoon is the CEO of Aboitiz Data Innovation and chief data and AI officer of Union Bank of the Philippines. He is concurrently the chief data and innovation officer of the Aboitiz Group and chief data officer of UnionDigital Bank. Previously, he was the Monetary Authority of Singapore’s first appointed chief data officer and head of the Data Analytics Group, as well as a special adviser on AI. Hardoon has a doctorate in computer science (machine learning) from the University of Southampton and a bachelor’s degree in computer science and AI from Royal Holloway, University of London.
Voting History
| Statement | Response |
| --- | --- |
| There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization. | Strongly agree. “The key word is ‘sufficient.’ Can there be more, and better, alignment? Yes. Will we uncover the need for further alignment? Most likely. However, do we have sufficient alignment on the fundamentals for companies to effectively implement RAI requirements across the organization? Absolutely. We simply need to get on with it, especially as governance and the majority of RAI requirements are not new, even if the instigating domain is.” |
| Companies should be required to make disclosures about the use of AI in their products and offerings to customers. | Strongly agree. “The foundation of an effective RAI framework is transparency. Just as disclosures are required in all facets of life, such as food products, materials, ESG, and CCTV monitoring, I believe it is vital to have AI-related disclosures — at the very least, the most basic one: that AI is indeed being used to power services and products.” |
| Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months. | Neither agree nor disagree. “The readiness of organizations to meet the requirements of the EU AI Act will depend on the clarity of the requirements as well as the definitions and penalties involved. I believe that readiness will correlate with clarity. Therefore, the question is whether the EU will be able to provide the level of clarity needed over the next 12 months. Conversely, in the absence of clear requirements, all organizations could claim to be ready and compliant.” |
| Organizations are sufficiently expanding risk management capabilities to address AI-related risks. | Disagree. “In general, I do not agree that organizations are sufficiently expanding risk management capabilities to address AI-related risks, for two underlying reasons: (1) The adoption of AI in operations/production is, in general, still limited; thus, the need to revisit or expand risk management capabilities, given that business as usual has seen no material change, is similarly limited. (2) The prevailing approach in organizations that are productizing/operationalizing AI is to first fit models into existing risk frameworks, particularly given that the majority of risks, such as data governance, quality, and operations, are not AI-specific. To expand risk capabilities, there is a need to review and understand specifically which AI risks differ from non-AI risks.” |
| As the business community becomes more aware of AI’s risks, companies are making adequate investments in RAI. | Disagree. “Adequate, as defined by Merriam-Webster, is ‘sufficient for a specific need or requirement ... also: good enough: of a quality that is good or acceptable.’ In regulated industries, or where the R in RAI is a regulated construct, then yes, perhaps. Largely, however, the AI and data technology industry is unregulated (with the exception of the vertical regulated industries in which the technology might be applied). Therefore, how can investments in RAI be broadly and generally considered adequate if (1) not all companies are subject to the same standards, (2) we have yet to agree on what is ‘sufficient’ or ‘acceptable,’ or (3) we don’t have a mechanism to verify third parties?” |
| The management of RAI should be centralized in a specific function (versus decentralized across multiple functions and business units). | Strongly agree. “RAI is governance, and the management of governance needs to be centralized in order to achieve its intended effectiveness of guidance and oversight. Furthermore, a centralized setup enables standardization, efficiencies of scale and scope, and lower coordination costs. The implementation and operationalization of governance, by contrast, needs to be decentralized and made the responsibility of each specific function.” |
| Most RAI programs are unprepared to address the risks of new generative AI tools. | Disagree. “Most RAI programs have the necessary foundations that cover the gamut of AI risks, generative or otherwise — such as context, materiality, human-in/over-the-loop, and explainability. The eloquence of an AI output should not charm us out of applying the appropriate controls and governance. I believe that any ‘unpreparedness’ would arise from how comprehensively these RAI programs are implemented, operationalized, and enforced.” |
| RAI programs effectively address the risks of third-party AI tools. | Neither agree nor disagree. “The considerations of a comprehensive responsible AI program would be agnostic with regard to the platform used for development. Thus, whether an AI system is developed internally or on a third-party tool, its governance assessment should be similar. However, I do not believe existing RAI programs extend their risk assessments to system-related risks, and therefore they would not necessarily cover the integration risks that may exist when dealing with third-party AI tools. These risks would usually be covered in policies related to third-party technology and systems integration.” |
| Executives usually think of RAI as a technology issue. | Agree. “I agree that executives usually think of RAI as a technology issue. The dominant approach undertaken by many organizations toward establishing RAI is a technological one, such as the implementation of platforms and solutions for the development of RAI. Similarly, the slant of the policies is on how AI technology can be used in a responsible manner. In order to elevate ourselves from viewing RAI as a technology issue, it’s important to view the challenges RAI surfaces as challenges that largely exist with or without AI.” |
| Mature RAI programs minimize AI system failures. | Agree. “Responsible AI is ultimately about the establishment of governance and controls. A mature RAI program, in my opinion, should cover the breadth of AI in terms of data and modeling for both development and operationalization, thus minimizing potential AI system failures.” |
| RAI constrains AI-related innovation. | Agree. “I interpret the ‘responsible’ in responsible AI as a control or oversight function that mitigates the possible less-than-favorable implications of applying AI. Everything must have balance: RAI constrains AI-related innovation much as traffic regulations constrain the maximum speed at which a car can be driven. Thus, does RAI constrain AI-related innovation? In the context of AI’s mathematical innovation, yes, to some extent. In the context of the application of AI, it must; otherwise, RAI would be nothing more than empty words. We constrain our development and application of AI to align with the relevant sociocultural contexts and scenarios, mitigating potential harm to ourselves and others.” |
| Organizations should tie their responsible AI efforts to their corporate social responsibility efforts. | Strongly agree. “We need to remember that, despite its sophistication, AI is just a tool. While there are circumstances where tools require a framework of responsibility, it is ultimately the application and use of the tool that bears responsibility. For example, it is the job of corporate social responsibility to ensure that any output, from AI or otherwise, is used in a justified manner.” |
| Responsible AI should be a part of the top management agenda. | Neither agree nor disagree. “Should responsible AI be a part of the top management agenda? The instinctive and seemingly obvious answer is yes. After all, it is a moral and social responsibility to embed the common traits of responsible AI, which are constructed from the principles of fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. I nonetheless hesitate to give that instinctive and straightforward ‘yes,’ as I believe it is important to first understand the broader approach an organization takes toward the traits of responsibility. What is the organization’s conduct and culture? Does it have existing expectations for aspects of fairness, privacy, inclusiveness, and so forth? The agenda should not be ‘responsible human’ and/or ‘responsible AI,’ but simply ‘responsible,’ with a focus on the methodology and governance that ensure the underpinning traits are upheld.” |