Responsible AI / Panelist

Aisha Naseer

Huawei Technologies Co. (UK) Ltd.

United Kingdom

Aisha Naseer is director of research at Huawei Technologies (UK), providing guidance around the standardization of technology development and research within the realms of AI ethics, trustworthiness, and data governance. She also contributes to strategic planning and direction for Huawei’s AI corporate strategy in the U.K. and the European Union. Previously, Naseer led Fujitsu Research of Europe’s AI ethics research program. She is a founding editorial board member of the journal AI and Ethics and was recognized as one of the top 100 Brilliant Women in AI Ethics in 2022.

Voting History

Statement Response
As the business community becomes more aware of AI’s risks, companies are making adequate investments in RAI. Neither agree nor disagree “Even though AI’s benefits outweigh its potential harms, there is raised awareness of RAI among business communities, and companies are urged to make investments in the RAI programs. However, it is not evident whether these investments are adequate or enough, as these range from trivial to huge funds being allocated for the control of AI and associated risk mitigation.”
Most RAI programs are unprepared to address the risks of new generative AI tools. Agree “Future societies hold enormous potential for innovative and autonomous systems that are based on generative AI. Most innovations are realized to sustain social responsibilities and by design are not negligent toward societal obligations. However, we should not underestimate the risks generative AI might incur if it’s uncontrolled, unregulated, or not encapsulated within the principles of RAI.  

For the industry sector, it is imperative to focus on the interpolation of actors within the AI ecosystem so that sustainable research is conducted to address the socioeconomic needs of future societies. To that end, more research is needed to further establish RAI programs; also, it is essential to understand how the ecosystem is set up to strike the right balance among various factors and paradigms.”
RAI programs effectively address the risks of third-party AI tools. Agree “The risk of using or integrating third-party AI tools is imminent. The concept of RAI encapsulates pertinent governance mechanisms around AI and the relevant scrutiny exercises amid various supply chains. These include tracking mechanisms for the purposes of auditing and reporting. When adequate AI governance mechanisms are embedded in organizations’ operational supply chains, they eventually become value chains‚ generating responsible business outcomes. Since the supply chain governance is multidimensional, it involves multiple actors and an enormous ecosystem of stakeholders, such as regulators, policy makers, AI vendors, and numerous user groups/AI communities. Given that RAI is reflected within organizations’ normal business practices, it is very likely that the risks of using or integrating third-party AI tools could be minimized to a large extent, but it is not 100% foolproof. This is because tracing the flow of third-party transactions requires defining sets of rules, norms, and responsibilities accordingly. Hence, traceable relationship management among several supply chain actors is crucial to the effectiveness of RAI programs.”
Executives usually think of RAI as a technology issue. Agree “Although most executives consider RAI a technology issue, due to recent efforts around generating awareness on this topic, the trend is now changing.

Let’s acknowledge the fact that “responsible business” and “responsible AI” are not identical but strongly interlinked. Companies that do not deal with AI in terms of either their business/products (meaning they sell non-AI goods/services) or operations (that is, they have fully manual organizational processes) may not pay any heed or cater the need to care about RAI, but they still must care about responsible business. Hence, it depends on the nature of their business and the extent to which AI is integrated into their organizational processes.”
Mature RAI programs minimize AI system failures. Agree “The maturity level of RAI programs plays a crucial role in contemplating AI system failures; however, there is no guarantee that such calamities will not happen. This is due to multiple potential causes of AI system failure, including environmental constraints and contextual factors — for example, striking the right balance between individualism and socialism, which are two sides of the same coin that are usually miscalculated or undermined.

Notably, RAI programs do conditionally impact the development of AI systems with the implantation of the trustworthiness characteristics within systems’ functionality, such as accuracy, transparency, reliability, and AI system quality. Adherence to and considerate orchestration of these features is vital to avoid adverse effects and ensure that effective accountability and redress mechanisms are in place eventually to retain public trust.

Moreover, to minimize failures in high-risk AI systems, simply having mature RAI programs is insufficient to assure AI system quality or security aspects that heavily contribute to AI system failures. It is important that RAI programs do conform and are aligned with the harmonized AI standardization activities.”
RAI constrains AI-related innovation. Disagree “With responsible AI, the potential of innovative (autonomous) AI systems gets raised rather than being restricted or constrained. As a phenomenon, most innovations are realized for the social good, which by its intended purpose and nature is not negligent toward the societal obligations it is designed to accomplish. Correspondingly, the AI-related innovations hold the potential to sustain social responsibilities by embedding the “trustworthiness characteristics” within these AI systems.

An interesting perspective is around various trade-offs under the context of responsible AI, such as the trade-off between fairness and fidelity, or between accuracy and interpretability. These trade-offs may incur arguments that favor encumbrance of RAI on AI-related innovations; for example, the more accurate an AI model, the less interpretable it might be from a human understanding point of view. However, this does not hamper the ways in which innovative AI models are diversified and trained, encompassing the core principle of ethical and trustworthy AI.”
Organizations should tie their responsible AI efforts to their corporate social responsibility efforts. Strongly agree “From an organizational perspective, the practical implementation of the responsible AI initiative needs to be closely aligned to corporate social responsibility efforts. It involves having the sense of responsibility embedded at the core of organizational values and at the heart of business intonations. Moreover, responsible AI needs to be reflected in the organization’s social and business practices that define the set of rules, norms, and its responsibilities accordingly. In doing so, it is imperative to ensure that suitable governance mechanisms are installed within the organization’s operational supply chains, turning them into value chains that steer responsible business outcomes.”
Responsible AI should be a part of the top management agenda. Strongly agree “Considering the ramifications of AI and its applications that are being proliferated into the public or general communities, it becomes peremptory for organizations to ensure that adequate considerations are made around responsible AI within the realms of their corporate functions. This requires the topic of responsible AI as part of an organizational agenda, not only at the top management level but also at other levels — i.e., encapsulating operational to strategic.

Although the scale and depth to be covered around responsible AI greatly depend on the nature of the organization and/or business, it needs to be on their mandate and adopted like a culture. Such a paradigm shift (or cultural change) could potentially make significant contributions to perseverance and responsible business.”