Responsible AI / Panelist

Giuseppe Manai



Giuseppe Manai is the cofounder, COO, and chief scientist of Stemly, an autonomous forecasting platform for supply chain and finance companies. A seasoned data scientist with expertise in a variety of industries, he has a proven track record of success in devising and executing data science strategies and translating technical concepts into practical business solutions. Manai teaches applied data science to undergrads at Yale-NUS College and cofounded the Association for Computing Machinery’s SIGKDD chapter in Singapore. He has a doctorate in physics from Trinity College Dublin.

Voting History

Statement Response
Statement: As the business community becomes more aware of AI’s risks, companies are making adequate investments in RAI.
Response: Neither agree nor disagree
“There is evidence that companies are becoming more aware of the risks associated with AI and are investing in responsible AI programs. In my view, responsible AI is not just the morally right thing to do; it also yields tangible benefits, accelerating innovation and helping organizations use AI to become more competitive. A prioritization approach that begins with low-effort, high-impact areas of responsible AI can minimize risk while maximizing the return on investment. To support companies, Singapore’s government has launched two new initiatives, the National AI Programme in Government and the National AI Programme in Finance, as part of a national AI strategy that aims to catalyze AI adoption. The government has also announced an additional $180 million investment in AI research. These examples show a growing awareness of the importance of responsible AI and that investments are being made in this area. On the other hand, it is difficult to determine whether companies are making adequate investments in responsible AI programs, as awareness and relevance vary widely; companies face different challenges and might not prioritize RAI programs.”
Statement: The management of RAI should be centralized in a specific function (versus decentralized across multiple functions and business units).
Response: Neither agree nor disagree
“In a centralized approach, a designated unit sets ethical guidelines and standards for AI implementations across the organization. This unit acts as a focal point for RAI initiatives and is equipped with specialized expertise in AI ethics and compliance. Its main task is to develop the ethical frameworks that all other units and business functions must adhere to when implementing AI solutions. With a central focal point, the organization can ensure consistency and coherence in applying ethical principles and compliance measures, promoting a strong ethical culture.

In a decentralized approach, individual business units have the autonomy to develop their own strategies. This autonomy risks inconsistencies in how RAI is practiced across units, leading to fragmented or conflicting practices. Striking a balance between centralized oversight and decentralized decision-making lets organizations harness the benefits of adaptability while maintaining a cohesive ethical approach. Regular communication channels and knowledge-sharing sessions are needed to facilitate collaboration and understanding among teams, promoting effective RAI management throughout the organization.”
Statement: Most RAI programs are unprepared to address the risks of new generative AI tools.
Response: Agree
“Most RAI programs are unprepared for two main reasons. First, being generative, such tools can create content such as video, images, and text, which can be harmful if used for malicious purposes. Second, generative tools are evolving rapidly, and the full extent of their capabilities, present and future, is not fully known. Tech companies, research institutes, and government agencies are creating tools to support the development of effective RAI programs by defining principles and providing recommendations for regulation, certification, and supporting legislation. For example, the RAI Institute is a not-for-profit organization dedicated to enabling successful responsible AI efforts in organizations. But RAI programs face several challenges: privacy concerns, since data is an important company asset and collecting it in large amounts puts individuals’ privacy at risk; bias, since existing systems have been trained on biased data; and weak governance and accountability, since it is not clear how to assign accountability for models that produce unethical or biased predictions.”
Statement: RAI programs effectively address the risks of third-party AI tools.
Response: Disagree
“This is a new and evolving field. Developing effective RAI programs is challenging and requires alignment on ethical principles and guidelines. Programs must be flexible and dynamic to keep pace with the evolution of AI tools. Governance frameworks have emerged, providing common ground for the convergence of principles and guidelines. Examples are the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the OECD Principles on Artificial Intelligence.

To take this further, RAI programs could provide certification and monitoring of AI tools. For example, a program could conduct performance testing, checking whether a third-party AI tool performs as intended and meets specific performance requirements without harming users. More specifically, a speech recognition tool could be tested for accuracy and speed in transcribing speech without discriminating against certain accents or dialects. Checks could follow a human-in-the-loop approach, so that a specialist can verify results and keep the tool aligned with the latest guidelines. Similar examples can be drawn for other aspects, such as explainability, bias, and data privacy testing.”
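The speech recognition check described above could be sketched as a small audit harness. This is a minimal illustration, not a prescribed certification method: the word-error-rate metric, the grouping by accent, and the disparity threshold are all assumptions chosen for the example.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: token-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)


def wer_by_group(samples):
    """Average WER per accent group; samples are (group, reference, hypothesis)."""
    totals = {}
    for group, ref, hyp in samples:
        s, n = totals.get(group, (0.0, 0))
        totals[group] = (s + word_error_rate(ref, hyp), n + 1)
    return {g: s / n for g, (s, n) in totals.items()}


def disparity_flag(group_wer, max_gap=0.1):
    """Flag the tool if the WER gap between the best- and worst-served
    groups exceeds max_gap (an illustrative threshold, not a standard)."""
    rates = group_wer.values()
    return max(rates) - min(rates) > max_gap
```

A human-in-the-loop reviewer could then inspect any flagged tool before certification, adjusting the threshold and grouping to match whatever guidelines apply.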