
Giuseppe Manai is the cofounder, COO, and chief scientist of Stemly, an autonomous forecasting platform for supply chain and finance companies. A seasoned data scientist with experience across a variety of industries, he has a proven track record of devising and executing data science strategies and translating technical concepts into practical business solutions. Manai teaches applied data science to undergraduates at Yale-NUS College and cofounded the Association for Computing Machinery’s SIGKDD chapter in Singapore. He holds a doctorate in physics from Trinity College Dublin.
Voting History
Statement | Response |
---|---|
Most RAI programs are unprepared to address the risks of new generative AI tools. Agree | “Most RAI programs are unprepared for two main reasons. First, being generative, such tools can create content such as video, images, and text, which can be harmful if used for malicious purposes. Second, generative tools are evolving rapidly, and the full extent of their capabilities — present and future — is not yet known. Tech companies, research institutes, and government agencies are creating tools to support the development of effective RAI programs by defining principles and providing recommendations for regulations, certification, and supporting legislation. For example, the RAI Institute is a not-for-profit organization dedicated to enabling successful responsible AI efforts in organizations. But RAI programs face several challenges: privacy, since data is an important company asset and collecting large amounts of it puts individuals at risk; existing systems trained on biased data; and weak governance and accountability, since it is not clear how to assign accountability for models that produce unethical or biased predictions.” |
RAI programs effectively address the risks of third-party AI tools. Disagree | “This is a new and evolving field. Developing effective RAI programs is challenging and requires alignment on ethical principles and guidelines. Programs must be flexible and dynamic to keep pace with the evolution of AI tools. Governance frameworks have emerged, providing common ground for the convergence of principles and guidelines; examples are the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the OECD Principles on Artificial Intelligence. To take this further, RAI programs could provide certification and monitoring of AI tools. For example, they could conduct performance testing, checking whether a third-party AI tool performs as intended and meets specific performance requirements without harming users. More specifically, a speech recognition tool could be tested for accuracy and speed in transcribing speech without discriminating against certain accents or dialects. Such checks could be conducted with a human-in-the-loop approach so that a specialist can verify the tool’s behavior and keep it as close as possible to the latest guidelines. Similar examples can be drawn for other aspects, such as explainability, bias, and data privacy testing.” |
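The kind of subgroup performance check Manai describes can be sketched in a few lines. The Python below is a hypothetical illustration, not code from Stemly or the panel: it computes a word error rate per accent group for a speech recognition tool and flags groups that fall noticeably behind the best-performing one. The `transcribe` function, the group labels, and the 5 percent gap threshold are all assumptions made for the example.

```python
from collections import defaultdict


def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate computed as token-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance between the two word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


def audit_by_group(samples, transcribe, max_gap=0.05):
    """Flag accent groups whose average WER trails the best group by more than max_gap.

    `samples` is an iterable of (group, audio, reference_text) tuples and
    `transcribe` is the third-party tool under test -- both are assumed inputs.
    """
    errors = defaultdict(list)
    for group, audio, reference in samples:
        errors[group].append(word_error_rate(reference, transcribe(audio)))
    averages = {g: sum(v) / len(v) for g, v in errors.items()}
    best = min(averages.values())
    flagged = {g: wer for g, wer in averages.items() if wer - best > max_gap}
    return averages, flagged  # flagged groups go to a human reviewer
```

In line with the human-in-the-loop approach mentioned above, the flagged groups would then be routed to a specialist for review before the tool is certified or its use is adjusted.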