Responsible AI / Panelist

Tshilidzi Marwala

University of Johannesburg

South Africa

Professor Tshilidzi Marwala is vice chancellor of the University of Johannesburg, where he previously served as deputy vice chancellor and dean of engineering. Before that, he was a full professor and held two endowed chairs at the University of the Witwatersrand. He has published more than 20 books and over 300 papers, holds three international patents, and has received more than 45 awards, including the President’s Award from the National Research Foundation. He is a board member of Nedbank and a trustee of the Nelson Mandela Foundation. He holds a doctorate in engineering from the University of Cambridge.

Voting History

Statement: Executives usually think of RAI as a technology issue.
Response: Disagree
“Executives tend to think of RAI as a distraction. In fact, many of them know very little about RAI and are preoccupied with profit maximization. Until RAI finds its way into regulation, it will remain on the periphery.”

Statement: Mature RAI programs minimize AI system failures.
Response: Agree
“RAI is based on a set of specifications, procedures, and guidelines for developing trustworthy AI systems. If we consider that the objective of RAI is trustworthiness based on the principles of accountability, justice, transparency, and responsibility, then it stands to reason that systems programmed in this manner are less susceptible to failure. RAI also implies the responsible use of AI systems, which bolsters this argument. While this does aid in creating more responsible and ethical systems, there is scope for a more standardized approach to RAI guidelines to ensure conformity of standards, which would go a long way toward ensuring a systematic, blanket approach to any challenges that may arise.”

Statement: RAI constrains AI-related innovation.
Response: Strongly disagree
“It is important to note that we are seeing incredible instances of innovation with AI, which is addressing our health care concerns, bridging our linguistic and cultural gaps, and laying down the foundations for a more equitable world. That is not to say that there is no room for the misuse of AI. We have already seen instances of this with harmful biases present in some AI machines.

In 2015, dozens of AI experts signed an open letter warning that ‘we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide.’ With the rapid and far-reaching impact of AI comes a corresponding exigency for explorations of ethics, justice, fairness, equity, and equality. A focus on regulation, ethics, and the cultural aspects of the internet is key, not only to create an enabling policy environment that supports private and nongovernmental organizations as well as the state, but to ensure the ethical and transparent use of these new technologies. This does not hamper innovation holistically but addresses concerns around harmful innovation.”

Statement: Organizations should tie their responsible AI efforts to their corporate social responsibility efforts.
Response: Strongly agree
“Responsible AI is in many ways a human rights issue. When AI harms people, even when it is creating financial value for shareholders, we should be concerned. As we embed AI in all aspects of organizations, it is important to continuously investigate the cost-benefit consequences. Naturally, the costs should include how responsible the deployed AI is. AI algorithms often arrive at organizations neutral and become biased based on the data they encounter in use. It is therefore important to always keep in mind that much of the work to ensure responsible AI is done by people in organizations, and therefore responsible AI should become part of organizational culture.”

Statement: Responsible AI should be a part of the top management agenda.
Response: Strongly agree
“Responsible AI is about protecting people and remaining within the bounds of the law, so it is important.”