Responsible AI / Panelist

Steven Vosloo

UNICEF

United States

Steven Vosloo is UNICEF’s digital policy specialist in the Office of Global Insight and Policy, where he leads work at the intersection of children and their digital lives, including AI, digital literacy, and misinformation. Previously, at UNESCO, he developed guidelines on how technology can be better designed for youth and adults with low literacy and low digital skills, established and led the organization’s mobile learning program, and coauthored its Policy Guidelines for Mobile Learning. Before that, Vosloo was head of mobile in the Innovation Lab at Pearson South Africa.

Voting History

Statement Response
Statement: Executives usually think of RAI as a technology issue.
Response: Neither agree nor disagree
“It is important for executives not to see RAI as purely a technology issue. Responsible use of technology should represent an organization’s approach to innovation and be embedded in its strategies, processes, and roles — such as human and child rights experts, impact assessments, and, of course, the tech itself.”
Statement: Mature RAI programs minimize AI system failures.
Response: Disagree
“I would rather say that mature RAI programs help to reduce AI system failures, since even AI systems developed responsibly and designed to do no harm can still fail. This could be due to limitations in algorithmic models, poorly scoped system goals, or problems with integration into other systems.”
Statement: RAI constrains AI-related innovation.
Response: Disagree
“Responsible AI, when done well, does not constrain innovation. In fact, working to create clear processes that provide guardrails for how to develop AI responsibly can help to focus the innovation process. In addition, for corporates, it makes good business sense. As noted in the UNICEF Policy Guidance on AI for Children, ‘As consumers and the wider public make greater demands for technology services to have the right safeguards in place, business should capitalize on this market opportunity and thereby also mitigate against corporate reputational risks for AI-related harms.’”
Statement: Organizations should tie their responsible AI efforts to their corporate social responsibility efforts.
Response: Disagree
“The challenge is that corporate social responsibility efforts can change as companies focus on different issues over time. Responsible AI is an evergreen issue that needs to be anchored in the core functioning of the organization. That is not to say that corporate social responsibility does not have a role to play, for example, in working on issues such as greater AI skills for women and girls or amplifying youth voices through consultations. But these efforts should stem from a responsible AI code that is embedded in the core of the organization and has both an internal and an external-facing impact.”
Statement: Responsible AI should be a part of the top management agenda.
Response: Strongly agree
“Commitment to implementing responsible AI has to come from the top. It is not enough to expect product managers and software developers to make difficult decisions around the responsible design of AI systems when they are under constant pressure to deliver on corporate metrics. They need a clear message from top management on where the company’s priorities lie and that they have support to implement AI responsibly.

Having responsible AI as an ongoing agenda item will help keep the topic and the commitment fresh in mind. But beyond that, top management must also build capacity on AI and on human and child rights. In this way, they can gain a clearer understanding of those rights, the potential impacts on them — positive and negative — and their own role in managing the responsible AI agenda.”