As awareness grows regarding the risks associated with deploying AI systems that violate legal, ethical, or cultural norms, building responsible AI and machine learning technology has become a paramount concern in organizations across all sectors. Individuals tasked with leading responsible-AI efforts are shifting their focus from establishing high-level principles and guidance to managing the system-level change that is necessary to make responsible AI a reality.
Ethics frameworks and principles abound. AlgorithmWatch maintains a repository of more than 150 ethical guidelines. A meta-analysis of a half-dozen prominent guidelines identified five main themes: transparency, justice and fairness, non-maleficence, responsibility, and privacy. But even if there is broad agreement on the principles underlying responsible AI, how to effectively put them into practice remains unclear. Organizations are in various states of adoption, have a wide range of internal organizational structures, and are often still determining the appropriate governance frameworks to hold themselves accountable.
To determine whether, and how, these principles are being applied in practice, and to identify actions companies can take to use AI responsibly, we interviewed 24 AI practitioners across multiple industries and fields. These individuals, all with responsible AI in their remit, included technologists (data scientists and AI engineers), lawyers, industrial/organizational psychologists, project managers, and others. Each interview focused on three phases of transition: the prevalent state, where the organization as a whole currently stood on responsible AI; the emerging state, covering practices that individuals focused on responsible AI had created but that had not yet been fully integrated into the company; and the aspirational state, the ideal in which responsible AI practices would be common and embedded in work processes.
The State of the Field
Most of the practitioners we interviewed indicated that their companies’ approach to responsible AI has been mostly reactive to date. The primary motivators for acting on ethical threats come from external pressures, such as reputational harm or compliance risk. Often, media attention serves to further internal initiatives for responsible AI by amplifying reputational risk.
This reactive posture is driven in part by a lack of metrics for assessing the success of responsible-AI initiatives. Without standards for implementation, assessment, and tracking, it is difficult to demonstrate whether an algorithmic model is performing well from a responsibility standpoint.