In June, a crisis erupted in the artificial intelligence world. Conversation on Twitter exploded after a new tool for creating realistic, high-resolution images of people from pixelated photos showed its racial bias, turning a pixelated yet recognizable photo of former President Barack Obama into a high-resolution photo of a white man. Researchers soon posted images of other famous Black, Asian, and Indian people, and other people of color, being turned white.
The conversation became intense. Two well-known AI corporate researchers — Facebook’s chief AI scientist, Yann LeCun, and Google’s co-lead of AI ethics, Timnit Gebru — expressed strongly divergent views about how to interpret the tool’s error. A heated, multiday online debate ensued, dividing the field into two distinct camps: Some argued that the bias shown in the results came from bad (that is, incomplete) data being fed into the algorithm, while others argued that it came from bad (that is, short-sighted) decisions about the algorithm itself, including what data to consider.
Bias has plagued the AI field for years, so this particular AI tool’s Black-to-white photo transformation isn’t completely unexpected. However, what the debate made obvious is that not all AI researchers have embraced concerns about diversity. This is a fact that will fundamentally affect any organization that plays in the AI space.
What’s more, there’s a question here that many organizations should pay attention to: Why didn’t it occur to anyone to test the software on cases involving people of color in the first place?
We would argue that this is a case of invisibility. Sometimes people of color are present, and we’re not seen. Other times we are missing, but our absence is not noticed. It is the latter that is the problem here.
One Crisis Begets Another
Part of the problem is that there are relatively few Black people and other people of color working in AI. At some of the top technology companies, the numbers are especially bleak: Black workers represent only 2.5% of Google’s entire workforce and 4% of Facebook’s and Microsoft’s. The gender gap is just as stark: Globally, only 22% of AI professionals are female, while 78% are male. There is a dearth of diversity in the professoriat as well, which is troubling given that colleges are the primary organizations where AI professionals are trained.
Considering the growing role that AI plays in organizations’ business processes, in the development of their products, and in the products themselves, the lack of diversity in AI and the invisibility of people of color will grow into a cascade of crises, with issues piling one upon another, if these biases are not addressed soon. We’ve already seen companies pull their advertising dollars from Facebook because of its poor handling of hate speech. We’ve seen companies issue moratoriums on the sale of facial recognition software, which has long been recognized as having built-in racial and gender biases.
We are at a moment when the pandemic crisis and the hasty adaptation of AI to track COVID-19’s spread provide a unique opportunity to institute change within the world of AI. This includes addressing the inherent bias problems caused by the underrepresentation of Black and female professionals, as well as other traditionally underrepresented groups, in the field.
Addressing AI’s Invisibility Problem
We propose that to tackle the general problems of underrepresentation in AI, we can all take some lessons from the specific development model used within the AI field itself. That is, when researchers develop a new AI product, they mitigate bias once they become aware of it and take responsibility for fixing it. The larger issues we’re discussing can be approached with the same mindset.
There are three key leverage points:
1. Recognize that differences matter. In machine learning, it is not sufficient simply to feed diverse data into a learning system. Rather, an AI system also needs to be designed so that it does not disregard data merely because that data appears anomalous, represented by only a small number of data points.
Just as differences in data matter, differences within workforces matter too. The AI approach to embracing diverse data is analogous to the recognition that there is a difference between equality and equity in the workforce: Equality means providing everyone the same resources within an established process, but equity demands paying attention to what happens throughout a process, including examining the fairness of the process itself. Organizations need to pay attention to bringing in diverse voices not just when they’re recruiting people but when they’re strategizing around retention and development as well. Companies need to retain those voices and not disregard them because they’re small in number.
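The data-side version of this point can be made concrete. The toy sketch below is our own hypothetical illustration (not drawn from any real system): a model can post a high overall accuracy while rendering a small group entirely invisible, and reweighting is one standard way to force the rare class to count.

```python
from collections import Counter

# Toy dataset: 95 samples of class "A", 5 of class "B".
# The rare class is exactly the kind of data a naive learner discards.
labels = ["A"] * 95 + ["B"] * 5

# A naive baseline predicts the majority class for everyone:
majority = Counter(labels).most_common(1)[0][0]
naive_predictions = [majority for _ in labels]

# Accuracy looks great (95%), yet every "B" sample goes unseen:
accuracy = sum(p == y for p, y in zip(naive_predictions, labels)) / len(labels)
recall_b = sum(p == y == "B" for p, y in zip(naive_predictions, labels)) / 5

print(accuracy)  # 0.95 -- high overall score
print(recall_b)  # 0.0  -- the minority class is never recognized

# Reweighting makes each class count equally, so errors on the rare
# "B" samples carry as much total weight as errors on all the "A"s:
weights = {c: len(labels) / (2 * n) for c, n in Counter(labels).items()}
weighted_error = sum(weights[y] for p, y in zip(naive_predictions, labels) if p != y)
print(weighted_error)  # 50.0 -- the weighted loss exposes the failure
```

The design choice is the point: the plain accuracy metric hides the problem, while the reweighted one surfaces it.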
2. Recognize that diversity in leadership matters. In AI and machine learning, ensemble methods (learning systems that combine different kinds of functions, each with its own biases) have long been credited with outperforming completely homogeneous methods. These learning systems lead the field in optimizing AI outcomes, and they are diverse by design.
The parallel for organizations that want to tackle the underrepresentation of Black and female voices is that having diversity in the leadership also leads to diversity in how problems are recognized and how talent is developed. For example, after the desegregation of schools in the United States that followed the 1954 Brown v. Board of Education court case, the U.S. saw a significant drop in diversity among teachers. There is a direct line between this drop in diversity among the gatekeepers of education and a corresponding drop in Black students being recommended for gifted-and-talented programs. When it comes to who is seen and who is not seen, it matters dramatically who the leaders and gatekeepers are.
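The ensemble idea above can be sketched with a hypothetical toy example (ours, not from the article): three weak classifiers, each wrong on a different subset of inputs, combined by majority vote. No single voter is reliable, but their differing biases cancel.

```python
# Ground truth: the parity of each input.
inputs = list(range(9))
truth = [x % 2 for x in inputs]

def voter(blind_spot):
    """Build a classifier that is wrong exactly on its blind spot."""
    def predict(x):
        correct = x % 2
        return 1 - correct if x in blind_spot else correct
    return predict

# Each voter has a *different* bias -- that difference is the point:
voters = [voter({0, 1, 2}), voter({3, 4, 5}), voter({6, 7, 8})]

def majority(x):
    # Simple majority vote over the three voters.
    return 1 if sum(v(x) for v in voters) >= 2 else 0

solo_errors = [sum(v(x) != t for x, t in zip(inputs, truth)) for v in voters]
ensemble_errors = sum(majority(x) != t for x, t in zip(inputs, truth))

print(solo_errors)      # [3, 3, 3] -- every individual voter errs
print(ensemble_errors)  # 0 -- diverse biases cancel under majority vote
```

Had all three voters shared the same blind spot, the vote would have entrenched the error rather than corrected it.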
3. Recognize that accountability is necessary. In AI and machine learning, a machine learns by means of a loss function, a method for evaluating how well an algorithm models the given data. If predictions deviate too far from actual results, the loss function penalizes the learning system accordingly; without such an objective, explicit signal, the system has no way to know how it is performing or how to improve. This is the essence of AI accountability.
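As a rough sketch of that mechanism (a hypothetical one-parameter example, not any particular production system), a squared-error loss steadily steers a model toward accurate predictions:

```python
# A one-parameter model y = w * x, trained against a squared-error loss.
# The loss is the accountability mechanism: the further predictions
# drift from reality, the larger the penalty -- and the correction.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true relationship: y = 2x

def loss(w):
    # Mean squared error of the model's predictions.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w = 0.0    # start with a model that predicts nothing well
lr = 0.05  # learning rate
for _ in range(200):
    # Gradient of the mean squared error with respect to w:
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # the penalty pushes w toward better predictions

print(round(w, 3))  # 2.0 -- the model converges to the true slope
```

Remove the loss function and the update loop has nothing to answer to; the model stays exactly as wrong as it started.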
Accountability matters when companies purport to be working to fix issues of underrepresentation. Companies have long known that gender and ethnic diversity affects the bottom line. Report after report shows that companies that lag in gender and ethnic diversity among their workforces, management teams, executives, and boardrooms are less likely to achieve above-average profitability. To our minds, this echoes a well-known saying in the AI and computing community: “Garbage in, garbage out.”
Put more simply, if an organization’s leadership and workforce do not reflect the diverse range of customers it serves, its outputs will eventually prove substandard. Because learning algorithms are part of larger systems composed of other technologies and the people who create and implement them, bias can creep in anywhere in the pipeline. If diversity within an organization’s pipeline is low at any point, the organization opens itself up to biases, including ones deep enough, and potentially public enough, to divide its customers: Some would stay, but others would leave, eventually leading to obsolescence and failure.
Although companies profess that they’ve tried to address this diversity crisis, the needle has barely moved. Since 2014, when the large tech companies began publishing annual diversity reports, few have gained much ground in terms of ethnic diversity. Some have made small gains in gender diversity.
Tech companies have been focused on point problems and point solutions. In other words, they’ve been putting out fires rather than addressing fundamental causes. But you can’t just apply a bandage to a gushing wound when a tourniquet is necessary. Organizations that truly wish to lead here have to stop focusing on just one product or one controversy at a time. The actual problem is pervasive and systemic, and it demands creative solutions and true accountability.