Artificial intelligence has had some justifiably bad press recently. Some of the worst stories have been about systems that exhibit racial or gender bias in facial recognition applications or in evaluating people for jobs, loans, or other considerations.1 One program was routinely recommending longer prison sentences for blacks than for whites on the basis of the flawed use of recidivism data.2
But what if, instead of perpetuating harmful biases, AI helped us overcome them and make fairer decisions? That could eventually result in a more diverse and inclusive world. What if, for instance, intelligent machines could help organizations recognize all worthy job candidates by avoiding the usual hidden prejudices that derail applicants who don’t look or sound like those in power, or who don’t have the “right” institutions listed on their résumés? What if software programs were able to account for the inequities that have limited minorities’ access to mortgages and other loans? In other words, what if our systems were taught to ignore data about race, gender, sexual orientation, and other characteristics that aren’t relevant to the decisions at hand?
AI can do all of this — with guidance from the human experts who create, train, and refine its systems. Specifically, the people working with the technology must do a much better job of building inclusion and diversity into AI design: using the right data to train AI systems to be inclusive, and thinking about gender roles and diversity when developing bots and other applications that engage with the public.
Design for Inclusion
Software development remains largely the province of men — only about one-quarter of computer scientists in the United States are women3 — and minority racial groups, including blacks and Hispanics, are underrepresented in tech work as well.4 Groups such as Girls Who Code and AI4ALL have been founded to help close those gaps. Girls Who Code has reached almost 90,000 girls from various backgrounds in all 50 states,5 and AI4ALL specifically targets girls in minority communities.
1. L. Hardesty, “Study Finds Gender and Skin-Type Bias in Commercial Artificial Intelligence Systems,” MIT News Office, Feb. 11, 2018.
2. E.T. Israni, “When an Algorithm Helps Send You to Prison,” The New York Times, Oct. 26, 2017.
3. L. Camera, “Women Can Code — as Long as No One Knows They’re Women,” U.S. News & World Report, Feb. 18, 2016.
4. M. Muro, A. Berube, and J. Whiton, “Black and Hispanic Underrepresentation in Tech: It’s Time to Change the Equation,” The Brookings Institution, March 28, 2018.
5. “About Us,” girlswhocode.com.
6. F. Dobbin and A. Kalev, “Why Diversity Programs Fail,” Harvard Business Review 94, no. 7/8 (July-August 2016).
7. R. Locascio, “Thousands of Sexist AI Bots Could Be Coming. Here’s How We Can Stop Them,” Fortune, May 10, 2018.
8. “Inclusive Design,” Microsoft.com.
9. T. Halloran, “How Atlassian Went From 10% Female Technical Graduates to 57% in Two Years,” Textio, Dec. 12, 2017.
10. C. DeBrusk, “The Risk of Machine-Learning Bias (and How to Prevent It),” MIT Sloan Management Review, March 26, 2018.
11. J. Zou and L. Schiebinger, “AI Can Be Sexist and Racist — It’s Time to Make It Fair,” Nature, July 12, 2018.
12. D. Bass and E. Huet, “Researchers Combat Gender and Racial Bias in Artificial Intelligence,” Bloomberg.com, Dec. 4, 2017.
13. B. Lovejoy, “Sexism Rules in Voice Assistant Genders, Show Studies, but Siri Stands Out,” 9to5Mac.com, Feb. 22, 2017.
14. J. Elliot, “Let’s Stop Talking to Sexist Bots: The Future of Voice for Brands,” Fast Company, March 7, 2018.
15. S. Paul, “Voice Is the Next Big Platform, Unless You Have an Accent,” Wired, March 20, 2017.