MIT SMR Connections
MIT SMR Connections is the custom content creation unit within MIT Sloan Management Review.
For years, organizations have been using artificial intelligence to automate manual tasks and improve products and services. But as real-world use cases for AI multiply, so too do the ethical implications of simulating human intelligence in machines.
Today, AI-powered chatbots help screen job candidates, facial recognition systems keep workplaces safe from intruders, and sophisticated AI algorithms predict market trends and emerging customer demands. These examples deliver benefits ranging from increased productivity and employee well-being to competitive gains. But they also raise important questions about how organizations operate in society and how AI systems can affect the privacy, fundamental rights, and safety of the people they’re intended to serve.
“Until a few years ago, AI was all about optimizing the accuracy on some training data,” says Lise Getoor, professor of computer science at the University of California, Santa Cruz. But that’s changed, she says, as businesses place increasing emphasis on “the societal impacts” of what can happen when AI systems malfunction, are corrupted, or adopt human biases. These missteps can result in costly litigation, regulatory fines, lost revenue, reputational damage, and widespread mistrust in AI systems.
Fortunately, there are ways for organizations to capitalize on the competitive advantages of AI while fostering trust, eliminating bias, and embedding AI principles in the cultural fabric of an organization. This Strategy Guide aims to share expert advice on how AI can provide individual organizations with strategic advantage and have a net positive impact on society. It offers six best practices for achieving those goals, includes a checklist for building ethics into AI programs, and concludes with a thought leader’s advice on the issue. Download the guide to learn more.