The Regulation of AI — Should Organizations Be Worried?


What happens when injustices are propagated not by individuals or organizations but by a collection of machines? Lately, there’s been increased attention on the downsides of artificial intelligence and the harms it may produce in our society, from inequitable access to opportunities to the escalation of polarization in our communities. Not surprisingly, there’s been a corresponding rise in discussion around how to regulate AI. Do we need new laws and rules from governmental authorities to police companies and their conduct when designing and deploying AI in the world?

Part of the conversation arises from the fact that the public questions — and rightly so — the ethical restraints that organizations voluntarily impose on themselves. According to Edelman’s 2019 Trust Barometer global survey, only 56% of the general public has overall trust in the business community. A Gallup poll of four European countries in 2017 found that just 25% of employees strongly agree that their own company “will always choose to do the right thing over an immediate profit or benefit.”

As businesses pour resources into designing the next generation of AI-powered tools and products, people are not inclined to assume that these companies will automatically step up to meet their ethical and legal responsibilities when these systems go awry.

The time when companies could simply ask the world to trust artificial intelligence and AI-powered products is long gone. Trust in AI requires fairness, transparency, and accountability. But even AI researchers can’t agree on a single definition of fairness: There’s always a question of who is in the affected groups and what metrics should be used to evaluate, for instance, the impact of bias within the algorithms.

Given that there are no clear-cut answers or solutions, even among experts, the conversation about regulations and standards is getting noisier.

The Rising Conversation Around Responsibility

While we have yet to reach firm conclusions around tech regulation, the last three years have seen a sharp increase in forums and channels to discuss governance. In the U.S., the Obama administration issued a report in 2016 on preparing for the future of artificial intelligence after holding public workshops examining AI, law, and governance; AI technology, safety, and control; and the social and economic impacts of AI. The Institute of Electrical and Electronics Engineers (IEEE), an engineering, computing, and technology professional organization that establishes standards for maximizing the reliability of products, put together a crowdsourced global treatise on the ethics of autonomous and intelligent systems. In the academic world, the MIT Media Lab and Harvard University established a $27 million initiative on ethics and governance of AI, Stanford is in the midst of a 100-year study of AI, and Carnegie Mellon University established a center to explore AI ethics.

Corporations are forming their own consortiums to join the conversation. The Partnership on AI to Benefit People and Society was founded by a group of AI researchers representing six of the world’s largest technology companies: Apple, Amazon, DeepMind/Google, Facebook, IBM, and Microsoft. It was established to frame best practices for AI, including constructs for fair, transparent, and accountable AI. It now has more than 80 partner companies.

Of course, individual companies are also weighing in on what kinds of ethical frameworks they will operate under. Microsoft president Brad Smith has written about the need for public regulation and corporate responsibility around facial recognition technology. Google established an AI ethics advisory council (which it quickly disbanded amid controversy over some board members). Earlier this year, Amazon started a collaboration with the National Science Foundation to fund research to accelerate fairness in AI — although some immediately questioned the potential conflict of interest of having research funded by such a giant player in the field.

Public Concern Is Real

According to a 2019 report from the Center for the Governance of AI at the University of Oxford, 82% of Americans believe that robots and AI should be carefully managed. Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as deepfakes when they include doctored video images and audio generated with help from AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that don’t require a human controller.

AI has already shown itself, very publicly, to be capable of harmful bias — which can lead to unfair decisions based on attributes that are protected by law. There can be bias in the data inputs, which can be poorly selected, outdated, or skewed in ways that embody our own historical societal prejudices. Most deployed AI systems do not yet embed methods to put data sets to a fairness test or otherwise compensate for problems in the raw material.
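As a rough illustration, a data set can be given a first-pass fairness check before a model is ever trained. The short Python sketch below uses hypothetical column names ("group", "label") and toy numbers; it compares positive-outcome rates across groups, one common (and contested) fairness metric often called demographic parity.

```python
# Minimal sketch of a pre-deployment fairness check on a data set, assuming a
# pandas DataFrame with a hypothetical protected-attribute column ("group")
# and a binary outcome column ("label"). Illustrative only.
import pandas as pd

def positive_rate_by_group(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Share of positive outcomes within each group."""
    return df.groupby(group_col)[label_col].mean()

def demographic_parity_gap(df: pd.DataFrame, group_col: str, label_col: str) -> float:
    """Largest gap in positive-outcome rates across groups (0 means perfectly balanced)."""
    rates = positive_rate_by_group(df, group_col, label_col)
    return float(rates.max() - rates.min())

# Toy data, not real figures:
data = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "label": [1, 1, 0, 1, 0, 0],
})
print(positive_rate_by_group(data, "group", "label"))
print("Demographic parity gap:", demographic_parity_gap(data, "group", "label"))
```

A large gap doesn't prove the data is unusable, but it is exactly the kind of signal that today's deployment pipelines too rarely surface.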

There also can be bias in the algorithms themselves and in what features they deem important (or not). For example, companies may vary their product prices based on information about shopping behaviors. If this information ends up being directly correlated with gender or race, then AI is making decisions that could result in a PR nightmare, not to mention legal trouble. A simple proxy check, sketched below, can help surface that risk before deployment.
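The sketch below, again with hypothetical column names ("quoted_price", "gender") and toy data, measures how strongly a pricing feature is associated with group membership. A high value is a prompt to investigate, not proof of discrimination.

```python
# Minimal sketch of a proxy check: does a model input quietly track a
# protected attribute? Column names are hypothetical; data is illustrative.
import pandas as pd

def mean_by_group(df: pd.DataFrame, feature_col: str, protected_col: str) -> pd.Series:
    """Average feature value within each protected group."""
    return df.groupby(protected_col)[feature_col].mean()

def proxy_correlation(df: pd.DataFrame, feature_col: str, protected_col: str) -> float:
    """Strongest absolute correlation between the feature and group membership (one-hot encoded)."""
    dummies = pd.get_dummies(df[protected_col]).astype(float)
    return float(dummies.corrwith(df[feature_col]).abs().max())

# Toy data, not real figures:
quotes = pd.DataFrame({
    "quoted_price": [9.99, 10.49, 12.99, 13.49, 12.79, 9.79],
    "gender":       ["m",  "m",   "f",   "f",   "f",   "m"],
})
print(mean_by_group(quotes, "quoted_price", "gender"))
print("Proxy correlation:", proxy_correlation(quotes, "quoted_price", "gender"))
```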

As these AI systems scale in use, they amplify whatever unfairness is in them. The decisions these systems output, and which people then act on, can propagate until the underlying biases are treated as global truth.

Is Regulation Around the Corner?

So should organizations be worried about both the whispers and the actions supporting this desire to regulate AI? In short, yes.

The longer answer is that although standards are not sexy stuff, they are critical for making AI not only more useful but also safe for consumer use. Because AI is still young yet is quickly being integrated into applications that affect our lives, there is no consensus on what standards should be established. In the U.S., that conversation is just getting started: In February 2019, the National Institute of Standards and Technology of the U.S. Department of Commerce was directed by executive order to create a plan for federal government engagement in developing standards for the use of AI technologies. In the European Union, initiatives such as the European AI Alliance, first set up in 2018, are establishing ethics guidelines for AI that are expected to lead to mid- and long-term policy recommendations on AI-related challenges and opportunities.

Of course, if standards are not established, agreed upon, and followed, regulations will undoubtedly follow. Recently, local governments in the U.S., tired of waiting, have taken their own initiative to regulate the use of AI. San Francisco became the first major U.S. city to ban the purchase and use of facial recognition software by police and other agencies. Other organizations, such as the American Civil Liberties Union, are using the court system. The ACLU recently filed a brief challenging defendants’ inability to question the results derived from facial recognition software used in criminal cases in jurisdictions such as Florida. In other locales, such as Germany, privacy and digital rights organizations have begun issuing statements strongly criticizing the government’s plan to deploy automated facial recognition in border surveillance activities.

Since organizations have not figured out how to stem the tide of “bad” AI, their next best step is to contribute to the conversation. Denying that bad AI exists or fleeing from the discussion isn’t going to make the problem go away. Identifying C-level executives who are willing to join the dialogue, and finding individuals who can help establish standards and commit company resources to the cause, are the actions organizations should be taking today.

When done correctly, AI can offer immeasurable good. It can provide educational interventions to maximize learning in underserved communities, improve health care based on its access to our personal data, and help people do their jobs better and more efficiently. Now is not the time to hinder progress. Instead, it’s the time for organizations to make a concerted effort to ensure that the design and deployment of AI are fair, transparent, and accountable for all stakeholders — and to be a part of shaping the coming standards and regulations that will make AI work for all.

