Every Leader’s Guide to the Ethics of AI

Until regulations catch up, AI-oriented companies must establish their own ethical frameworks.

As artificial intelligence-enabled products and services enter our everyday consumer and business lives, there’s a big gap between how AI can be used and how it should be used. Until the regulatory environment catches up with technology (if it ever does), leaders of all companies are on the hook for making ethical decisions about their use of AI applications and products.

Ethical issues with AI can have a broad impact. They can affect the company’s brand and reputation, as well as the lives of employees, customers, and other stakeholders. One might argue that it’s still early to address AI’s ethical issues, but our surveys and others suggest that roughly 30% of large U.S. companies have undertaken multiple AI projects (smaller percentages have done so outside the U.S.), and there are now more than 2,000 AI startups. These companies are already building and deploying AI applications that could have ethical consequences.

Many executives are beginning to recognize the ethical dimensions of AI. A 2018 survey by Deloitte of 1,400 U.S. executives knowledgeable about AI found that 32% ranked ethical issues as one of the top three risks of AI. However, most organizations don’t yet have specific approaches for dealing with AI ethics. We’ve identified seven actions that leaders of AI-oriented companies — regardless of their industry — should consider taking as they walk the fine line between can and should.

Make AI Ethics a Board-Level Issue

Since an AI ethical mishap can have a significant impact on a company’s reputation and value, we contend that AI ethics is a board-level issue. For example, Equivant (formerly Northpointe), a company that produces software and machine learning-based solutions for courts, faced considerable public debate and criticism over whether its COMPAS system for parole recommendations involved racial bias in its algorithms. Ideally, consideration of such issues would fall under a board committee with a technology or data focus. Such committees are still relatively rare, however, in which case the entire board should be engaged.

Some companies have governance and advisory groups made up of senior cross-functional leaders to establish and oversee governance of AI applications or AI-enabled products, including their design, integration, and use. Farmers Insurance, for example, established two such boards — one for IT-related issues and the other for business concerns. Along with the board, governance groups such as these should be engaged in AI ethics discussions, and perhaps lead them as well.

A key output of such discussions among senior management should be an ethical framework for how to deal with AI. Some companies that are aggressively deploying AI, like Google, have developed and published such a framework.

Promote Fairness by Avoiding Bias in AI Applications

Leaders should ask themselves whether the AI applications they use treat all groups equally. Unfortunately, some AI applications, including machine learning algorithms, put certain groups at a disadvantage. This issue, called algorithmic bias, has been identified in diverse contexts, including judicial sentencing, credit scoring, education curriculum design, and hiring decisions. Even when the creators of an algorithm have not intended any bias or discrimination, they and their companies have an obligation to try to identify and prevent such problems and to correct them upon discovery.

Ad targeting in digital marketing, for example, uses machine learning to make many rapid decisions about what ad is shown to which consumer. Most companies don’t even know how the algorithms work, and the cost of an inappropriately targeted ad is typically only a few cents. However, some algorithms have been found to target high-paying job ads more to men, and others target ads for bail bondsmen to people with names more commonly held by African Americans. The ethical and reputational costs of biased ad-targeting algorithms, in such cases, can potentially be very high.

Of course, bias isn’t a new problem. Companies using traditional decision-making processes have made these judgment errors, and algorithms created by humans are sometimes biased as well. But AI applications, which can create and apply models much faster than traditional analytics, are more likely to exacerbate the issue. The problem becomes even more complex when black box AI approaches make interpreting or explaining the model’s logic difficult or impossible. While full transparency of models can help, leaders who consider their algorithms a competitive asset will quite likely resist sharing them.

Most organizations should develop a set of risk management guidelines to help management teams reduce algorithmic bias within their AI or machine learning applications. They should address such issues as transparency and interpretability of modeling approaches, bias in the underlying data sets used for AI design and training, algorithm review before deployment, and actions to take when potential bias is detected. While many of these activities will be performed by data scientists, they will need guidance from senior managers and leaders in the organization.
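
To make one of those review steps concrete, here is a minimal sketch, in Python, of a pre-deployment bias check: it compares the rate of favorable outcomes across groups and flags the model when the ratio falls below the commonly cited four-fifths threshold. The column names (group, approved), the toy data, and the threshold are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch of a pre-deployment bias check: compare the rate of
# favorable outcomes across demographic groups. The column names
# ("group", "approved") and the four-fifths threshold are illustrative.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's favorable-outcome rate to the highest."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Made-up model decisions, standing in for a sample of real ones
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

ratio = disparate_impact_ratio(decisions, "group", "approved")
if ratio < 0.8:  # four-fifths rule of thumb used in some fairness reviews
    print(f"Potential bias flagged: disparate impact ratio = {ratio:.2f}")
```

In practice, a check like this would run on held-out decisions from the actual model and would be only one item in a broader review checklist.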

Lean Toward Disclosure of AI Use

Some tech firms have been criticized for not revealing AI use to customers, even in prerelease product demos (as with Google’s AI conversation tool Duplex, which now discloses that it is an automated service). Nontechnical companies can learn from their experience and take preventive steps to reassure customers and other external stakeholders.

A recommended ethical approach to AI usage is to disclose to customers or affected parties that it is being used and provide at least some information about how it works. Intelligent agents or chatbots should be identified as machines. Automated decision systems that affect customers — say, in terms of the price they are being charged or the promotions they are offered — should reveal that they are automated and list the key factors used in making decisions. Machine learning models, for example, can be accompanied by the key variables used to make a particular decision for a particular customer. Every customer should have the “right to an explanation” — not just those affected by the GDPR in Europe, which already requires it.
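
As one illustration of what such an explanation might look like, the sketch below (Python) ranks the factors behind a single customer’s score from a simple linear model. The feature names, coefficients, and customer values are hypothetical; more complex, black box models would require dedicated explanation techniques.

```python
# Minimal sketch of a "key factors" explanation for one automated decision.
# Feature names and coefficients are hypothetical; in practice they would
# come from the fitted model behind the pricing or promotion engine.
import numpy as np

feature_names = ["tenure_years", "monthly_spend", "support_tickets"]
coefficients  = np.array([0.40, 0.02, -0.75])   # from a fitted linear model
customer      = np.array([3.0, 120.0, 4.0])     # one customer's feature values

# Per-feature contribution to this customer's model score
contributions = coefficients * customer
ranked = sorted(zip(feature_names, contributions), key=lambda x: -abs(x[1]))

print("Key factors in this decision:")
for name, value in ranked:
    direction = "raised" if value > 0 else "lowered"
    print(f"  {name}: {direction} the score by {abs(value):.2f}")
```

The point is not the particular method but the output: a short, plain-language list of the variables that mattered most for this customer, which can be shown alongside the automated decision.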

Also consider disclosing the types and sources of data used by the AI application. Consumers who are concerned about data misuse may be reassured by full disclosure, particularly if they perceive that the value they gain exceeds the potential cost of sharing their data.

While regulations requiring disclosure of data use are not yet widespread outside of Europe, we expect that requirements will expand, most likely affecting all industries. Forward-thinking companies will get out ahead of regulation and begin to disclose AI usage in situations that involve customers or other external stakeholders.

Tread Lightly on Privacy

AI technologies are increasingly finding their way into marketing and security systems, potentially raising privacy concerns. Some governments, for example, are using AI-based video surveillance technology to identify facial images in crowds and social events. Some tech companies have been criticized by their employees and external observers for contributing to such capabilities.

As nontech companies potentially increase their use of AI to personalize ads, websites, and marketing offers, it’s probably only a matter of time before these companies feel pushback from their customers and other stakeholders about privacy issues. As with other AI concerns, full disclosure of how data is being obtained and used could be the most effective antidote to privacy concerns. The pop-up messages saying “our website uses cookies,” a result of the GDPR legislation, could be a useful model for other data-oriented disclosures.

Financial services and other industries increasingly use AI to identify data breaches and fraud attempts. Substantial numbers of “false positive” results mean that some individuals — both customers and employees — may be unfairly accused of malfeasance. Companies employing these technologies should consider using human investigators to validate frauds or hacks before making accusations or turning suspects over to law enforcement. At least in the short run, AI used in this context may actually increase the need for human curators and investigators.
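
One way to build in that human checkpoint is sketched below in Python: alerts above a score threshold are routed to a review queue for an investigator rather than triggering an automatic accusation. The threshold, fields, and queue structure are assumptions for illustration, not the design of any real fraud system.

```python
# Minimal sketch of routing AI fraud flags to human investigators rather
# than acting on them automatically. The threshold and case fields are
# illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FraudAlert:
    account_id: str
    model_score: float  # model's estimated probability of fraud

@dataclass
class ReviewQueue:
    pending: List[FraudAlert] = field(default_factory=list)

    def triage(self, alert: FraudAlert, review_threshold: float = 0.7) -> str:
        if alert.model_score >= review_threshold:
            # Never accuse or escalate automatically: a human investigator
            # confirms the case before any action is taken.
            self.pending.append(alert)
            return "queued_for_human_review"
        return "no_action"

queue = ReviewQueue()
print(queue.triage(FraudAlert("acct-123", 0.92)))  # queued_for_human_review
print(queue.triage(FraudAlert("acct-456", 0.15)))  # no_action
```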

Help Alleviate Employee Anxiety

Over time, AI use will probably affect employee skill sets and jobs. In the 2018 Deloitte survey of AI-aware executives, 36% of respondents felt that job cuts from AI-driven automation rise to the level of an ethical risk. Some early concerns about massive unemployment from AI-driven automation have diminished, and many observers now believe that AI-driven unemployment is likely to be marginal over the next couple of decades. Given that AI supports particular tasks rather than entire jobs, machines are more likely to work alongside humans than to replace them. Nonetheless, many workers who fear job loss may be reluctant to embrace or explore AI.

An ethical approach is to advise employees of how AI may affect their jobs in the future, giving them time to acquire new skills or seek other employment. As some have suggested, the time for retraining is now. Bank of America, for example, determined that skills in helping customers with digital banking will probably be needed in the future, so it has developed a program to train some employees threatened by automation to help fill this need.

Recognize That AI Often Works Best With — Not Without — Humans

Humans working with machines are often more powerful than humans or machines working alone. In fact, many AI-related problems are the result of machines working without adequate human supervision or collaboration. Facebook, for example, has announced that it will add 10,000 people to its content review, privacy, and security teams to augment AI capabilities in addressing challenges with “fake news,” data privacy, biased ad targeting, and difficulties in recognizing inappropriate images.

Today’s AI technologies cannot effectively perform some tasks without human intervention. Don’t eliminate existing, typically human, approaches to solving customer or employee problems. Instead — as the Swedish bank SEB did with its intelligent agent Aida — introduce new capabilities as “beta” or “trainee” offerings and encourage users to provide feedback on their experience. Over time, as AI capabilities improve, communications with users may become more confident.

See the Big Picture

Perhaps the most important AI ethical issue is to build AI systems that respect human dignity and autonomy, and reflect societal values. Google’s AI ethics framework, for example, begins with the statement that AI should “be socially beneficial.” Given the uncertainties and fast-changing technologies, it may be difficult to anticipate all the ways in which AI might impinge on people and society before implementation — although companies should certainly try to do so. Small-scale experiments may uncover negative outcomes before they occur on a broad scale. But when signs of harm appear, it’s important to acknowledge and act on emerging threats quickly.

Of course, many companies are still very early in their AI journeys, and relatively few have seriously addressed the ethics of AI use in their businesses. But as bias, privacy, and security issues become increasingly important to individuals, AI’s ethical risks will become an increasingly important business issue, one that deserves a board-level governance structure and process.
