The Jobs That Artificial Intelligence Will Create

A global study finds several new categories of human jobs emerging, requiring skills and training that will take many companies by surprise.


The threat that automation will eliminate a broad swath of jobs across the world economy is now well established. As artificial intelligence (AI) systems become ever more sophisticated, another wave of job displacement will almost certainly occur.

It can be a distressing picture.

But here’s what we’ve been overlooking: Many new jobs will also be created — jobs that look nothing like those that exist today.

In Accenture PLC’s global study of more than 1,000 large companies already using or testing AI and machine-learning systems, we identified the emergence of entire categories of new, uniquely human jobs. These roles are not replacing old ones. They are novel, requiring skills and training that have no precedents. (Accenture’s study, “How Companies Are Reimagining Business Processes With IT,” will be published this summer.)

More specifically, our research reveals three new categories of AI-driven business and technology jobs. We label them trainers, explainers, and sustainers. Humans in these roles will complement the tasks performed by cognitive technology, ensuring that the work of machines is both effective and responsible — that it is fair, transparent, and auditable.

Trainers

This first category of new jobs will need human workers to teach AI systems how they should perform — and it is emerging rapidly. At one end of the spectrum, trainers help natural-language processors and language translators make fewer errors. At the other end, they teach AI algorithms how to mimic human behaviors.

Customer service chatbots, for example, need to be trained to detect the complexities and subtleties of human communication. Yahoo Inc. is trying to teach its language processing system that people do not always literally mean what they say. Thus far, Yahoo engineers have developed an algorithm that can detect sarcasm on social media and websites with an accuracy of at least 80%.
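
Yahoo has not published how its detector works. As a rough illustration of the category, the sketch below trains a toy text classifier in Python; the example sentences, labels, and model choice are all stand-ins, and a production system would learn from a large labeled social-media corpus.

```python
# A toy sarcasm classifier: TF-IDF features plus logistic regression.
# The texts and labels are illustrative stand-ins for a real corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Oh great, another Monday. Just what I needed.",  # sarcastic
    "Wow, I love waiting on hold for an hour.",       # sarcastic
    "This restaurant was genuinely wonderful.",       # sincere
    "Thanks for the quick reply, very helpful!",      # sincere
]
labels = [1, 1, 0, 0]  # 1 = sarcastic, 0 = sincere

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Fantastic, my flight is delayed again."]))
```

Most of the trainer's work in a system like this lives in the data: finding and labeling the examples of sarcasm the model would otherwise misread.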

Consider, then, the job of “empathy trainer” — individuals who will teach AI systems to show compassion. The New York-based startup Kemoko Inc., d/b/a Koko, which sprang from the MIT Media Lab, has developed a machine-learning system that can help digital assistants such as Apple’s Siri and Amazon’s Alexa address people’s questions with sympathy and depth. Humans are now training the Koko algorithm to respond more empathetically to people who, for example, are frustrated that their luggage has been lost, that a product they’ve bought is defective, or that their cable service keeps going on the blink even after repeated attempts to fix it. The goal is for the system to be able to talk people through a problem or difficult situation using the appropriate amount of understanding, compassion, and maybe even humor. Whenever Koko responds inappropriately, a human trainer helps correct that action — and over time, the machine-learning algorithm gets better at determining the best response.
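
Koko's implementation is proprietary, so the following is only a toy sketch of the correction loop just described: the assistant keeps candidate replies for a situation, a human trainer rates whatever it says, and better-rated replies gradually win out. Every situation name and reply here is hypothetical.

```python
import random
from collections import defaultdict

# Toy human-in-the-loop trainer: candidate replies per situation,
# scored by trainer feedback; higher-scoring replies get chosen.
candidates = {
    "lost_luggage": [
        "Your bag is delayed.",  # terse, low-empathy reply
        "I'm so sorry; losing a bag is stressful. Let's track it together.",
    ],
}
scores = defaultdict(float)

def respond(situation):
    # Explore occasionally; otherwise use the best-rated reply so far.
    options = candidates[situation]
    if random.random() < 0.2:
        return random.choice(options)
    return max(options, key=lambda reply: scores[reply])

def trainer_feedback(reply, rating):
    # rating in [-1, 1], assigned by the human empathy trainer.
    scores[reply] += rating

for _ in range(50):  # many trainer-rated interactions
    reply = respond("lost_luggage")
    trainer_feedback(reply, 1.0 if "sorry" in reply else -1.0)

print(respond("lost_luggage"))  # now usually the empathetic reply
```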

Without an empathy trainer, Alexa might respond to a user’s anxieties with canned, repetitive responses such as “I’m sorry to hear that” or “Sometimes talking to a friend can help.” With the right training, Alexa becomes much more helpful. The following is a verbatim transcription of how Alexa with Koko responds to a person who says he’s worried that he’ll fail an upcoming exam: “Exams are really stressful, but a little anxiety can help us succeed. It sharpens our minds. … It’s your body’s way to prepare itself for action. It’s actually a really good thing. I wonder if you could think of your nerves as your secret weapon. Easier said than done, I know. But I think you will do much better than you think.”

Explainers

The second category of new jobs — explainers — will bridge the gap between technologists and business leaders. Explainers will help provide clarity, which is becoming all the more important as AI systems’ opaqueness increases. Many executives are uneasy with the “black box” nature of sophisticated machine-learning algorithms, especially when the systems they power recommend actions that go against the grain of conventional wisdom. Indeed, governments have already been considering regulations in this area. For example, the European Union’s new General Data Protection Regulation, which is slated to take effect in 2018, will effectively create a “right to explanation,” allowing consumers to question and fight any decision made purely on an algorithmic basis that affects them.

Companies that deploy advanced AI systems will need a cadre of employees who can explain the inner workings of complex algorithms to nontechnical professionals. For example, algorithm forensics analysts would be responsible for holding any algorithm accountable for its results. When a system makes a mistake or when its decisions lead to unintended negative consequences, the forensics analyst would be expected to conduct an “autopsy” on the event to understand the causes of that behavior, allowing it to be corrected. Certain types of algorithms, like decision trees, are relatively straightforward to explain. Others, like machine-learning bots, are more complicated. Nevertheless, the forensics analyst needs to have the proper training and skills to perform detailed autopsies and explain their results.
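
The contrast is easy to see in code. A shallow decision tree can be dumped as plain if-then rules, which is why such models are the easy case for a forensics analyst; the sketch below uses scikit-learn and its bundled Iris dataset purely as an example.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# A shallow decision tree prints as human-readable rules, making
# its decisions straightforward to explain and audit.
iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2).fit(iris.data, iris.target)
print(export_text(tree, feature_names=list(iris.feature_names)))
```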

Here, techniques like Local Interpretable Model-Agnostic Explanations (LIME), which explains the underlying rationale and trustworthiness of a machine prediction, can be extremely useful. LIME doesn’t care about the actual AI algorithms used. In fact, it doesn’t need to know anything about the inner workings. To perform an autopsy of any result, it makes slight changes to the input variables and observes how they alter that decision. With that information, the algorithm forensics analyst can pinpoint the data that led to a particular result.

So, for instance, if an expert recruiting system has identified the best candidate for a research and development job, the analyst using LIME could identify the variables that led to that conclusion (such as education and deep expertise in a particular, narrow field) as well as the evidence against it (such as inexperience in working on collaborative teams). Using such techniques, the forensics analyst can explain why someone was hired or passed over for promotion. In other situations, the analyst can help demystify why an AI-driven manufacturing process was halted or why a marketing campaign targeted only a subset of consumers.
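
To make the mechanics concrete, here is a minimal from-scratch sketch of LIME's core move, applied to a made-up recruiting model. The feature names, the black-box scoring function, and the candidate's values are invented for illustration; in practice, an analyst would more likely reach for the open-source lime library than hand-roll this.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical feature names for a recruiting model.
feature_names = ["years_education", "field_expertise", "team_experience"]

def black_box(X):
    # Stand-in for an opaque hiring model: returns a suitability score.
    return 0.5 * X[:, 0] + 1.5 * X[:, 1] - 0.8 * X[:, 2]

candidate = np.array([8.0, 9.0, 2.0])  # the decision to explain

# 1. Perturb the instance: make slight changes to the input variables.
rng = np.random.default_rng(0)
perturbed = candidate + rng.normal(scale=0.5, size=(500, 3))

# 2. Weight each perturbed sample by its proximity to the original.
weights = np.exp(-np.sum((perturbed - candidate) ** 2, axis=1))

# 3. Fit a simple, interpretable surrogate to the black box locally.
surrogate = Ridge().fit(perturbed, black_box(perturbed),
                        sample_weight=weights)

for name, coef in zip(feature_names, surrogate.coef_):
    verdict = "supports" if coef > 0 else "counts against"
    print(f"{name}: {coef:+.2f} ({verdict} the recommendation)")
```

The surrogate's coefficients play the role of the "evidence for" and "evidence against" described above: in this toy model, deep field expertise pulls the recommendation up, while thin team experience pulls it down.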

Sustainers

The final category of new jobs our research identified — sustainers — will help ensure that AI systems are operating as designed and that unintended consequences are addressed with the appropriate urgency. In our survey, we found that less than one-third of companies have a high degree of confidence in the fairness and auditability of their AI systems, and less than half have similar confidence in the safety of those systems. Clearly, those statistics indicate fundamental issues that need to be resolved for the continued use of AI technologies, and that’s where sustainers will play a crucial role.

One of the most important functions will be the ethics compliance manager. Individuals in this role will act as a kind of watchdog and ombudsman for upholding norms of human values and morals — intervening if, for example, an AI system for credit approval was discriminating against people in certain professions or specific geographic areas. Other biases might be subtler — for example, a search algorithm that responds with images of only white women when someone queries “loving grandmother.” The ethics compliance manager could work with an algorithm forensics analyst to uncover the underlying reasons for such results and then implement the appropriate fixes.
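
A first-pass audit for the credit example might look like the sketch below. The data, column names, and threshold are illustrative assumptions; the 80% cutoff borrows the "four-fifths rule" long used in U.S. employment-discrimination screening.

```python
import pandas as pd

# Hypothetical decision log from a credit-approval model.
decisions = pd.DataFrame({
    "region":   ["north", "north", "north", "south", "south", "south"],
    "approved": [1,        1,       1,       0,       0,       1],
})

# Approval rate per group: the basic fairness screen.
rates = decisions.groupby("region")["approved"].mean()
print(rates)

# Flag the model if any group's approval rate falls below 80%
# of the most-favored group's rate (the four-fifths heuristic).
if (rates / rates.max()).min() < 0.8:
    print("Potential disparate impact: escalate to a forensics analyst.")
```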

In the future, AI may become more self-governing. Mark O. Riedl and Brent Harrison, researchers at the School of Interactive Computing at Georgia Institute of Technology, have developed an AI prototype named Quixote, which can learn about ethics by reading simple stories. According to Riedl and Harrison, the system is able to reverse engineer human values through stories about how humans interact with one another. Quixote has learned, for instance, why stealing is not a good idea and that striving for efficiency is fine except when it conflicts with other important considerations. But even given such innovations, human ethics compliance managers will play a critical role in monitoring and helping to ensure the proper operation of advanced systems.
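
Riedl and Harrison's system derives its signals from crowdsourced stories, and its details are beyond a short example, but the toy sketch below captures the flavor of the idea. It infers an ordering norm (pay before leaving) from example stories and flags a plan that violates it; the stories, actions, and norm check are all invented for illustration.

```python
# Toy norm learning from stories: actions that example stories
# consistently place before "leave" are treated as norms.
stories = [
    ["enter_pharmacy", "take_medicine", "pay", "leave"],
    ["enter_pharmacy", "pay", "take_medicine", "leave"],
]

def precedes(a, b):
    # True if action a comes before b in every story containing both.
    return all(s.index(a) < s.index(b) for s in stories if a in s and b in s)

# An "efficient" plan that skips paying.
plan = ["enter_pharmacy", "take_medicine", "leave"]

for action in ("pay", "take_medicine"):
    if precedes(action, "leave") and action not in plan:
        print(f"Plan violates learned norm: '{action}' should precede 'leave'.")
```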

The types of jobs we describe here are unprecedented and will be required at scale across industries. (For additional examples, see “Representative Roles Created by AI.”) This shift will put a huge amount of pressure on organizations’ training and development operations. It may also lead us to question many assumptions we have made about traditional educational requirements for professional roles.

Empathy trainers, for example, may not need a college degree. Individuals who have a high school education and are inherently empathetic (a measurable characteristic) could be taught the necessary skills in an in-house training program. In fact, the effect of many of these new positions may be the rise of a “no-collar” workforce that slowly replaces traditional blue-collar jobs in manufacturing and other professions. But where and how these workers will be trained remain open questions. In our view, the answers need to begin with an organization’s own learning and development operations.

On the other hand, a number of new jobs — ethics compliance manager, for example — are likely to require advanced degrees and highly specialized skill sets. So, just as organizations must address the need to train one part of the workforce for emerging no-collar roles, they must reimagine their human resources processes to better attract, train, and retain highly educated professionals whose talents will be in very high demand. As with so many technology transformations, the challenges are often more human than technical.

This article was originally published on March 27, 2017. It has been updated to reflect edits made for its inclusion in our Summer 2017 print edition.

Comments
Donato Toppeta
The EU has issued the Ethics Guidelines for Trustworthy Artificial Intelligence (https://ec.europa.eu/futurium/en/ai-alliance-consultation). It is a first step, providing seven key requirements that AI systems should meet in order to be trustworthy:
Human agency and oversight
Technical robustness and safety
Privacy and Data governance
Transparency
Diversity, non-discrimination and fairness
Societal and environmental well-being
Accountability

Interpretability has become an important topic of research as more machine learning (ML) models are deployed and widely used to make important decisions, as discussed in https://arxiv.org/abs/1902.03129.
anshul sharma
I’ve been browsing online for more than four hours today, yet I never found any article as interesting as yours. It’s well worth it for me. Thanks for this post!
Sivakumar Balasubramanian
AI will always be inferior to the human brain, so there is no need to panic. The more creative one is, the better the chances of migrating to a newer role. Never worry; we should all be fine.
Joseph Psotka
Sadly, there are no numbers attached to these jobs, whereas we can easily contemplate that 7 million mobility jobs, 14 million retail jobs, and 10 million service jobs will be gone within 15 years. These three categories might have a million jobs combined. How will that help the massive unemployment disruption AI may create?
Abhijit Bhattacharya
The article focuses on the exciting side of AI. Still, I think it is also important to discuss the chances of providing destructive training (or bad ethics) to the machines. Society has to figure out what kind of human skills should be developed to prevent AI from developing such destructive abilities.
Randy Crawford
These three roles assume a lot of stability in how AI is used within the enterprise. They presume the presence of commercial (third-party) AI software that plays a deep role in the critical path of a company's operations.

I know few industries or companies like that today, even where non-automated software implements a company's daily processes (other than software development). Perhaps these roles would best suit companies whose practices have been mostly or fully automated, so that all of the company's processes could subsequently be subsumed by an AI-driven application.

But can we really foresee this state of affairs with any fidelity yet? Don't we first need to see examples of companies that are implemented mostly via software, but run manually (with an eye to being automated and tuned by AI and machine learning)?
Munyaradzi Mushato
Indeed, AI is the new normal, or will be very soon, and the early birds will certainly enjoy a first-mover advantage. On the issue of new skills and behaviors to drive the AI age, it is for industry and learning institutions, especially tertiary ones, to increase levels of research-based collaboration, which should see the emergence of entirely new curricula and qualifications set to meet the emerging skills demand in industry. So the AI wave can indeed wipe out professions, jobs, and whole learning institutions, which may soon discover that their degrees have no takers anymore. In fact, innovation in non-AI technologies at the industry level is already leading industry and commerce away from traditional knowledge and behaviors obtained in tertiary learning institutions. Ernst & Young has taken a lead in that phenomenon by announcing that it has scrapped degrees from its minimum entry requirements after rightly observing how poorly degrees predict job performance. http://www.huffingtonpost.co.uk/2016/01/07/ernst-and-young-removes-degree-classification-entry-criteria_n_7932590.htm
ANA MERCEDES GAUNA
(Translated from Portuguese.) I hope this never happens. People are far more important than this artificial intelligence, than these robots. People need jobs and work; they have to support their father and mother (when elderly), their children, their wife or husband, their family. These machines (artificial intelligence/robots) must never be created to harm human beings. These machines must be created to help people. It is the human being who reasons, not these machines (robots).
Kenneth Martin
Human behavior is complex, and we are still learning more about it and the unknown hazards and risks associated with it. So, when combining two complex systems such as AI and human behavior, there will be many unknowns regarding the effects. If a system is complex, then it cannot be measured. Here is a link to an article on the benefits and risks of artificial intelligence: https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/
Aik Koon Wee
Will the AIs become their own trainers, explainers, etc.?
Tinko Stoyanov
Please do not forget professions such as researchers and developers of AI-based systems, designers, maintenance personnel, etc. Those people will be at the “genesis” of such systems. There are not many of them in the industry now, and the demand for such professionals will skyrocket soon.
Michael Zeldich
Robots will not be competitors to workers if they belong to the workers.
That switch requires changes in the model of industrial business.
Relations in industrial businesses should be remodeled after agricultural businesses.
That would convert all employees into self-employed people responsible for all business-related expenses and living on the profit from trading the results of their activity.
That remodeling would open a way to increase the productivity of the economic system and help bring labor back.