What Machines Can’t Do (Yet) in Real Work Settings

Across the vast range of real-world usage scenarios, there have been far more instances of augmentation of human work by smart machines than of full automation. That scenario is expected to continue for the foreseeable future.


Almost 30 years ago, Bob Thomas, then an MIT professor, published a book called What Machines Can't Do. He focused on manufacturing technology and argued that it wasn't yet ready to take over the factory from humans. While recent developments in artificial intelligence have raised the bar considerably for what machines can do, there are still many things they can't do yet, or at least can't do reliably.

AI systems may perform well in the research lab or in highly controlled application settings, but they still need human help in the types of real-world work settings we researched for a new book, Working With AI: Real Stories of Human-Machine Collaboration. Human workers were very much in evidence across our 30 case studies.

In this article, we use those examples to illustrate our list of AI-enabled activities that still require human assistance. These are activities where organizations need to continue to invest in human capital and where practitioners can expect job continuity for the immediate future.

Current Limitations of AI in the Workplace

AI continues to gain capabilities over time, so the question of what machines can and can’t do in real-world work settings is a moving target. Perhaps the reader of this article in 2032 will find it quaintly mistaken about AI’s limitations. For the moment, however, it is important not to expect more of AI than it can deliver. Some of the important current limitations are described below.

Understanding context. AI doesn't yet understand the broader context in which a business operates and a task is performed. We saw this issue in multiple case studies. It is relevant, for instance, in a "digital life underwriter" job, in which an AI system assesses underwriting risk based on many data elements in an applicant's medical records but without understanding the situation-specific context. One commonly prescribed drug, for example, reduces nausea for both cancer patients undergoing chemotherapy and pregnant women with morning sickness. As yet, the machine can't distinguish between these two situations when assessing the life insurance risk associated with this prescription.

We also saw instances where AI systems couldn’t know the context of the relationship between humans.

Reprint #: 64214
