Every day, we hear about smart machines with new capabilities: computers that can outplay chess masters or process natural language to answer increasingly complex questions; new cars that alert us when the driver ahead hits the brakes, when we drift out of our lane, or when a pedestrian suddenly steps off the curb. But how soon will it be before smart machines perform complex, multifaceted services such as looking out for our health?
In a recent article in The New Yorker, “A.I. Versus M.D.,” Siddhartha Mukherjee, a hematologist and oncologist at Columbia University Medical Center, describes the increasingly nuanced role computers are playing in cancer screening. Twenty years ago, Mukherjee notes, diagnosticians used computers to help identify suspicious patterns or waveforms and, later, to confirm a hypothesis. However, he writes, the results were mixed: while biopsy rates rose, detection rates did not, and false positives jumped.
More recent intelligent systems use a computing strategy modeled on the brain, known as a “neural network,” which can “learn” how to diagnose illnesses. Mukherjee describes a 2015 study by Sebastian Thrun of Stanford University in which a smart machine was asked to classify 14,000 images that dermatologists had determined showed abnormalities (either benign or cancerous). The system diagnosed the problems correctly 72% of the time, compared with 66% for two board-certified dermatologists. Then, in a related study, 21 dermatologists were asked to review a set of about 2,000 images for skin cancers. In all but a few cases, the machine spotted melanomas better than the doctors did; what’s more, for reasons that aren’t clear, it also learned to differentiate moles from cancers.
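The “learning” in such a system is mechanical rather than mysterious: the network adjusts its internal weights to reduce its error on labeled examples. The sketch below, a hypothetical toy written in Python with NumPy on synthetic data (not the Stanford team’s model, which trained a far larger network on actual lesion images), illustrates that training loop for a binary benign-versus-malignant classification.

```python
import numpy as np

# Minimal sketch of supervised "learning": a tiny feedforward network
# adjusts its weights to reduce error on labeled examples. The data is
# synthetic -- random feature vectors standing in for image features.

rng = np.random.default_rng(0)

# Synthetic dataset: 200 samples, 10 features; label 1 = "malignant".
X = rng.normal(size=(200, 10))
hidden_rule = rng.normal(size=10)
y = (X @ hidden_rule > 0).astype(float)   # labels from a hidden rule

# One hidden layer, sigmoid output for binary classification.
W1 = rng.normal(scale=0.1, size=(10, 16))
W2 = rng.normal(scale=0.1, size=(16, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(500):
    # Forward pass: predict P(malignant) for every sample.
    h = np.tanh(X @ W1)                   # hidden activations
    p = sigmoid(h @ W2).ravel()

    # Backward pass for cross-entropy loss (chain rule).
    grad_out = (p - y)[:, None] / len(y)  # gradient at the output
    dW2 = h.T @ grad_out
    dh = grad_out @ W2.T * (1 - h**2)     # tanh derivative
    dW1 = X.T @ dh

    # "Learning" = nudging the weights against the gradient.
    W1 -= lr * dW1
    W2 -= lr * dW2

preds = sigmoid(np.tanh(X @ W1) @ W2).ravel() > 0.5
print(f"training accuracy: {(preds == y).mean():.2%}")
```

Real diagnostic systems differ mainly in scale and architecture; the underlying loop of predict, measure error, and adjust weights is the same.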
Applying similar capabilities to detect other illnesses early and accurately may not be far away. By monitoring a person’s speech patterns with a cellphone, for example, it may be possible to detect early signs of Alzheimer’s disease. Steering wheels with sensors that detect hesitations and tremors might identify potential cases of Parkinson’s disease. Similarly, researchers say, algorithms tracking patients’ heartbeats may identify cardiac issues before they show up in other ways. Patients concerned about skin lesions will be able to send images from their iPhones to robots, which over time will become more and more skilled at diagnosis.
So, what will this mean for specialists such as dermatologists or radiologists? At a recent conference on AI and machine learning at MIT’s Initiative on the Digital Economy in Cambridge, Massachusetts, 69% of attendees said they expected most medical images to be interpreted “primarily by machines” by 2020, with more than 95% expecting this to occur by 2030. Some 14% of the attendees said they expected most surgeries to be performed by machines by 2020, with 54% saying it would happen by 2030. (See “Expectations for Smart Machines in Medicine.”)
Mukherjee doesn’t think skilled medical specialists are at risk of being replaced by smart machines. For one thing, doctors (at least those with a good “bedside manner”) can provide a degree of explanation and interaction that algorithms will never be capable of offering. In the near term, at least, machines are likely to augment human capabilities rather than replace them. Yet the increasing capacity of learning machines poses a number of questions that apply not only to medicine but to other fields as well. “As machines learn more and more,” Mukherjee wonders, “will humans learn less and less?” And who will train the technicians?