Winning the Race With Ever-Smarter Machines

Rapid advances in information technology are yielding applications that can do anything from answering game show questions to driving cars. But to gain true leverage from these ever-improving technologies, companies need new processes and business models.

In 2011, an IBM supercomputer called Watson beat human champions in the Jeopardy! game show.

Image courtesy of IBM.

In the past few years, progress in information technology — in computer hardware, software and networks — has been so rapid and so surprising that many present-day organizations, institutions, policies and mind-sets are not keeping up. We used to be pretty confident that we knew the relative strengths and weaknesses of computers vis-à-vis humans. But computers have started making inroads in some unexpected areas — and this has significant implications for managers and organizations.

A clear illustration of the dramatic increase in computing power comes from comparing a book published in 2004 with an announcement made in 2010. The book is The New Division of Labor by economists Frank Levy and Richard Murnane, and it’s a thoroughly researched description of the comparative capabilities of computers and human workers.

In its second chapter, titled “Why People Still Matter,” the authors present a spectrum of information-processing tasks. At one end are straightforward applications of existing rules. These tasks, such as performing arithmetic, can be easily automated, since computers are good at following rules.

At the other end of the complexity spectrum are pattern recognition tasks where the rules can’t be inferred. The New Division of Labor gives driving in traffic as an example of this type of task, and asserts that it is not automatable:

The … truck driver is processing a constant stream of [visual, aural and tactile] information from his environment. … [T]o program this behavior we could begin with a video camera and other sensors to capture the sensory input. But executing a left turn against oncoming traffic involves so many factors that it is hard to imagine discovering the set of rules that can replicate a driver’s behavior. …

Articulating [human] knowledge and embedding it in software for all but highly structured situations are at present enormously difficult tasks. … Computers cannot easily substitute for humans in [jobs like truck driving].1

The results of the first DARPA Grand Challenge, held in 2004, supported Levy and Murnane’s conclusion. The challenge was to build a driverless vehicle that could navigate a 142-mile route through the Mojave Desert. The “winning” team made it less than eight miles before failing.2

Just six years later, however, real-world driving went from being an example of a task that couldn’t be automated to an example of one that was. In October 2010, Google announced on its official blog that it had modified six Toyota Priuses into fully autonomous cars. Aided by data Google had previously gathered about the routes, the cars had driven more than 1,000 miles on U.S. roads without any human involvement at all, and more than 140,000 miles with only minor inputs from the person behind the wheel. (To comply with driving laws, Google kept a person behind the steering wheel at all times.)3

Levy and Murnane were correct that automatic driving on populated roads is an enormously difficult task, and it’s not easy to build a computer that can substitute for human perception and pattern matching in this domain. Not easy, but possible — and this challenge is being met.

The Google technologists are not taking shortcuts around the challenges listed by Levy and Murnane, but are meeting them head on. They used the staggering amounts of data collected for Google Maps and Google Street View as well as special sensors to provide as much information as possible about the roads on which their cars were traveling. In particular, their vehicles collect huge volumes of real-time data using video, radar and optical remote sensing (LIDAR) gear mounted on the car. These data are fed into software that takes into account the rules of the road; the presence, trajectory and likely identity of all objects in the vicinity; the driving conditions; and so on. This software controls the car and probably provides better awareness, vigilance and reaction times than any human driver could. So far, the Google vehicles’ only accident came when one was rear-ended by another car while stopped at a traffic light.

Creating autonomous cars is not easy. But in a world of plentiful accurate data, powerful sensors and massive storage capacity and processing power, it is possible. This is the world we live in now. It’s one where computers improve so quickly that their capabilities pass from the realm of science fiction into the everyday world, not over the course of a human lifetime but in just a few years.

Levy and Murnane give complex communication as another example of a human capability that is very hard for machines to emulate.4 Complex communication entails conversing with a human being, especially in situations that are complicated, emotional or ambiguous. Evolution has “programmed” people to do this effortlessly, but it’s been very hard to program computers to do the same. For many of us, the breakthrough came when we started using Apple’s Siri personal assistant. Siri, an application that runs on the latest generation of iPhones, understands human speech well enough to answer a broad range of everyday requests, from “Where’s the nearest gas station?” to “Please make a lunch appointment with Sergey.” It does this by linking to a variety of public and private databases. Siri resolves ambiguous queries based on context and even has a bit of personality and humor that it uses when appropriate. As a result, many tasks on the iPhone, like making an appointment or finding a restaurant, are now easier to do by speaking natural language than by working through menus and typing text.

The Google driverless car shows how far and how fast digital pattern recognition abilities have advanced recently. Apple’s Siri shows how much progress has been made in computers’ ability to engage in complex communication. A third technology, developed at IBM’s Watson Research Center and named Watson, shows how powerful it can be to combine these two abilities, and how far computers have recently advanced into territory thought to be uniquely human.

Watson is a supercomputer designed to play the game show Jeopardy!, in which contestants are asked questions on a wide variety of topics that are not known in advance.5 In many cases, these questions involve puns and other types of wordplay. It can be difficult to figure out precisely what is being asked or how an answer should be constructed. In short, playing Jeopardy! well requires the ability to engage in complex communication. The way Watson plays it, the game also requires massive amounts of pattern matching. The supercomputer has been loaded with hundreds of millions of unconnected digital documents, including encyclopedias and other reference works, newspaper stories and the Bible. When it receives a question, Watson immediately goes to work to figure out what is being asked (using algorithms that specialize in complex communication) and then starts querying all these documents to find and match patterns in search of the answer.

What comes out in the end is so fast and accurate that even the best human players can’t keep up. In February 2011, Watson played in a televised tournament against the two most accomplished contestants in the show’s history. After two rounds of the game shown over three days, the computer finished with more than three times as much money as its closest flesh-and-blood competitor. One of those competitors, Ken Jennings, acknowledged that digital technologies had taken over the game. Underneath his written response to the tournament’s last question, he added: “I for one welcome our new computer overlords.”6

Where did these overlords come from? How has science fiction become business reality so quickly? Two concepts are essential for understanding this remarkable progress. The first, Moore’s Law, is well known: it is an expansion of an observation made by Gordon Moore, cofounder of microprocessor maker Intel. In a 1965 article in Electronics magazine, Moore noted that the number of transistors in a minimum-cost integrated circuit had been doubling every 12 months, and he predicted that this rate of improvement would continue into the future.7 When this proved to be the case, Moore’s Law was born.

Later modifications changed the time required for the doubling to occur; the most widely accepted period at present is 18 months.8 Variations of Moore’s Law have been applied to improvement over time in disk drive capacity, display resolution, network bandwidth and, most recently, energy consumption.9 In these and many other cases of digital improvement, doubling happens both quickly and reliably.

Software, at least in some domains, seems to progress as fast as hardware does, or faster. Computer scientist Martin Grötschel analyzed the speed with which a standard optimization problem could be solved by computers between 1988 and 2003. He documented a 43-million-fold improvement, which he broke down into two factors: faster processors and better algorithms embedded in software. Processor speeds improved by a factor of 1,000, but those gains were dwarfed by the algorithms, which got 43,000 times better over the same period.10
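The two factors compound multiplicatively. For readers who want to check the arithmetic, a minimal Python sketch using the figures above:

```python
# Grötschel's decomposition: total speedup is the product of the hardware
# factor and the software (algorithm) factor over 1988-2003.
hardware_speedup = 1_000   # faster processors
software_speedup = 43_000  # better algorithms

print(f"{hardware_speedup * software_speedup:,}x")  # 43,000,000x
```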

The second concept relevant for understanding recent computing advances is closely related to Moore’s Law. It comes from an ancient story about math made relevant to the present age by the innovator and futurist Ray Kurzweil. In one version of the story, the inventor of the game of chess shows his creation to his country’s ruler. The emperor is so delighted by the game that he allows the inventor to name his own reward. The clever man asks for a quantity of rice, to be determined as follows: one grain of rice is placed on the first square of the chessboard, two grains on the second, four on the third, and so on, with each square receiving twice as many grains as the previous square.

The emperor agrees, thinking that this reward is too small. He soon sees, however, that the constant doubling results in tremendously large numbers. The inventor winds up with 2⁶⁴ − 1 grains of rice, or a pile bigger than Mount Everest. In some versions of the story, the emperor is so displeased at being outsmarted that he beheads the inventor.

In his 2000 book The Age of Spiritual Machines: When Computers Exceed Human Intelligence, Kurzweil notes that the pile of rice is not that exceptional on the first half of the chessboard:

After thirty-two squares, the emperor had given the inventor about 4 billion grains of rice. That’s a reasonable quantity — about one large field’s worth — and the emperor did start to take notice.

But the emperor could still remain an emperor. And the inventor could still retain his head. It was as they headed into the second half of the chessboard that at least one of them got into trouble.11

Kurzweil’s point is that constant doubling and other forms of exponential growth are deceptive because they’re initially unremarkable. Exponential increases initially look a lot like standard linear ones, but they’re not. As time goes by — as we move into the second half of the chessboard — exponential growth confounds our intuition and expectations. It accelerates far past linear growth, yielding Everest-sized piles of rice — and computers that can accomplish previously impossible tasks.
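To see how sharply constant doubling departs from a linear trend, here is a minimal Python sketch of the parable (the sampled squares are arbitrary; the linear benchmark simply adds one grain per square):

```python
# Grains of rice on selected squares under constant doubling, versus a
# linear benchmark that adds one grain per square.
for square in (1, 2, 8, 32, 33, 64):
    doubling = 2 ** (square - 1)
    print(f"square {square:2d}: doubling {doubling:,} vs. linear {square}")

# Total over all 64 squares: 1 + 2 + 4 + ... + 2**63 = 2**64 - 1
print(f"total grains: {2**64 - 1:,}")  # 18,446,744,073,709,551,615
```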

Google’s autonomous cars collect huge volumes of real-time data using video, radar and optical remote sensing (LIDAR) gear mounted on the cars.

Image courtesy of Google.

So where are we in the history of business use of computers? Are we in the second half of the chessboard yet? This is an impossible question to answer precisely, of course, but a simple, if whimsical, calculation yields an intriguing conclusion. U.S. government economic statistics added “information technology” as a category of business investment in 1958, so let’s use that as our starting year. And let’s take the standard 18 months as the Moore’s Law doubling period. Thirty-two doublings at 18 months apiece span 48 years, taking us to 2006 and to the second half of the chessboard. Advances like Google’s autonomous cars and Watson the Jeopardy! champion supercomputer, then, can be seen as the first examples of the kinds of digital innovations we’ll see as we move further into the second half — into the phase where exponential growth yields jaw-dropping results.
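The whimsical arithmetic is easy to verify; a minimal Python sketch:

```python
# 32 Moore's Law doublings at 18 months (1.5 years) apiece, starting from
# 1958, the year "information technology" became a category of business
# investment in U.S. government economic statistics.
start_year = 1958
doublings_to_halfway = 32  # half of the chessboard's 64 squares
doubling_period_years = 1.5

print(start_year + int(doublings_to_halfway * doubling_period_years))  # 2006
```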

These results will be felt across virtually every task, job and industry. Economists Susanto Basu and John Fernald highlight how powerful, inexpensive information and communication technology allows departures from business as usual:

The availability of cheap ICT capital allows firms to deploy their other inputs in radically different and productivity-enhancing ways. In so doing, cheap computers and telecommunications equipment can foster an ever-expanding sequence of complementary inventions in industries using ICT.12

Note that computers increase productivity not only in the high-tech sector but also in all industries that purchase and use digital gear. And these days, that means all industries; even historically low-tech American sectors like agriculture and mining are now spending billions of dollars each year to digitize themselves.

Note also the choice of words by Basu and Fernald: Computers and networks bring an ever-expanding set of opportunities to companies. Digitization, in other words, is not a single project providing one-time benefits. Instead, it’s an ongoing process of creative destruction; innovators use both new and established technologies to make deep changes at the level of the task, the job, the process — and even the organization itself. These changes build and feed on each other, so that the possibilities offered really are constantly expanding. (This does not mean, however, that today’s rapid advances in computing are automatically beneficial for everyone; in fact, in our recent e-book Race Against the Machine, we argue that as digital technologies change rapidly, society faces a serious problem because millions of people are being left behind — facing either stagnant incomes or unemployment.)

Competing Using Machines

What does it mean for companies — and their workers, and the way work is organized — when machines can do a better job than humans at an increasing number of tasks? When considering this question, it’s helpful to remember that the idea of humans competing against machines is not new; in fact, it’s even part of American folklore. In the latter part of the nineteenth century, the legend of John Henry became popular as the steam-powered Industrial Revolution transformed every industry and job that relied heavily on human strength. It’s the story of a contest between a steam-powered drill and John Henry, a muscular railroad worker and former slave, to see which of the two could bore the longer hole into solid rock.13 Henry wins this race against the machine but loses his life; his exertions cause his heart to burst. Humans never directly challenged the steam drill again.

This legend reflected popular unease at the time about the potential for technology to make human labor obsolete. But this is not at all what happened as the Industrial Revolution progressed. As steam power advanced and spread throughout industry, more human workers were needed, not fewer. They were needed not so much for their raw physical strength (as John Henry was), but instead for other human skills: physical ones such as locomotion, dexterity, coordination and perception, and mental ones such as communication, pattern matching and creativity.

The John Henry legend shows us that, in many contexts, humans will eventually lose the head-to-head race against the machine. But the broader lesson of the Industrial Revolution is more like the Indianapolis 500 speedway race than John Henry: Over time, technological progress creates opportunities in which people race using machines. Humans and machines can then collaborate in a race to produce more, to capture markets and to beat other teams of humans and machines. This lesson remains valid and instructive today as machines are winning more types of head-to-head mental contests, not just physical ones. As with the Industrial Revolution, we believe things will get really interesting as more people start competing using these powerful new machines rather than competing against them.

The game of chess provides a great example. In 1997, Garry Kasparov, humanity’s most brilliant chess master, lost to Deep Blue, a $10 million specialized supercomputer programmed by a team from IBM. That was big news when it happened. What is less well known, however, is that the best chess players on the planet today are not computers. Nor are they humans. The best chess players are teams of humans using computers.

In matches pitting humans against humans, consulting a computer is considered cheating. In computer chess competitions (yes, they exist, too), human intervention is likewise forbidden. “Freestyle” competitions, however, allow any combination of humans and computers. As Garry Kasparov himself notes of one such competition:

The teams of human plus machine dominated even the strongest computers. The chess machine Hydra, which is a chess-specific supercomputer like Deep Blue, was no match for a strong human player using a relatively weak laptop. Human strategic guidance combined with the tactical acuity of a computer was overwhelming.14

The overall winner in that competition had neither the best human players nor the most powerful computers. Instead, Kasparov observed, it consisted of

a pair of amateur American chess players using three computers at the same time.

Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.15

This pattern is true not only in chess but throughout the economy. In medicine, law, finance, retailing, manufacturing and even scientific discovery, the key to winning the race is not to race against machines, but to win using machines. While computers excel at routine processing, repetitive arithmetic and error-free consistency, and are quickly getting better at complex communication and pattern matching, they have three failings: they lack intuition and creativity, they can be painfully fragile in uncertain or unpredictable environments, and they are lost when asked to work even a little outside a predefined domain. (See “Skills That Will Remain in Demand.”) Fortunately, humans are strongest exactly where computers are weak, creating a potentially beautiful partnership.

Fostering Organizational Innovation

How can we implement winning “human + machine” strategies? The solution is organizational innovation: inventing new organizational structures, processes and business models that leverage ever-advancing technology and human skills. Such strategies require more than simply automating existing jobs without rethinking them. Merely substituting machines for human labor rarely yields high returns; it produces only incremental productivity improvements. Instead, managers and entrepreneurs should develop new business models and processes that combine workers with ever more powerful technology to create value. Some companies are showing how to effectively race with machines — how to combine the relative strengths of people and digital technologies and achieve good business results. Here is a sampling of the smart ways companies are mixing human and machine capabilities:

1. Create processes that combine the speed of technology with human insight. Even though computers have made amazing recent progress in pattern recognition, they’re still not any good at figuring out things like what teenagers will want to wear next. However, many fashion retailers still try to plan their collections far in advance, using models based on past sales to predict future trends and demand. The Zara chain of clothing stores, which is part of the Spanish company Inditex, takes a very different approach. Instead of relying on algorithms to try to determine what will sell next, Zara relies on the abilities of its store managers around the world to discern emerging fashion trends in their communities and customer bases. These managers get an electronic form twice a week showing all available garments. They fill it out and send it back, and get the clothes they ordered within a couple of days. Store managers are also regularly consulted about the trends that they’re noticing so that the company can keep making the clothes young people feel they have to have. The combination of human insight and speedy technology makes Zara far more responsive and agile than its competitors.

2. Let humans be creative — and use technology to test their ideas. Another example of effectively combining the skills of people and technology is the way the office supply chain Staples used an application developed by Affinnova, a software and consulting company based in Waltham, Massachusetts, when designing new packaging for its line of copy papers. People are much more creative than computers but are often poor judges of which of their ideas are any good, or of how those ideas should best be combined. So software now exists that quickly and accurately tests customer responses to different ideas and finds the optimal set. People came up with the elements of the Staples packaging — colors, slogans, logos and so on — and the software ran a Web-based survey to find the best mix.
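In spirit, this is a search over combinations of human-generated elements, scored by survey responses. Here is a minimal, hypothetical Python sketch of the idea; the element names and scores are invented for illustration, and Affinnova’s actual system was certainly more sophisticated:

```python
import itertools
import random

# Hypothetical sketch: enumerate combinations of human-generated packaging
# elements, score each with (simulated) survey results, keep the best mix.
colors = ["red", "blue", "green"]
slogans = ["slogan A", "slogan B"]  # invented placeholders
logos = ["logo 1", "logo 2"]

random.seed(0)  # random scores stand in for real Web-survey data
survey_score = {combo: random.random()
                for combo in itertools.product(colors, slogans, logos)}

best_mix = max(survey_score, key=survey_score.get)
print(best_mix, round(survey_score[best_mix], 2))
```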

3. Leverage IT to enable new forms of human collaboration and commerce. Machines are also providing radically new ways for people to work together to solve scientific problems. For instance, Foldit is a social game developed at the University of Washington that enlists hundreds of thousands of players who compete to fold and manipulate digital models of protein molecules. Recently, UW researchers credited these players with figuring out the structure of an enzyme from an AIDS-like virus.16

The Internet has also been a particularly rich breeding ground for new marketplaces and ecosystems that combine human and machine capabilities. For instance, it facilitates the operation of “micro-multinationals” — small businesses that work with customers, suppliers and partners globally to create and deliver value.17 What’s more, platforms like eBay, Apple’s iTunes, Google’s Android operating system and Amazon.com’s marketplace have spurred thousands of people to earn their livings by selling new, improved or simply unusual or cheaper products to a worldwide customer base. Technology manages most of the matching of buyers and sellers, the mechanics of the transactions and, in some cases, even marketing and pricing decisions. For digital goods, even distribution and delivery can be automated. In the iTunes and Android marketplaces, technology leverages creativity to make it possible to deliver a “long tail” of new niche products that otherwise would likely never reach a wide market.18

4. Use human insight to apply IT — and IT-generated data — to create more effective processes. Assurant Solutions, which sells credit insurance and debt protection products, already had an operationally optimized call center where callers were automatically routed to customer service reps with expertise in the product a customer was calling about. But when the company brought in mathematicians and actuaries to study the data the call center generated, they discovered that, for whatever reason, certain reps did much better with certain types of customers. By automatically routing calls to customer service reps more likely to develop a rapport with such customers, Assurant Solutions reported that the success rate of its call center almost tripled.
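The routing rule itself is simple once the data reveal the pattern. A minimal, hypothetical Python sketch of the idea follows; the rep names, customer segments and success rates are invented for illustration and are not Assurant’s actual data or system:

```python
# Hypothetical sketch: route each call to the available rep with the best
# historical success rate for that customer segment.
success_rate = {  # (rep, segment) -> historical success rate (invented)
    ("alice", "credit insurance"): 0.42, ("alice", "debt protection"): 0.18,
    ("bob", "credit insurance"): 0.21, ("bob", "debt protection"): 0.39,
}

def route_call(segment, available_reps):
    """Pick the available rep with the highest success rate for this segment."""
    return max(available_reps, key=lambda rep: success_rate.get((rep, segment), 0.0))

print(route_call("debt protection", ["alice", "bob"]))  # -> bob
```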

5. Once people develop new, improved processes, use IT to propagate those processes. More and more business processes have software at their core, which makes it easier to replicate not just innovative digital products but also innovative ways of working. For instance, when the drugstore chain CVS developed an improved prescription drug ordering process for its pharmacies, it embedded the process in an enterprise IT system. Because the process was tightly coupled with technology, CVS could ensure that every clerk and pharmacist would adhere to the new process precisely as it had been designed, increasing overall customer satisfaction scores from 86 to 91. More important, CVS could rapidly propagate the innovation to over 4,000 physical locations. In effect, this one process innovation created a 4,000-fold economic impact quickly and accurately because it was embedded in easily replicated technology. This contrasts with the slow and error-prone paper-based or training-oriented procedures that were used for propagating processes a decade ago.19

Combinatorial Innovation

When businesses are based on bits instead of atoms, new innovations often add to the set of innovations available to the next entrepreneur, instead of depleting the stock of resources the way minerals or farmland could be depleted in the old economy. New businesses are often recombinations of previous ones. For example, an MIT student in one of our classes created a simple Facebook application for sharing photos. Although he had very little formal training in programming, he created a robust and professional-looking app in a few days using standard tools. Within a year, he had over 1 million users. This was possible because his application leveraged the Facebook user base, which in turn leveraged the broader World Wide Web, which in turn leveraged the Internet protocols, which in turn leveraged the cheap computers of Moore’s Law and many other innovations. He could not have created value for his million users without the existence of these prior inventions. Because the process of innovation often relies heavily on combining and recombining previous innovations, the broader and deeper the pool of accessible ideas and individuals, the more opportunities there are for innovation.

We are in no danger of running out of new combinations to try. Even if technology froze today, we would still have more possible ways to configure the different applications, machines, tasks and distribution channels to create new processes and products than we could ever exhaust.

Here’s a simple illustration: Suppose the people in a small company write down their work tasks — one task per card. If there were only 52 tasks in the company, as many as in a standard deck of cards, then there would be 52! different ways to arrange these tasks. (52! is shorthand for 52 × 51 × 50 × … × 2 × 1, which multiplies out to more than 8.06 × 10⁶⁷, about the number of atoms in our galaxy.) That is far, far more than the number of grains of rice on the second half of the chessboard, or even a second or third chessboard. The factorial growth behind this combinatorial explosion is one of the few mathematical functions that outgrows even an exponential trend.
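A quick Python check of that figure, and of how thoroughly it dwarfs the chessboard’s rice:

```python
import math

# Number of orderings of 52 tasks, versus all the rice on a full chessboard.
orderings = math.factorial(52)  # 52!
chessboard_grains = 2**64 - 1

print(f"52! is about {orderings:.2e}")                 # ~8.07e+67
print(f"ratio: {orderings // chessboard_grains:.1e}")  # ~4.4e+48 chessboards' worth
```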

No central planner could imagine, let alone consider and evaluate, all the possible new products and processes latent in all the possible combinations of the building blocks that can be configured to create value in today’s digital economy. Most of the combinations may be no better than what we already have, but some surely will be, and a small fraction may be “home runs” that generate enormous value. Parallel experimentation by millions of entrepreneurs and innovative managers is the best and fastest way to identify the combinations of economic building blocks that will make a positive difference. Most of the digital economy’s potential for combinatorial innovation remains yet to be tapped.

References

1. F. Levy and R. Murnane, “The New Division of Labor: How Computers Are Creating the Next Job Market” (Princeton, New Jersey: Princeton University Press, 2004).

2. J. Hooper, “From DARPA Grand Challenge 2004: DARPA’s Debacle in the Desert,” June 4, 2004, www.popsci.com.

3. S. Thrun, “What We’re Driving At,” October 9, 2010, http://googleblog.blogspot.com.

4. Levy and Murnane, “The New Division of Labor.”

5. To be precise, Jeopardy! contestants are shown answers and must ask questions that would yield these answers.

6. J. Markoff, “Computer Wins on ‘Jeopardy!’: Trivial, It’s Not,” New York Times, Feb. 17, 2011.

7. G.E. Moore, “Cramming More Components Onto Integrated Circuits,” Electronics, April 19, 1965, 114-117.

8. M. Kanellos, “Moore’s Law to Roll on for Another Decade,” February 10, 2003, http://news.cnet.com.

9. K. Greene, “A New and Improved Moore’s Law,” Technology Review, September 12, 2011.

10. President’s Council of Advisors on Science and Technology, “Designing a Digital Future: Federally Funded Research and Development in Networking and Information Technology,” December 2010, www.whitehouse.gov.

11. R. Kurzweil, “The Age of Spiritual Machines: When Computers Exceed Human Intelligence” (New York: Penguin Books, 2000).

12. S. Basu and J. Fernald, “Information and Communications Technology as a General-Purpose Technology: Evidence From U.S. Industry Data,” Working Paper Series 2006-29, Federal Reserve Bank of San Francisco, San Francisco, California, 2006.

13. Railroad construction crews in that period blasted tunnels through mountainsides by drilling holes into the rock, packing the holes with explosives and detonating them to lengthen the tunnel.

14. G. Kasparov, “The Chess Master and the Computer,” New York Review of Books, February 11, 2010.

15. Ibid.

16. F. Khatib, F. DiMaio, Foldit Contenders Group, Foldit Void Crushers Group, S. Cooper, M. Kazmierczyk, M. Gilski, S. Krzywda, H. Zabranska, I. Pichova, J. Thompson, Z. Popović, M. Jaskolski and D. Baker, “Crystal Structure of a Monomeric Retroviral Protease Solved by Protein Folding Game Players,” Nature Structural & Molecular Biology 18 (2011): 1175-1177.

17. M.V. Copeland, “The Mighty Micro-Multinational,” Business 2.0, July 1, 2006.

18. M.S. Hopkins and L. Brokaw, “Matchmaking With Math: How Analytics Beats Intuition to Win Customers,” MIT Sloan Management Review 52, no. 2 (winter 2011): 35-41.

19. A. McAfee and E. Brynjolfsson, “Investing in the IT That Makes a Competitive Difference,” Harvard Business Review 86 (July-August 2008): 98-107.
