Putting the Science in Management Science?
Andrew McAfee, research scientist at the Center for Digital Business in the MIT Sloan School of Management, says new IT capabilities will bring science to management decision-making.
Andrew McAfee, research scientist at MIT Sloan School of Management
It doesn’t matter whether your business is science-oriented, tech-oriented, media-oriented, people-oriented, or far-off-the-grid-oriented. If you’re not now using data and scientific analysis to back up intuition when making a decision, you will be.
That’s the message of Andrew McAfee, research scientist at the Center for Digital Business in the MIT Sloan School of Management. “It’s not that human intuition is bad,” he says. It’s not that intuition has been made irrelevant in a scientific world. But “if all you’re doing is relying on your intuition, in what a scientist would call close to a data vacuum, you are really leaving a huge opportunity on the table.”
As the bio on his blog notes, McAfee studies the ways that information technology “changes the way companies perform, organize themselves, and compete” and looks at “how computerization affects competition itself – the struggle among rivals for dominance and survival within an industry.”
In a conversation with MIT Sloan Management Review editor-in-chief Michael S. Hopkins, McAfee talks about how companies can adopt a more scientific approach to decision-making, the lessons of Garry Kasparov’s losing battle against computer chess programs, and how technology’s incomprehensible rate of change is cause for optimism.
The Leading Question
How can companies bring a scientific discipline to their decision making?
Findings
- Companies need to consider how they blend intuition and data when making decisions.
- It’s a hard but valuable uphill battle to make a business more data reliant, to train staff to become more quantitatively oriented, and to understand what makes a valid experiment, a control group, and a statistically significant difference.
- The leader of the future will be able to balance two opposing viewpoints: able to impose the rigor of technology on some situations, and able to get out of the way and allow creativity to emerge among staff.
- The rate of change will increase, and competitive companies will be able to propagate good ideas throughout their organizations more quickly than others.
You’ve said that technologically, we’re in a fast, weird place. Do you think we know what’s ahead for us?
We have a clue where we are, and we have a clue where we’re headed, but we don’t have more than a clue.
A couple of things are pretty clear, though. We know that we are using technology to reach out to each other, to interconnect as people, and to form human communities. When I observe this phenomenon inside corporations, I call it Enterprise 2.0. A lot of people call it Web 2.0.
I find this a fundamentally heartening thing because in the wake of all of this amazingly powerful technology, we’re not marginalizing people but putting them front and center in the middle of this great glue and letting them interconnect and share what’s inside their heads. We’re not trivializing what’s in their heads or trying to make it less important. I just find that grounds for great optimism.
To what degree do you think that executives are thinking about doing old things faster, better, and more accurately, as opposed to using information technology to do old things in new ways?
It’s always easier to just think about doing what you’re doing now, faster, better, more automated, more productively. That’s not bad. It’s absolutely part of what organizations are doing and should be doing. The really hard work is understanding what new possibilities have opened up, and what important constraints are gone because of this cornucopia of technology that we’re sitting on.
It’s actually a really subtle art, and I think a lot of the business innovation going forward is going to be people saying, “Wait a minute. We can approach this situation, this problem, this market opportunity, very differently than we have in the past.” One of the single biggest changes that I see coming is when you have this unbelievable amount of horsepower and a mass of data to apply it to, you can be a lot more scientific about things. You can be a lot more rigorous in your analysis. You can generate and test hypotheses. You can run experiments. You can adopt a much more scientific mindset.
I think if you don’t try to migrate your company and your decision-making in that direction, you’re missing out on a huge opportunity, and you had better hope your competition is also not moving in that direction. Because when you compare scientific to pre-scientific approaches, there’s one clear winner over and over.
Can you give an example of what you mean by this migration to a scientific approach?
I was at a meeting with most of the senior execs of a pretty big, pretty successful diversified company in the media industry, and we were talking about some of these same topics, what technology is doing to their industry and their world. And I asked, “When you all think about how you make the most important high-level decisions in this company, would you characterize your approach as data driven, analytical—”
And I didn’t get any further, because they all just broke down in laughter. One person said, “That’s just so far from the way we run this company now. We’re running it based on our intuition, based on our accumulated experience, based on what maybe some small study or some small amount of data is telling us. But basically, it comes down to what I and my senior colleagues think is the right thing for us to be doing.”
It’s not that human intuition is bad, and again, it’s not that it’s marginalized or made irrelevant in a scientific world. It’s that it can be tested and advanced in a scientific world. And if all you’re doing is relying on your intuition, in what a scientist would call close to a data vacuum, you are really leaving a huge opportunity on the table.
So is their laughter self-deprecating? A realization that, “Oh, my god, we’re operating before Gutenberg here”?
Time will tell on those guys. Some companies have successfully walked away from that intuition-based mindset and now make more of their decisions based on what their data and their experiments are telling them.
Other examples?
I heard this one just a while back. Garry Kasparov, the former world chess champion, wrote a fantastic article where he talked about the experience of being world chess champion during the period when computers went from being trivially easy for him to beat to being impossible to beat. Early on, he played 32 simultaneous matches against computers and beat every one of them, 32–0. No draws. No losses.
He wrote that by about 2004, the contests were uninteresting in the other direction. He could no longer beat the computers. In fact, by the end of his career, not only could he not beat specialized computers like IBM’s Deep Blue; he couldn’t even beat good chess programs running on commercially available servers.
So his amazing intelligence, that massive base of intuition and pattern-matching that he built up, became inferior to what you get from a pretty cheap piece of technology. Now, you can look at that as a story about human beings being taken over by machines, and there is some of that going on.
Where the story gets really interesting, and again, I get really optimistic, is he said they then ran a series of contests where they let any combination of people and computers play against each other. And the winning combination was fascinating. It was not the best chess players. It was not the fastest or the biggest horsepower computers. In fact, it was some good chess players, but not by any means the top in the world, playing with PCs. And it was this wonderful blend of human intuition and pattern-matching backed up by a lot of computing horsepower. Their process for figuring out the next move was superior to either the extraordinarily good chess players or the extraordinarily fast chess computers, even when they were able to collude with each other.
That’s a great story. Trace the business analogy. What’s the lesson from the chess example?
I think one of the single biggest challenges that organizations are going to face — and one of the biggest opportunities they face — is changing how they make important decisions. At many different levels of the organization. Frontline employees, middle-level managers, the top of the organization.
As I look around at domains as different as chess and medicine, I start to see a really intriguing pattern coming into view. And the pattern appears to be that if you have to make a choice between complete reliance on human intuition and turning things over to a computer to spit out an answer, you might want to turn things over to the computer.
As we talked about, computers are now better chess players than people. It turns out that if you build a pretty simple model of a lot of medical, clinical decisions, and then run a bunch of patients through that model, and run them by an experienced clinician, the model’s going to do a better job of diagnosing them and improving health outcomes.
Now, that seems like a dire comparison, right? We’re being marginalized by all these smart computers.
But I set up a false choice. You actually don’t have to choose exclusively between human intuition and push-the-button-and-run-with-whatever-answer-comes-out-of-the-computer. You can blend the two. And the blend that I see coming into focus, and we’ve seen it in a lot of medical environments, is never taking the decision fully away from the person, never discounting to zero what they know and what their judgment is, but making sure that their decisions and their judgments are double-checked, if you like, by a computer that’s very thorough, that never, ever overlooks or forgets anything it has been programmed to do, and that has a large and expanding base of data underneath it.
So for example, doctors prescribe a lot of medications inside hospitals. They’re busy. They have a lot of patients to see. Things can get stressful, and it’s easy to overlook the fact that there’s a drug-drug interaction going on, or that this particular medication is contraindicated for this kind of patient. Honest mistake. Easy mistake. No slur at all on the intelligence or professionalism of that doctor, and that doctor still needs to be extraordinarily well trained.
When I’m in the hospital and I’m that patient, I really want that decision run through a computer that’s going to do all those drug-drug checks and every other kind of safety check, and is going to alert the doctor if they’re doing something that might harm me. I see no reason not to put that extra loop in the process.
We can translate that into all kinds of business environments. People get to make decisions; computers get to be part of it and to second guess and to let them know if it seems to be heading in the wrong direction.
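To make that extra loop concrete, here is a minimal sketch in Python of the pattern McAfee describes: the person makes the call, and a thorough, never-forgetting check runs alongside it. The interaction table, drug names, and function here are hypothetical, invented purely for illustration rather than taken from any real clinical system.

```python
# Minimal sketch of the "human decides, computer double-checks" pattern.
# KNOWN_INTERACTIONS and the drug names are hypothetical illustration data.

KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "elevated statin levels",
}

def check_prescription(current_meds, new_med):
    """Return warnings for the prescriber; the human still makes the final call."""
    warnings = []
    for med in current_meds:
        issue = KNOWN_INTERACTIONS.get(frozenset({med, new_med}))
        if issue:
            warnings.append(f"{new_med} + {med}: {issue}")
    return warnings

# The doctor's decision stands, but any flagged interaction is surfaced as an alert.
for alert in check_prescription(["warfarin", "metformin"], "aspirin"):
    print("ALERT:", alert)
```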
Those examples seem to suggest that you could eliminate the person in the mix a lot of the time. That you’re designing a kind of redundancy. What is the human adding?
What the machine’s not going to be really good at, and we’ll stick with that medical example, is what a couple of people have termed complex communication. In other words, do I want to type all my symptoms into some prescribing computer and receive a suite of medications that I’ve got to take? I really don’t want that. I want to talk to an experienced human clinician who knows how to draw out what I’m experiencing, what the symptoms are, who can look at me and see what I’m telling him with my body language, and who’s eventually going to say, “Wait a minute. I think there’s actually something different going on here. Let me run a couple of these tests over here. It turns out my first guess about the problem wasn’t accurate, and there’s actually something else going on here.”
Over and over again, we do not see that the spark of human intuition becomes irrelevant or trivial.
Okay. So rethinking how decision-making happens throughout the organization is one of the characteristics I want to cultivate. What else?
Well, as we talked about before, this transformation toward a more “scientific organization” is a long, slow, uphill battle. It’s uncomfortable in a lot of ways for people to be second-guessed, to be put into a process in tandem with a machine. It’s not easy to become more data reliant, to become more numerate, to become more quantitatively oriented, to understand what an experiment is and what a control group is and what’s a significant difference. We’re not terribly well trained for it, most of us. And so instilling this philosophy inside an organization is a long, slow transition.
“It’s not easy to become more data reliant, to become more numerate, to become more quantitatively oriented, to understand what an experiment is and what a control group is and what’s a significant difference. We’re not terribly well trained for it, most of us.”
—Andrew McAfee
Excellent point. People in business talk all the time about “experimenting.” But they don’t mean experimenting; they mean “trying stuff.”
Or they mean, “let’s design something that’s going to confirm what I really want to have happen here.” An eight-month process to spit out exactly the result that they want.
When we look at what real experimenting organizations do, they approach this in an open spirit of, “I don’t know what the answer is, and that’s what’s really exciting. I’m going to throw something out to see whether it heads us in the right direction or the wrong one. Based on what I learn, I’m then going to run a subsequent trial.”
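To ground terms like experiment, control group, and significant difference, here is a minimal sketch of the kind of trial a real experimenting organization might run; the conversion counts are made-up numbers used only for illustration. A control group sees the current version, a treatment group sees the new one, and a simple two-proportion test asks whether the observed difference is too large to be chance.

```python
# Minimal sketch of a controlled experiment: control vs. treatment,
# plus a test for whether the difference in conversion rates is significant.
# The counts below are invented for illustration.
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Z statistic for the difference between two conversion rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Control group saw the old page; treatment group saw the new one.
z = two_proportion_z(successes_a=200, n_a=5000,   # control: 4.0% converted
                     successes_b=260, n_b=5000)   # treatment: 5.2% converted
print(f"z = {z:.2f}; |z| > 1.96 suggests the difference is unlikely to be chance")
```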
What else do we want to try to see in the future?
F. Scott Fitzgerald has a fantastic quote, I think in his book The Crack-Up. I’m going to mangle it, but he talks about how one of the characteristics of a first-rate mind is the ability to hold two opposite viewpoints at the same time and not get all messed up. That’s really becoming important in organizations today.
The two opposing viewpoints are, first of all, that in a lot of ways companies have the opportunity to become even more tightly orchestrated, regimented, regulated, via technology. We have all this amazing business-process technology that specifies with incredible detail what happens when, what the workflow is, who does what, what the roles and responsibilities are, what the decision rights are. While we can think about that as some kind of soulless destroying of the human spirit, it’s actually incredibly valuable. If I’m in charge of an organization, I want all of my vendors to get paid via a standardized, completely repeatable process that makes sure that they are going to get paid, that there’s no fraud, and that the potential for abuse is as low as possible.
At the same time, we can use technology to do exactly the opposite thing, which is essentially to get out of the way and watch what happens. Let people self-select into their roles: what they’re going to do, who they’re going to work with, what they want to share. Stop presupposing that we know what the right answer is and who should and shouldn’t be involved. What we see over and over again is that surprisingly good stuff emerges, and the bad stuff that happens is not worth worrying about.
What terms do you like to use for this contrast between systematized regimentation and self-organization?
I use verbs to describe the difference between these two approaches. One of the verbs is “impose.” People at the center and the top of the organization get to impose throughout the rest of the organization their ideas for how work should be done. This is what the business process is. This is what the org structure is. This is what the roles and responsibilities are.
The other verb I use is “emerge,” which is basically to get out of the imposition business altogether and start watching what emerges, what people actually want to do and how they want to use technology to work with each other.
This is exactly the shift that happened during the history of Wikipedia. They started out trying to impose a workflow for developing encyclopedia articles. People stayed away in droves. It was only when the leaders of that organization got themselves out of the middle of the process, deployed some weird new technology, and watched what happened, that the Wikipedia we know emerged.
That’s fascinating. So, recapping what companies need to think about going forward: we talked about changing decision-making, instilling the scientific mindset, and thinking about the impose-and-emerge dichotomy. Anything else?
The clearest part of my crystal ball about the future is that it’s going to be a busier place than the world we live in today, in a business or a competitive sense. The rate of change is only going to continue to increase. We have the ability to get smarter more quickly than we used to. The question is, when you come up with a good idea, can you impose it very broadly across your organization with technology?
As consumers, we should be thrilled that everything is faster. We’re going to get better and better stuff at an increasing rate. As competitors, it’s a little bit less comfortable. We’re going to have to keep up with that pace and play that game. Sitting it out is a really horrible idea as a business strategy.
Another favorite quote of mine is from Norbert Wiener, who was this incredibly bright, weird guy who worked at MIT in the middle of the 20th Century. He said the world of the future is going to be an evermore demanding struggle against the limitations of our intelligence, not a comfortable hammock where we can lie down to be waited on by our robot slaves.
So our robots are getting fantastic, but they’re not going to make life calmer and easier for us as business people. They are going to push against the limits of our intelligence.
Is there anything people should be paying less attention to when it comes to technology?
The main redirection I would urge is turning away from relying on HiPPOs. I forget who coined the acronym, but it’s wonderful. It stands for the Highest Paid Person’s Opinion. These are the business gurus of the world who have been around the longest and who are relying only on their business intuition.
Now, let me try to phrase the answer with a more positive spin, which is: what are the things that we can encourage leaders to do in the short term? There are two things I’ve already mentioned: developing an analytic or scientific mindset, and thinking about how easy or difficult it is to propagate good ideas across the organization.
Specifically, I ask, “What are some of the most important decisions that need to get made in your organization?” On a tactical level, on a day-to-day, repeatable level, and then also at a more periodic and a more strategic level. Just what are the important decisions in your organization? How are you making them right now? Would you characterize them as more intuition/experience based? Would you characterize them as rigorous and supported by data and experimentation? Just think about that. And can we imagine how to shift more of that decision-making into the realm of what I would call science?
Our colleagues at the Center for Information Systems Research, Peter Weill and Jeanne Ross, have done a ton of work on the digital infrastructure of big organizations, and the rule is fragmentation and the exception is consistency. What I mean is that most big organizations are fragmented, and that impedes the ability to propagate good ideas throughout the organization. For instance, a company might have eight different versions of the same ERP system: one in Latin America, another in Brazil, and different versions or different software in Western Europe and North America. Historically, most big organizations have left those kinds of decisions in the hands of regional managers, and it makes some sense to do that. However, when you have these kinds of technologies that can span the globe so easily, that redundancy is not only expensive; I think it actually impedes your business. You don’t have data that stays consistent. You don’t have workflows and business processes that are consistent. And if you come up with a better way of doing business, you can’t spread it as widely, as quickly, or as faithfully as you could with an integrated infrastructure.
Last question. We’re going to ask you for a number. Imagine a scale where 0 means people are using technology to do old things faster and more accurately, and 10 means the organization has remade the entire way it works. Where do you think we are now?
In every decent-sized organization that I am familiar with, there are pockets that are at the high end of the spectrum you just identified. There are individual knowledge workers and managers doing hugely innovative stuff. The question is how many of those pockets are there? How much are they listened to?
They’re there, but they’re not making a sizeable dent yet in how the organization thinks about itself and how they’re conducting business.
In aggregate, where are we?
In aggregate? Oh my. We’re about 4.
You are an optimist. That’s higher than I might have guessed.
I think it’s that high for a couple of reasons. The Internet has been unignorable. Everyone is on the Internet these days. Everyone has an internal communication platform. The ERP revolution, the enterprise software revolution, is unignorable for big organizations. So when you look at where they are in their ability to coordinate and orchestrate work versus where they were prior to the mid-’90s, it’s night and day different.
Those things make me think that we’re not back at 0, 1, 2 Land, and companies are getting more analytical about at least parts of their business. There’s always some smart people with big computers sitting in some part of the organization, turning out analyses.
Imagine doubling that 4 to an 8. Forget getting to 10. How long do you think it’s going to be before we’re twice as good?
We are drinking from a fire hose, and the pressure is only getting higher over time. To get from 4 to 8, we are talking about a 10- to 20-year process. And that’s with a lot of heavy lifting and heavy thinking, and struggles against the limitations of our intelligence, by the people who run and design organizations.
This is such an interesting time. It’s not exaggeration to say that throughout human history, we have been very sharply constrained in our ability to communicate either one on one or to a broad audience, and we’ve been really sharply constrained in our ability to do calculations that are of interest to us. Now, there are few situations and few contexts where those constraints are still binding.
We are in this strange world where it’s essentially costless to communicate, where we have so much computational power that we don’t know what to do with it, more memory than we know what to do with, more storage than we know what to do with. We haven’t been here before. This is a new period in human history. And it’s really exciting to be here in the middle of it. When I think about the broad impact of technology, I think about all these computers and all of this networking here putting us in this strange, fantastic new world.