Is Decision-Based Evidence Making Necessarily Bad?

Many managers think they’ve committed their organizations to evidence-based decision making — but have instead, without realizing it, committed to decision-based evidence making. Is that all bad? What can be done to fix it?


Decision making is the essence of management, which explains why so much attention continues to be focused on how to do it better. In recent years, much has been written about evidence-based — or fact-based — decision making. The core idea is that decisions supported by hard facts and sound analysis are likely to be better than decisions made on the basis of instinct, folklore or informal anecdotal evidence. One need look no further than the shelves of the local bookstore to see an unprecedented collection of well-written titles extolling the virtues of data and analysis, such as Competing on Analytics (Davenport and Harris), Moneyball (Lewis) and Super Crunchers (Ayres). These books, like decision-making courses in business schools and the prescriptions of management consultants, focus on how to improve decision outcomes through improved process and technique. Many organizations have heeded the call and have invested heavily in data processing infrastructure and analytic tools, based on the assumption that better evidence-based decisions will follow naturally from these investments. While this focus on evidence is a welcome change from “thin slicing” or purely instinctive or intuitive snap judgments, these prescriptions tend to downplay the more fundamental questions: What is the relationship between evidence and the decision process that an organization actually uses? Why is evidence collected in the first place?

The Leading Question

Managers want ‘fact-based’ decisions. Are they getting them?

Findings
  • Evidence is not as frequent an input to decisions as suggested by the business press.
  • Not all decisions use evidence in the same way. Evidence can be used to make, inform or support a decision.
  • Managers need to be aware that evidence is shaped by subordinates to meet perceived expectations of company leaders.

Our research and consulting experience suggests that evidence is not as frequent an input to a decision process as suggested by the popular press. For example, we recently studied a major North American financial institution as it considered a proposal to change its enterprise e-mail platform from one technology to another. The organization had conducted two prior reviews of e-mail systems from major vendors and had twice recommended remaining with the existing supplier. However, the head of a small but influential and profitable division of the company advocated switching platforms in order to provide better integration with a specialized tool used only within his division. When asking staff to conduct the third major analysis, a director of the company’s information technology group recommended that the project manager produce a report that would support a change of vendor. A project team member told us, “The executives have already made up their minds…. We are being told that this is the way that we are going, we need to get on board — be team players — and make the decision work out to be [the new choice].”

Clearly, the ideal evidence-based decision process was subverted in this case by the perceived requirement to marshal facts and analysis to support a decision that had already been made elsewhere in the organization. We call this practice decision-based evidence making and argue that it is more widespread than many managers acknowledge. The purpose of this article is to examine the practice and to answer three fundamental questions:

  • Why does decision-based evidence making occur in organizations?
  • Is decision-based evidence making necessarily bad?
  • And, if decision-based evidence making is inevitable in organizations, what can be done to lessen its negative impacts?

Why Does Decision-Based Evidence Making Occur?

Managers use different approaches when making decisions, ranging from highly analytical and algorithmic to ad hoc and intuitive. Rather than converging on a single best approach, most normative models of decision making emphasize context and adaptability. The models suggest that a decision-making approach should be tailored to fit the particular characteristics of the decision problem. Thus, an algorithmic approach is well suited to highly structured decision problems in which the ends and means are well understood. Intuition and bargaining are more appropriate for poorly structured decision problems with multiple, conflicting ends and uncertain means.

The problem with the flexible, contextual approach is that the role of evidence is unclear. In some cases, hard evidence is critical in determining a decision outcome. In other cases, evidence is merely symbolic; it is used to lend legitimacy to the decision and signal the rationality of the decision makers. In this article, we impose structure on this loose continuum by identifying three distinct roles for evidence in decision-making practice, depending on whether it is being used to make, inform or support a decision. (See “The Role of Evidence in Decision Making.”)

Make a Decision

Evidence is used to make a decision whenever the decision follows directly from the evidence. For example, consider the choice of the optimal location of a new distribution facility. The objective is to minimize the cost of the facility, where cost is a function of several quantifiable factors such as route lengths, demand patterns, land availability and local tax incentives. Qualitative and noneconomic factors do not fit well into this mode of decision making and must be either ignored or transformed into quantitative evidence through “pricing out” or similar techniques. The objective facts regarding each of the decision alternatives (the potential facility locations) are then used as inputs into an optimization algorithm, and the location with the overall minimum cost is provided as output. The combination of data, a cost model and an optimization algorithm makes the decision with minimal human intervention.
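To make this mode concrete, the following is a minimal sketch in Python using entirely hypothetical sites, cost figures and a deliberately simplified cost model (the site names, weights and amortization assumptions are ours for illustration, not drawn from any real analysis). The point is that once the evidence has been encoded in the model, the decision is simply the minimum-cost alternative.

```python
# A toy "evidence makes the decision" example: hypothetical candidate sites,
# a simplified annual-cost model, and an argmin. No human judgment enters
# once the data and the model are fixed.

candidate_sites = {
    "Site A": {"route_km": 1200, "annual_demand": 50_000, "land_cost": 2.1e6, "tax_incentive": 150_000},
    "Site B": {"route_km": 950,  "annual_demand": 48_000, "land_cost": 2.8e6, "tax_incentive": 300_000},
    "Site C": {"route_km": 1400, "annual_demand": 52_000, "land_cost": 1.7e6, "tax_incentive": 0},
}

COST_PER_KM = 1.8          # assumed transport cost per km per 1,000 units of demand
AMORTIZATION_YEARS = 10    # assumed horizon for spreading the land cost

def annual_cost(site: dict) -> float:
    """Annual cost = transport + amortized land cost - tax incentive."""
    transport = site["route_km"] * site["annual_demand"] * COST_PER_KM / 1000
    land = site["land_cost"] / AMORTIZATION_YEARS
    return transport + land - site["tax_incentive"]

# The "decision" is the minimum-cost site.
best_site = min(candidate_sites, key=lambda name: annual_cost(candidate_sites[name]))
for name, site in candidate_sites.items():
    print(f"{name}: ${annual_cost(site):,.0f} per year")
print("Chosen location:", best_site)
```

Qualitative factors that cannot be “priced out” into such a model have no way to influence the output, which is both the strength and the limitation of this mode.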

[Exhibit: The Role of Evidence in Decision Making]

The success of evidence-based decision making in highly structured environments such as location planning and supply chain management is universally acknowledged. The recent push toward evidence-based decision making in medicine suggests that even incomplete or provisional evidence (expressed as probabilities) can be valuable in less-structured, ambiguous decision environments. Indeed, while research shows that many managers have yet to adopt analytic approaches, head-to-head comparisons in which algorithmic, evidence-based techniques are evaluated against impressionistic and intuitive judgments of experts show that algorithmic techniques often provide better results, even in unstructured decision contexts such as granting parole or predicting job satisfaction.

The primary risk of making decisions by relying exclusively on hard evidence is that the algorithms and models used to transform evidence into a decision provide an incomplete or misleading representation of reality. Finance offers many examples in which the models used by traders misspecified important real-world dependencies and risks. As the collapse of companies such as Long-Term Capital Management LP and The Bear Stearns Companies Inc. attests, the organizational downside created by a commitment to misspecified models can be significant.

Inform a Decision

Evidence is used to inform a decision whenever the decision process combines hard, objective facts with qualitative inputs, such as intuition or bargaining with other stakeholders. The role of evidence in informing decisions is thus akin to due diligence. For example, in a succession planning decision, objective evidence about the candidates’ past performance in managerial roles is often an important input to the decision process. However, subjective, impressionistic information is typically combined with hard evidence when making the final decision. Bargaining, expressions of power and other organizational elements that do not fit within the orthodox model of rational choice may also enter into such decisions.

The evidence-based inputs to the decision process either confirm or disconfirm the decision makers’ initial subjective beliefs and preferences. If the evidence is confirmatory, decision makers can move forward, confident that they have “the numbers” required to support their choice. However, a dilemma arises if the evidence disconfirms the initial subjective decision. The decision makers must either trust the evidence (in which case they have implicitly switched to the make mode described above) or side with their gut.

When asked, managers report that they routinely grant evidence priority over their impressionistic assessments. For example, in a survey of the capital budgeting practices of large American companies, 45% of respondents said they would reject a capital investment opportunity that had a favorable “strategic analysis” if the net present value (NPV) of the opportunity was negative. However, as illustrated in the example of the enterprise e-mail system, executives often provide analysts with subtle (and, in some cases, unsubtle) signals regarding the desired outcome of a formal, evidence-based analysis. Our research shows that senior decision makers are often unaware that evidence has been shaped by subordinates to conform to the perceived preferences of management. Top managers may receive little disconfirming evidence and, as a consequence, may underestimate the extent to which evidence is being trumped by intuition.
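For readers who want to see the arithmetic behind the survey question, the sketch below computes NPV for a hypothetical project with made-up cash flows and a made-up 10% discount rate; a negative result is the kind of disconfirming evidence that respondents claimed would override a favorable strategic analysis.

```python
# Minimal NPV sketch with hypothetical cash flows and discount rate.

def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value; cash_flows[0] is the initial outlay at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project: $1M outlay, five years of $220K returns, 10% discount rate.
project = [-1_000_000, 220_000, 220_000, 220_000, 220_000, 220_000]
print(f"NPV = ${npv(0.10, project):,.0f}")  # negative here, so the evidence says reject
```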

Support a Decision

Evidence is used to support a decision whenever the evidence is gathered or modified for the sole purpose of lending legitimacy to a decision that has already been made. This is distinct from making decisions without formal evidence. Organizations are routinely faced with complex, poorly structured decisions for which unambiguous evidence favoring a course of action is simply unavailable. To cite a classic example, Herman Miller Inc. discovered that focus groups disliked the novel aesthetics of its new Aeron chair. Management of the company ultimately chose to disregard the disconfirming evidence provided by the focus groups and launched the new chair anyway. The Aeron chair — which went on to become enormously successful — is not an example of decision-based evidence making because no evidence was manufactured to support bringing the chair to market. Instead, the culture of Herman Miller was such that it was possible to make a decision without or in spite of empirical evidence.

In other organizations, formal evidence is held in much higher regard and disconfirming evidence cannot simply be dismissed. Much depends on the cultural and formal norms of the organization and its external stakeholders. Decision-based evidence making is most prevalent whenever evidence is highly valued within the organization (that is, evidence is effectively “mandatory”) but a conflict exists between the evidence and the decision makers’ strongly held beliefs. Decision makers in these circumstances cannot simply override the disconfirming evidence (as in the case of the Aeron chair) but are instead more likely to rewrite the evidence so that it supports their beliefs.

A natural consequence of the push toward evidence-based decision making is that the norms requiring evidence are becoming increasingly explicit and rigid. For example, Harrah’s Entertainment Inc. CEO Gary Loveman requires that the effectiveness of new incentive programs at the company’s casinos be confirmed with small-scale experiments before a decision can be made to implement the programs companywide. This is not to suggest that the implicit pressure to manufacture ersatz evidence is a new phenomenon. Indeed, during Robert McNamara’s “whiz kids” era at the Ford Motor Company, interns reportedly cut up copies of The Wall Street Journal and inserted them into binders and boxes. This voluminous “evidence” was wheeled into the boardroom to demonstrate the depth of the analysis supporting their recommendations.
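To illustrate what such a confirmatory small-scale test might look like in practice, here is a minimal sketch using a simple two-proportion z-test on entirely hypothetical pilot and control figures; it is one plausible way of generating the required evidence, not a description of Harrah’s actual methodology.

```python
# Sketch of a pilot-versus-control comparison before a companywide rollout.
# All figures are hypothetical; a real program evaluation would involve a
# properly randomized design and more careful statistics.

from math import sqrt

def two_proportion_z(successes_a: int, n_a: int, successes_b: int, n_b: int) -> float:
    """z-statistic for the difference between two response rates (pooled variance)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Pilot: 540 of 4,000 customers responded to the new incentive;
# control: 460 of 4,000 responded under the existing program.
z = two_proportion_z(540, 4000, 460, 4000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests the observed lift is unlikely to be chance
```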

Is Decision-Based Evidence Making Necessarily Bad?

The downside of organizational contempt for disconfirming evidence is clear. Consider Vince Kaminski’s frustrating tenure as the managing director for research at Enron Corp. prior to the company’s demise. Each deal undertaken by Enron was supposed to have a complete risk analysis conducted by Kaminski and Enron’s team of approximately 50 highly skilled mathematicians and analysts. However, as Enron became increasingly focused on deal volume and increasingly hostile to facts and analysis, the risk analysis became a charade.

The implosion of Enron has joined the Bay of Pigs invasion and the space shuttle Challenger disaster in the canon of cautionary tales warning decision makers of the perils of selective fact reading. But even if decision-based evidence making does not result in a high-profile fiasco, the practice sends powerful signals to others about organizational priorities, who has power and who does not, and the value the organization places on facts and analysis. In short, the way in which an organization’s senior leadership uses evidence during decision making matters deeply to the people responsible for creating the evidence.

We watched a few years ago as a large architectural and engineering company decided to replace its conventional computer monitors with smaller, sleeker LCD monitors. Although such a decision could have been justified in many different ways (such as improved aesthetics, reduced power consumption or simply employee satisfaction), the company chose to construct an elaborate formal cost-benefit analysis that included the fact that the new monitors required less physical space. However, as was pointed out by a senior manager involved in the process, these benefits were imaginary: After all, the firm would not actually recoup any of its real estate costs following the purchase of the flat monitors. Finally, after others began to question the legitimacy of the analysis, the managing director stopped the process and made the decision by fiat.

A habitual reliance on ceremonial decision processes and devalued evidence can have serious organizational side effects. Massive investments in information systems and analytical tools are wasted if the critical resources in the organization’s analytic capability are repeatedly demoralized and humiliated by having their efforts dismissed, overruled or altered. These analysts may begin to self-censor and, anticipating management expectations, present only confirming evidence. On the other hand, many decision environments exist where evidence (based on historical data) cannot possibly tell the whole story. Many important phenomena are inherently unpredictable, and an analysis of historical evidence is unlikely to provide much insight.

To illustrate, consider our experience with a mid-sized credit union that was contemplating a switch from its existing wealth management system to a new one developed by a startup company. A formal analysis of the two alternatives showed that they were similar in terms of projected ROI and other conventional financial measures. However, the head of wealth management preferred the system from the startup company because its modern architecture offered the promise of increased flexibility. The chief operating officer, in contrast, worried about the survivability of a startup and the risk of being stranded with an unsupported system.

Neither executive could find convincing support for their positions. On one hand, the head of wealth management did not know how his business would evolve over the next few years and was therefore unable to estimate the “option value” of increased flexibility offered by the new platform. On the other hand, software is often subject to direct and indirect increasing returns to adoption. As the COO pointed out, their incumbent platform was widely used within the industry. The stability provided by a large installed base was extremely important to the credit union — perhaps more important than the functionality of the software itself. Markets for such products tend to tip in favor of a single industry standard. However, as owners of Betamax and HD DVD players will attest, predicting the winner in such a standards war is extremely difficult.

Clearly, the leadership of the credit union could not rely on historical evidence and formal models; instead, they had to place bets on the future of their business based on intuition and rough consensus. However, both executives recognized the signal their actions would send to other organizations. Small, regional credit unions tend not to compete directly against each other and have a tradition of sharing both infrastructure and knowledge. The head of wealth management recognized that several other smaller credit unions were facing the same decision regarding their wealth management platforms and would be more likely to follow the larger credit union’s lead if they believed that the decision to switch was the result of careful, rational economic analysis. The executives reasoned that their decision to switch to a new platform would be used as evidence in the decision processes of other organizations. By signaling rigor and rationality in their own decision process, they could perhaps trigger an information cascade or herding effect. Herding by other companies would benefit the credit union by giving the startup provider a large enough installed base to obviate concerns about long-term survivability. The head of wealth management therefore prepared a comprehensive formal business case to support the credit union’s decision to switch platforms. The report was ultimately shared with other credit unions and, as predicted, several of these organizations decided to migrate to the new platform.

As this example illustrates, decision-based evidence making is not always a practice to be avoided. As the credit union executives recognized, evidence can be used not only to make or inform a decision but also to support a decision, to signal rationality and to instill others with confidence that a good decision was made. Whether such signals are effective and whether they ultimately help or hurt the organization depends critically on the nature of the decision itself and the intended audience for the manufactured evidence. As the example of the architectural and engineering company illustrates, decision-based evidence making that is directed at a well-informed internal audience is almost always perceived negatively. It typically undermines the legitimacy of the decision it was intended to support. In contrast, decision-based evidence can be effective when the audience is external and the manufactured evidence supports the organization’s best guesses about a complex and unpredictable decision environment.

What Can Be Done to Lessen the Negative Impact of Decision-Based Evidence Making?

One way for organizations to avoid negative decision-based evidence making is for decision makers to have a clear understanding of the different roles evidence can and should play in a decision process. This implies that decision makers within the organization have the flexibility to determine what constitutes legitimate justification of a particular decision. The difficulty that arises in practice is that organizations may have trouble differentiating between situations in which disconfirming evidence should be heeded and situations in which disconfirming evidence can safely be dismissed. Clearly, Enron was wrong to systematically ignore the formal analysis of its risk management team. However, the Aeron chair was an enormous commercial success, thereby vindicating Herman Miller’s decision to discount the focus groups’ negative reaction to the chair.

Ultimately, the leadership of the organization must take responsibility for such judgments. To help, we provide the following guidelines:

  1. Understand the nature of the decision problem and assess the potential contribution of formal evidence to the quality of the decision process. There are many different types of problems — ranging from new product development to adoption of emerging technology standards — in which evidence based on historical data provides little insight. Decision makers should have the courage and the organizational support in such environments to make and justify a decision based on intuition, experience and consultation with others.
  2. Weigh the risks, costs and benefits of evidence when advocating an evidence-based approach to decision making. The costs should include not only the time required to collect evidence but any negative signals created by decision-based evidence making. For example, the decision by the architectural and engineering company to replace its conventional monitors with flat monitors was minor in comparison to the company’s capital and operating budgets. The incremental benefit of a formal business case was outweighed by the losses in prestige and legitimacy caused by management’s initial insistence on bogus evidence.
  3. Differentiate between internal and external audiences when engaging in decision-based evidence making. As noted above, there are situations in which evidence has significant ceremonial and signaling value. However, internal stakeholders (such as employees) typically have much better access to information than those outside of the organization. Consequently, internal audiences are seldom fooled by decision-based evidence making.
  4. Ensure that the objective evidence painstakingly gathered by your analysts is reflected more often than not in the decisions of the organization. If you must feed manufactured evidence to internal audiences, do so only rarely and sparingly. Enron provides an example of an organization in which a disregard for evidence and analysis became endemic.

There is mounting evidence in favor of evidence-based decision making in a wide range of organizational decision environments. The resistance of many managers to rational and analytical decision-making techniques is thus surprising. But what is troubling is that many managers who believe they have committed their organizations to evidence-based decision making (and have made hefty investments to back up this commitment) have committed instead to decision-based evidence making. Methodology alone cannot and should not replace managerial discretion or judgment. But, in much the same way that a streetlight can be used for illumination or support depending upon the need, greater understanding of the multiplicity of ways that evidence is used within organizations can lead to better decision making.



Comments (10)
DANIEL BOBKE
I think the title should be "Is EVIDENCE-Based DECISION Making Necessarily Bad?" - not "Is Decision-Based Evidence Making Necessarily Bad?".  Not sure what "evidence making" is - sounds like something attorneys do.
PHIL FRIEDRICH
Absolutely matches my experience.
Gerry La Londe-Berg
Is the "streetlight" reference at the end a wink to Gary Klien?  (Streetlights and Shadows).  Your thoughts are useful and not inconsistent with Mr. Klien's work.
Loretta Mahon Smith
"Decision making is the essence of management" is a great lead-in, and the article ends with a 4-point prescription for improving the process.

But the 5th point is missing.

5.  Transparency of evidence should always be included.  The data that is the basis for any decisions needs to be disclosed, and the quality and scope of that data should be shared with every audience.

The original audience of a report, or chart, or illustration may not be the most important reader!  Academics understand that citations are a critical component to successful works of research that are written.  Business needs to leverage this approach; every fact-based decision should have appropriate support easily available.

But in many cases these reports are delivered through application systems.  In the world of technology, you can find many best practices around Business Intelligence delivery, that will help provide guidance for the technology equivalent to citations in reporting (drill-down).  The DAMA Guide to the Data Management Body of Knowledge includes shared experience from more than 100 data management professionals worldwide.  
Charles H. Green
Excellent analysis of a pervasive and ill-understood issue: how decisions are made in business.

In my work with professional services and complex intangibles businesses, there is a strong inclination to believe that firms use evidence to make decisions (your case 1), both internally, and in the case of clients buying their services.  In truth, your case 3, using evidence to support a decision, is far more common. 

The use of evidence to support decisions is common enough in sales in general that author Jeffrey Gitomer has slimmed it down to, "People buy with their heart, and rationalize it with their brains."  Anyone remember, "No one ever got fired for hiring IBM?"

I don't disagree with your eminently sensible conclusion that "it depends" on the role of evidence as to what we should do about it.  However, I think in your zeal perhaps to be even-handed (or perhaps to instinctively make evidence more supportive of decisions), you under-estimate the power, frequency and validity of decision-based evidence making. 

I'd suggest that decision-based evidence-making is more frequent, more valid, more correct, more efficient, and all in all more effective than I think your article suggests.  We have systemically and massively come to believe that business both is, and ought to be, highly rational. It is not.  Human decision-making remains wildly mysterious, even while the idea persists that if we could only reduce it to better decision-trees, dispassionate analyses, and the like, things would be better. Meanwhile, more and more studies suggest the primacy of gut instinct over our more leisurely cognitive analytics. 

Leadership, buying decisions, managerial approaches: what we really value are not analytically skilled people, but those who operate from principles and who can quickly apply those principles to the issues at hand. 

Love your analysis; but the prescription you provide boils down to--a process of more analysis. I'm not sure we need it.
Mary Federico
Fascinating and illuminating article -- thank you!

I will say, however, that from the point of view of a consultant in organizational behavior, there is nothing at all surprising about "the resistance of many managers to rational and analytical decision-making techniques." 

Among the many reasons for this resistance are the mental mechanisms of the confirmation bias (the "believing is seeing" problem in which people neither seek out nor notice disconfirming information), and cognitive dissonance (whereby those confronted with evidence that disconfirms what they already "know" -- particularly if they've stated it publicly -- tend to discount the new evidence rather than change their previous positions). 

For 10 years I have worked with a broad array of companies instituting large-scale process-improvement initiatives (e.g., Six Sigma, Lean) that are designed to increase the use of evidence-based decision making, and I've seen these mental mechanisms in play time and time again.
Satyabroto Banerji
Structured decision-making promotes transparency, fair-play, and HRD, all at the same time. The corporation for which I worked was revolutionized once we adopted the Kepner-Tregoe protocol.
Alana Cates
When conscious decisions only represent a tiny fraction of the decisions made, and human nature circumvents fact based decisions, making better decisions is a matter of attitude.
Great leaders keep cognitive dissonance, confirmation bias, and skewed perspectives in check with empathy, humility, disassociation, acceptance and curiosity.
Mahesh K Enjeti
One way of determining when to accept or reject disconfirming evidence could be to analyse instances where this has occurred in the past and see what learnings can be drawn from such experiences. For instance, in the illustration where the chair became a success, could it be that the research did not cover aspects that may have explained the reasons for its subsequent popularity? There seems to be a reluctance in organisations to re-visit both good and bad decisions once the urgency of the decision making is over. There is little attempt to objectively, dispassionately and unhurriedly evaluate the choices made. Compiling more evidence on both evidence-based decision making and decision-based evidence making will turn the lamp post that serves as a prop into an illumination source that lights up the path of choice.
Walter P. Blass
Amen! Making up evidence for a decision that top management wants is indeed rife. My best example is the purchase in 1983 by AT&T of Philips' telecommunications division in Hilversum. My group in Strategic Planning was asked to look at the proposition, and ended up warning management that the deal had some serious risks, especially should the Dutch Guilder drop in value, or if ever we had to lay off people, given Dutch law requiring huge severance payments for long-time employees. I was visited by a "messenger" from inside the Company and advised to go along since "it was well known that the Vice-Chairman favored the deal" and I would lose any chance for promotion if I did not go along. I balked and sure enough, years later we lost a bundle because the deal was consummated but our fears came true.