To know that we know and that we do not know what we do not know, that is true knowledge. —Confucius
Philosophers and writers have long tried to raise awareness about the difficulty of balancing confidence with realism, yet the consequences of unsupportable confidence continue to plague businesses. Managers deal in opinions — they are bombarded with proposals, estimates, and predictions from people who sincerely believe them. But experience tells managers to suspect the certainty with which these beliefs are stated. For instance:
- A leading U.S. manufacturer, planning production capacity for a new factory, solicited a projected range of sales from its marketing staff. The range turned out to be much too narrow and, consequently, the factory could not adjust to unexpected demand.
- A loan officer at a major commercial bank felt that his colleagues did not understand their changing competition as well as they thought they did and were refusing to notice signs of coming trouble.
- In the early 1970s, Royal Dutch/Shell grew concerned that its young geologists too confidently predicted the presence of oil or gas, costing the company millions of dry-well dollars.
- The sales head for Index Technology, a new software venture, repeatedly received unrealistic sales predictions, not only on amounts but also on how soon contracts would be signed.
Managers know that some opinions they receive from colleagues and subordinates will be accurate and others inaccurate, even when they are all sincerely held and persuasively argued. Moreover, given any strongly held opinion, one seldom has to look far to find an opposing view that is held no less firmly. We do not even have to favor a position now to reserve the right to hold a future position. One of us attended a faculty meeting at which a senior faculty member had been notably silent during a heated debate. When asked for his position, he replied, “I feel strongly about this; I just haven’t made my mind up which way.”
People are often unjustifiably certain of their beliefs. As a case in point, the manufacturer cited above accepted the staff’s confidently bracketed sales projections of twenty-three to thirty-five units per day and designed its highly automated factory to take advantage of that narrow range. Then, because of a worldwide recession, sales dropped well below twenty-three units per day. The plant was forced to operate far below its breakeven point and piled up enormous losses.
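The gap between stated confidence and actual accuracy can be measured directly. The sketch below is a minimal, illustrative calibration check, not a procedure from the article; the sample forecasts are invented. For each stated probability, it computes the fraction of forecasts that actually came true, so a well-calibrated forecaster's "90% sure" statements should come true about 90 percent of the time.

```python
# Illustrative calibration check: compare stated confidence with actual
# hit rates. The sample data below are hypothetical, not from the article.

def calibration(forecasts):
    """Given (stated_probability, outcome) pairs, return for each stated
    probability the observed fraction of forecasts that came true."""
    buckets = {}
    for p, outcome in forecasts:
        buckets.setdefault(p, []).append(outcome)
    return {p: sum(v) / len(v) for p, v in buckets.items()}

# A hypothetical forecaster who says "90% sure" but is right only 3 of 5 times.
sample = [(0.9, 1), (0.9, 1), (0.9, 0), (0.9, 1), (0.9, 0),
          (0.5, 1), (0.5, 0)]
print(calibration(sample))  # {0.9: 0.6, 0.5: 0.5} -- overconfident at 0.9
```

A narrow sales bracket like the manufacturer's is the interval-estimate version of the same failure: the stated range implied high confidence, but actual outcomes fell outside it more often than that confidence warranted.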
1. Linguists distinguish between language competence (the ability to produce coherent statements) and metalanguage (the ability to state the rules of the language). Such a clear distinction does not always exist between primary knowledge and metaknowledge. Early in the century, U.S. Weather Service forecasters simply predicted whether or not it would rain (a statement of their primary knowledge). Now they provide an explicit probability of rain, making uncertainty assessment an explicit part of their primary knowledge.
2. S. Lichtenstein, B. Fischhoff, and L.D. Phillips, “Calibration of Probabilities: The State of the Art to 1980,” in Judgment under Uncertainty: Heuristics and Biases, eds. D. Kahneman, P. Slovic, and A. Tversky (New York: Cambridge University Press, 1982), pp. 306–334.
3. G.N. Wright and L.D. Phillips, “Cultural Variations in Probabilistic Thinking: Alternative Ways of Dealing with Uncertainty,” International Journal of Psychology 15 (1980): 239–257.
4. All claims made about differences or trends are statistically significant at the .05 level or lower. The sample sizes for the percentages in Figure 1 range from a low of 122 when relevance = 1 to a high of 270 when relevance = 7, with the unrelated percentage based on all 1,440 unrelated questions.
5. J.E. Russo and P.J.H. Schoemaker, Decision Traps (New York: Simon and Schuster, 1990).
6. L.A. Tomassini et al., “Calibration of Auditors’ Probabilistic Judgments: Some Empirical Evidence,” Organizational Behavior and Human Performance 30 (1982): 391–406.
7. A.H. Murphy and R.L. Winkler, “Probability Forecasting in Meteorology,” Journal of the American Statistical Association 79 (1984): 489–500.
8. A. Tversky and D. Kahneman, “Availability: A Heuristic for Judging Frequency and Probability,” Cognitive Psychology 4 (1973): 207–232;
B. Fischhoff, P. Slovic, and S. Lichtenstein, “Fault Trees: Sensitivity of Estimated Failure Probabilities to Problem Representation,” Journal of Experimental Psychology: Human Perception and Performance 4 (1978): 330–344.
9. G. Keren, “Facing Uncertainty in the Game of Bridge: A Calibration Study,” Organizational Behavior and Human Decision Processes 39 (1987): 98–114.
10. P. Slovic and S. Lichtenstein, “Comparison of Bayesian and Regression Approaches to the Study of Information Processing in Judgment,” Organizational Behavior and Human Performance 6 (1971): 641–744;
A. Tversky and D. Kahneman, “Judgment under Uncertainty: Heuristics and Biases,” Science 185 (1974): 1124–1131.
11. J. Klayman and Y.W. Ha, “Confirmation, Disconfirmation, and Information in Hypothesis Testing,” Psychological Review 94, 2 (1987): 211–228.
12. D. Griffin and A. Tversky, “The Weighing of Evidence and the Determinants of Confidence” (Waterloo, Ontario: University of Waterloo, working paper, 1991).
13. P.J.H. Schoemaker, “Scenario Thinking” (Chicago: Graduate School of Business, University of Chicago, working paper, 1991).
14. For a review, see Lichtenstein, Fischhoff, and Phillips (1982).
15. J. Mahajan and J.C. Whitney, Jr., “Confidence Assessment and the Calibration of Probabilistic Judgments in Strategic Decision Making” (Tucson: University of Arizona, working paper series #12, 1987).
16. S.J. Hoch, “Availability and Inference in Predictive Judgment,” Journal of Experimental Psychology: Learning, Memory, and Cognition 10 (1984): 649–662.
17. Fischhoff, Slovic, and Lichtenstein (1978).
18. L. Dubé-Rioux and J.E. Russo, “An Availability Bias in Professional Judgment,” Journal of Behavioral Decision Making 1 (1988): 223–237. In this study, six of the twelve listed causes in a branch of a fault tree (see Figure 3) were removed. If people, in this case hospitality industry managers, were properly aware of all the major causes, then all of the probability of these six unlisted causes should have shown up in the last, “all other” category. In fact, very little did, strongly suggesting that what is out of sight is out of mind; i.e., the availability bias operates.
19. Fischhoff, Slovic, and Lichtenstein (1978).
20. Dubé-Rioux and Russo (1988).
21. P. Wack, “Scenarios: Uncharted Waters Ahead,” Harvard Business Review, September–October 1985, pp. 73–89;
P. Wack, “Scenarios: Shooting the Rapids,” Harvard Business Review, November–December 1985, pp. 139–150.
22. Schoemaker (1991).
23. M.A. Neale and M.H. Bazerman, “The Effects of Framing and Negotiator Overconfidence on Bargaining Behavior and Outcomes,” Academy of Management Journal 28 (1985): 34–49.
24. J.A. Sniezek and T. Buckley, “Level of Confidence Depends on Level of Aggregation,” Journal of Behavioral Decision Making 4 (1991): 263–272.
25. We wonder how many traffic fatalities are caused by alcohol-induced overconfidence. Certainly driving skills are impaired by alcohol, but this may be only part of the story. A more deadly aspect is that the drinker’s confidence is not reduced nearly as much as the ability itself. This confidence gap between the skill levels drivers believe they possess and the reduced levels they actually have seems to be a primary problem with drunk drivers.
26. Despite a presumption that “two heads are better than one,” groups do not always make better decisions than individuals. The phenomenon known as groupthink is one serious problem. Whether groups are superior seems to depend on whether conflict is articulated or swept under the rug. See:
I.L. Janis, Groupthink, 2nd ed. (Boston: Houghton Mifflin, 1982).
27. R.T. Clemen and R.L. Winkler, “Unanimity and Compromise among Probability Forecasters,” Management Science 36 (1990): 767–779; and
J.A. Sniezek and R.A. Henry, “Accuracy and Confidence in Group Judgment,” Organizational Behavior and Human Decision Processes 43 (1991): 1–28.