In coming years, the most intelligent organizations will need to blend technology-enabled insights with a sophisticated understanding of human judgment, reasoning, and choice. Those that do this successfully will have an advantage over their rivals.

To succeed in the long run, businesses need to create and leverage some kind of sustainable competitive edge. This advantage can still derive from such traditional sources as scale-driven lower cost, proprietary intellectual property, highly motivated employees, or farsighted strategic leaders. But in the knowledge economy, strategic advantages will increasingly depend on a shared capacity to make superior judgments and choices.

Intelligent enterprises today are being shaped by two distinct forces. The first is the growing power of computers and big data, which provide the foundation for operations research, forecasting models, and artificial intelligence (AI). The second is our growing understanding of human judgment, reasoning, and choice. Decades of research have yielded deep insights into what humans do well or poorly.1 (See “About the Research.”)

In this article, we will examine how managers can combine human intelligence with technology-enabled insights to make smarter choices in the face of uncertainty and complexity. Integrating the two streams of knowledge is not easy, but once management teams learn how to blend them, the advantages can be substantial. A company that can make the right decision three times out of five, as opposed to 2.8 times out of five, can gain an upper hand over its competitors. Although this performance gap may seem trivial, small differences can lead to big statistical advantages over time. In tennis, for example, if a player has a 55% versus 45% edge on winning points throughout the match, he or she will have a greater than 90% chance of winning a best-of-three-sets match.2
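To see where the 90% figure comes from, the tennis claim can be checked with a short Monte Carlo simulation. This is a sketch under simplified assumptions (every point is won with a flat probability p, ignoring serve alternation); the function names are illustrative, not from any standard library.

```python
import random

def race_to(p, target):
    # First side to reach `target` points while leading by 2 wins;
    # covers both ordinary games (target=4) and tiebreaks (target=7).
    a = b = 0
    while True:
        if random.random() < p:
            a += 1
        else:
            b += 1
        if max(a, b) >= target and abs(a - b) >= 2:
            return a > b

def win_set(p):
    # First to 6 games with a 2-game lead; tiebreak at 6-6.
    a = b = 0
    while True:
        if race_to(p, 4):
            a += 1
        else:
            b += 1
        if max(a, b) >= 6 and abs(a - b) >= 2:
            return a > b
        if a == 6 and b == 6:
            return race_to(p, 7)

def match_win_probability(p, trials=20_000, seed=42):
    # Estimate the chance of winning a best-of-three-sets match
    # when every point is won with probability p.
    random.seed(seed)
    wins = 0
    for _ in range(trials):
        sets_won = sets_lost = 0
        while sets_won < 2 and sets_lost < 2:
            if win_set(p):
                sets_won += 1
            else:
                sets_lost += 1
        if sets_won == 2:
            wins += 1
    return wins / trials
```

With p = 0.55 the estimate comes out well above 0.9, while p = 0.5 yields roughly a coin flip, illustrating how a thin per-point edge compounds across a match.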

References

1. Two classic research anthologies are D. Kahneman, P. Slovic, and A. Tversky, eds., “Judgment Under Uncertainty: Heuristics and Biases” (Cambridge, United Kingdom: Cambridge University Press, 1982); and D. Kahneman and A. Tversky, eds., “Choices, Values, and Frames” (Cambridge, United Kingdom: Cambridge University Press, 2000). See also W.M. Goldstein and R.M. Hogarth, eds., “Research on Judgment and Decision Making: Currents, Connections, and Controversies” (Cambridge, United Kingdom: Cambridge University Press, 1997); D.J. Koehler and N. Harvey, eds., “Blackwell Handbook of Judgment and Decision Making” (Malden, Massachusetts: Blackwell Publishing, 2004); and D. Kahneman, “Thinking, Fast and Slow” (New York: Farrar, Straus and Giroux, 2011).

2. Readers can examine different probabilities of winning in tennis at “Tennis Calculator,” 2015, www.mfbennett.com. For analytical derivations, see F.J.G.M. Klaassen and J.R. Magnus, “Forecasting the Winner of a Tennis Match,” European Journal of Operational Research 148, no. 2 (2003): 257-267.

3. E. Siegel, “Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die” (Hoboken, New Jersey: John Wiley & Sons, 2013); and T.H. Davenport and J.G. Harris, “Competing on Analytics: The New Science of Winning” (Boston: Harvard Business Review Press, 2007).

4. K. Popper, “Of Clouds and Clocks,” in “Learning, Development, and Culture: Essays in Evolutionary Epistemology,” ed. H.C. Plotkin (Hoboken, New Jersey: John Wiley & Sons, 1982), 109-119.

5. Notable books in this regard are J. Baron, “Thinking and Deciding,” 3rd ed. (Cambridge, United Kingdom: Cambridge University Press, 2000); J.E. Russo and P.J.H. Schoemaker, “Winning Decisions: Getting It Right the First Time” (New York: Doubleday, 2001); G. Gigerenzer and R. Selten, eds., “Bounded Rationality: The Adaptive Toolbox” (Cambridge, Massachusetts: MIT Press, 2002); D. Ariely, “Predictably Irrational: The Hidden Forces That Shape Our Decisions” (New York: HarperCollins, 2008); and M. Lewis, “The Undoing Project” (New York: W.W. Norton, 2016).

6. P.E. Tetlock and D. Gardner, “Superforecasting: The Art and Science of Prediction” (New York: Crown, 2015).

7. P.J.H. Schoemaker and P.E. Tetlock, “Superforecasting: How to Upgrade Your Company’s Judgment,” Harvard Business Review 94, no. 5 (May 2016): 72-78.

8. For more details about best practices for setting up and running prediction tournaments, see Schoemaker and Tetlock, “Superforecasting.”

9. Prediction tournaments are scored using a rigorous, widely accepted yardstick known as the Brier score. For more information about the Brier score, see G.W. Brier, “Verification of Forecasts Expressed in Terms of Probability,” Monthly Weather Review 78, no. 1 (January 1950): 1-3.

10. B. Fischhoff, “Debiasing,” in “Judgment Under Uncertainty,” ed. Kahneman, Slovic, and Tversky, 422-444; and J.S. Lerner and P.E. Tetlock, “Accounting for the Effects of Accountability,” Psychological Bulletin 125, no. 2 (March 1999): 255-275.

11. B. Fischhoff, “Debiasing”; G. Keren, “Cognitive Aids and Debiasing Methods: Can Cognitive Pills Cure Cognitive Ills?,” Advances in Psychology 68 (1990): 523-552; and H.R. Arkes, “Costs and Benefits of Judgment Errors: Implications for Debiasing,” Psychological Bulletin 110, no. 3 (November 1991): 486-498.

12. The term “bootstrapping” has a different meaning in statistics, where it refers to repeated sampling from the same data set (with replacement) to get better estimates; see, for example, “Bootstrapping (Statistics),” Jan. 26, 2017, https://en.wikipedia.org.

13. H.A. Wallace, “What Is in the Corn Judge’s Mind?,” Journal of the American Society of Agronomy 15 (July 1923): 300-304.

14. S. Rose, “Improving Credit Evaluation,” American Banker, March 13, 1990.

15. These tasks included, among others, predicting repayment of medical students’ loans. See R. Cooter and J.B. Erdmann, “A Model for Predicting HEAL Repayment Patterns and Its Implications for Medical Student Finance,” Academic Medicine 70, no. 12 (December 1995): 1134-1137. For more detail on how to build linear models — both objective and subjective — see A.H. Ashton, R.H. Ashton, and M.N. Davis, “White-Collar Robotics: Levering Managerial Decision Making,” California Management Review 37, no. 1 (fall 1994): 83-109. Especially useful is their discussion of possible objections to using linear models in applied settings, as in their example of predicting advertising space for Time magazine.

16. For a thorough analysis of the multiple reasons for this paradox, see C.F. Camerer and E.J. Johnson, “The Process-Performance Paradox in Expert Judgment: How Can Experts Know So Much and Predict So Badly?,” chap. 10 in “Research on Judgment and Decision Making,” ed. Goldstein and Hogarth.

17. Random noise can produce much inconsistency within as well as across experts; see R.H. Ashton, “Cue Utilization and Expert Judgments: A Comparison of Independent Auditors With Other Judges,” Journal of Applied Psychology 59, no. 4 (August 1974): 437-444; J. Shanteau, D.J. Weiss, R.P. Thomas, and J.C. Pounds, “Performance-Based Assessment of Expertise: How to Decide if Someone Is an Expert or Not,” European Journal of Operational Research 136, no. 2 (January 2002): 253-263; R.H. Ashton, “A Review and Analysis of Research on the Test-Retest Reliability of Professional Judgment,” Journal of Behavioral Decision Making 13, no. 3 (July/September 2000): 277-294; S. Grimstad and M. Jørgensen, “Inconsistency of Expert Judgment-Based Estimates of Software Development Effort,” Journal of Systems and Software 80, no. 11 (November 2007): 1770-1777; and A. Koriat, “Subjective Confidence in Perceptual Judgments: A Test of the Self-Consistency Model,” Journal of Experimental Psychology: General 140, no. 1 (February 2011): 117-139.

18. Beyond just predictions, noise reduction is a broad strategy for improving decisions; see D. Kahneman, A.M. Rosenfield, L. Gandhi, and T. Blaser, “Noise: How to Overcome the High, Hidden Cost of Inconsistent Decision Making,” Harvard Business Review 94, no. 10 (October 2016): 38-46.

19. The radiologist example was taken from P.J. Hoffman, P. Slovic, and L.G. Rorer, “An Analysis-of-Variance Model for Assessment of Configural Cue Utilization in Clinical Judgment,” Psychological Bulletin 69, no. 5 (May 1968): 338-349. Note that these were highly trained professionals making judgments central to their work. In addition, they knew that their medical judgments were being examined by researchers, so they probably tried as hard as they could. Still, their carefully considered judgments were remarkably inconsistent.

20. The average intra-expert correlation was .76, which equates to a 23% chance of getting a reversal in the ranking or scores of two cases from one time to the next. In general, a Pearson product-moment correlation of r translates into a [.5 − arcsin(r)/π] probability of a rank reversal of two cases the second time, assuming bivariate normal distributions; see M. Kendall, “Rank Correlation Methods” (London: Charles Griffin & Co., 1948).

21. A provocative brief for this structured numerical approach in medicine can be found in J.A. Swets, R.M. Dawes, and J. Monahan, “Better Decisions Through Science,” Scientific American, October 2000, 82-87.

22. For a general review of bootstrapping performance, see C. Camerer, “General Conditions for the Success of Bootstrapping Models,” Organizational Behavior and Human Performance 27, no. 3 (1981): 411-422, which builds on and refines the classic paper by K.R. Hammond, C.J. Hursch, and F.J. Todd, “Analyzing the Components of Clinical Inference,” Psychological Review 71, no. 6 (November 1964): 438-456.

23. G. Klein, “The Power of Intuition” (New York: Currency-Doubleday, 2004); and R.M. Hogarth, “Educating Intuition” (Chicago: University of Chicago Press, 2001). See also D. Kahneman and G. Klein, “Conditions for Intuitive Expertise: A Failure to Disagree,” American Psychologist 64, no. 6 (September 2009): 515-526.

24. P. Goodwin, “Integrating Management Judgment and Statistical Methods to Improve Short-Term Forecasts,” Omega 30, no. 2 (April 2002): 127-135; for medical examples, see J. Reason, “Human Error: Models and Management,” Western Journal of Medicine 172, no. 6 (June 2000): 393-396; and B.J. Dietvorst, J.P. Simmons, and C. Massey, “Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err,” Journal of Experimental Psychology: General 144, no. 1 (February 2015): 114-126.

25. R.C. Blattberg and S.J. Hoch, “Database Models and Managerial Intuition: 50% Model + 50% Manager,” Management Science 36, no. 8 (August 1990): 887-899.

26. Related cognitive processes involve associative networks, scripts, schemata, frames, and mental models; see J. Klayman and P.J.H. Schoemaker, “Thinking About the Future: A Cognitive Perspective,” Journal of Forecasting 12, no. 2 (1993): 161-186.

27. R. Hastie, S.D. Penrod, and N. Pennington, “Inside the Jury” (Cambridge, Massachusetts: Harvard University Press, 1983).

28. J. Klayman and Y.-W. Ha, “Confirmation, Disconfirmation, and Information in Hypothesis Testing,” Psychological Review 94, no. 2 (April 1987): 211-228; and J. Klayman and Y.-W. Ha, “Hypothesis Testing in Rule Discovery: Strategy, Structure, and Content,” Journal of Experimental Psychology: Learning, Memory, and Cognition 15, no. 4 (July 1989): 596-604.

29. T. Gilovich, “Something Out of Nothing: The Misperception and Misinterpretation of Random Data,” chap. 2 in “How We Know What Isn’t So: The Fallibility of Human Reason in Everyday Life” (New York: Free Press, 1991); see also N.N. Taleb, “Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets” (New York: Random House, 2004).

30. The best way to untangle the confounding effects is through controlled experiments, and even then it may be difficult. For a research example of how to do this, see P.J.H. Schoemaker and J.C. Hershey, “Utility Measurement: Signal, Noise and Bias,” Organizational Behavior and Human Decision Processes 52, no. 3 (August 1992): 397-424.

31. J.D. Sterman, “Business Dynamics: Systems Thinking and Modeling for a Complex World” (New York: McGraw-Hill, 2000).

32. For textbook introductions to some of these technologies, see J.M. Zurada, “Introduction to Artificial Neural Systems” (St. Paul, Minnesota: West Publishing Company, 1992); and S. Haykin, “Neural Networks: A Comprehensive Foundation,” 2nd ed. (Upper Saddle River, New Jersey: Prentice Hall, 1998).

33. “Finding a Voice,” Economist, Technology Quarterly, Jan. 7, 2017, 3-27; see also J. Turow, “The Daily You: How the New Advertising Industry Is Defining Your Identity and Your Worth” (New Haven, Connecticut: Yale University Press, 2011).

34. R. Copeland and B. Hope, “The World’s Largest Hedge Fund Is Building an Algorithmic Model From Its Employees’ Brains,” The Wall Street Journal, Dec. 22, 2016, www.wsj.com.

35. “Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD,” JASON Study JSR-16-Task-003, MITRE Corporation, McLean, Virginia, January 2017, https://fas.org/irp/agency/dod.

36. Prediction banks are a special case of the more general notion of setting up a mistake bank; see J.M. Caddell, “The Mistake Bank: How to Succeed by Forgiving Your Mistakes and Embracing Your Failures” (Camp Hill, Pennsylvania: Caddell Insight Group, 2013).

37. R. Feloni, “Billionaire Investor Ray Dalio’s Top 20 Management Principles,” Nov. 5, 2014, www.businessinsider.com.

38. A. Edmondson, “Psychological Safety and Learning Behavior in Work Teams,” Administrative Science Quarterly 44, no. 2 (June 1999): 350-383.

39. R.S. Michalski, J.G. Carbonell, and T.M. Mitchell, eds., “Machine Learning: An Artificial Intelligence Approach” (Berlin: Springer-Verlag, 1983).

40. See, for example, H. Kunreuther, R.J. Meyer, and E.O. Michel-Kerjan, eds. (with E. Blum), “The Future of Risk Management,” under review with the University of Pennsylvania Press.
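The rank-reversal arithmetic in note 20 can also be verified numerically. Under the bivariate normal assumption, two judgments with test-retest correlation r flip the ordering of a pair of cases with probability arccos(r)/π, which for r = .76 is about 23%, matching the figure in the note. This is a minimal simulation sketch; the function name is illustrative, not from the cited sources.

```python
import math
import random

def rank_reversal_probability(r, trials=200_000, seed=1):
    # Each case gets two scores (first and repeat judgment) drawn from a
    # bivariate normal with correlation r; count how often the ordering
    # of two cases flips between the first and repeat judgments.
    rng = random.Random(seed)
    flips = 0
    for _ in range(trials):
        cases = []
        for _ in range(2):  # two cases to be ranked
            first = rng.gauss(0.0, 1.0)
            repeat = r * first + math.sqrt(1.0 - r * r) * rng.gauss(0.0, 1.0)
            cases.append((first, repeat))
        (f1, r1), (f2, r2) = cases
        if (f1 - f2) * (r1 - r2) < 0:  # ordering flipped on the repeat
            flips += 1
    return flips / trials

# Closed form for comparison: P(reversal) = arccos(r) / pi
estimate = rank_reversal_probability(0.76)
closed_form = math.acos(0.76) / math.pi  # ~0.23
```

The simulated frequency and the closed-form arccos(r)/π agree closely, confirming that a .76 test-retest correlation implies roughly a one-in-four chance of reversing the ranking of two cases.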

Acknowledgments

The authors thank Rob Adams, Barbara A. Mellers, Nanda Ramanujam, and J. Edward Russo for their helpful feedback on earlier drafts.