Using Simulated Experience to Make Sense of Big Data

As data analyses get more complex, how can companies best communicate results to ensure that decision makers have a proper grasp of the data’s implications?

In an increasingly complex economic and social environment, access to vast amounts of data and information can help organizations and governments make better policies, predictions and decisions. Indeed, more and more decision makers rely on statistical findings and data-based decision models when tackling problems and forming strategies. Scientists, researchers, technologists and journalists have all been monitoring this tendency, trying to understand when and how this approach is most useful and effective.1

So far, discussions have centered mainly on analysis: data collection, technological infrastructure and statistical methods. Yet another vital issue receives far less scrutiny: how analytical results are communicated to decision makers. As data sets grow larger and analyses more complex, how can analysts best communicate results to ensure that decision makers properly understand their implications?

Communicating Statistical Information

However well executed an analysis may be, its usefulness depends on how its results are understood by the intended audience. Consider a patient visiting a doctor about an illness. Arguably, the most important task is diagnosing the disease, since the diagnosis guides the choice of an appropriate treatment. Yet even if the final decision lies with the patient, the treatment chosen may depend on how the doctor communicates the different options. The same is true when an investor consults a financial expert or a manager seeks the services of a consulting firm.

Data science, like medical diagnostics or scientific research, lies in the hands of expert analysts, who must explain their findings to executive decision makers who are often less versed in formal statistical reasoning. Yet many behavioral experiments have shown that when the same statistical information is conveyed in different ways, people make drastically different decisions.2 Consequently, there is often a large gap between the conclusions analysts reach and what decision makers understand. Here, we address this issue by first identifying the strengths and weaknesses of the two most common modes of communicating results: description and illustration. We then present a third method — simulated experience — that enables intuitive interpretation of statistical information, thereby communicating analytical results even to decision makers who are naïve about statistics.
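To give a concrete sense of what simulated experience can look like in practice, the short Python sketch below is a hypothetical example, not taken from the article: it assumes an analyst's forecast of annual portfolio returns follows a normal distribution with invented parameters, and then lets a decision maker draw one simulated outcome at a time instead of reading a summary statistic.

```python
import numpy as np

# Hypothetical forecast used only for illustration: annual portfolio
# returns assumed to be normally distributed with a 6% mean and a
# 15% standard deviation (parameters invented for this sketch).
MEAN_RETURN = 0.06
STD_RETURN = 0.15

rng = np.random.default_rng(seed=1)

def draw_outcome() -> float:
    """Draw one simulated annual return, as a decision maker might
    'experience' it by clicking a button in an interactive tool."""
    return rng.normal(MEAN_RETURN, STD_RETURN)

# Simulated experience: present individual outcomes one at a time,
# rather than a single summary statistic such as the mean.
for year in range(1, 11):
    print(f"Simulated year {year:2d}: return {draw_outcome():+.1%}")
```

Watching a run of individual outcomes in this way makes the chance of a losing year far more salient than a statement such as "expected return 6%, standard deviation 15%."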

Description

Description is the default mode of presenting statistical information.
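For contrast with the simulation sketch above, a purely descriptive presentation of the same hypothetical forecast might condense the entire distribution into a few summary numbers, the kind of output that statistically naïve decision makers often struggle to act on (parameters again invented purely for illustration).

```python
import numpy as np

# The same hypothetical forecast as in the sketch above
# (illustrative parameters only).
rng = np.random.default_rng(seed=1)
returns = rng.normal(0.06, 0.15, size=10_000)

# Description: condense the whole distribution into summary statistics.
low, high = np.percentile(returns, [2.5, 97.5])
print(f"Expected annual return: {returns.mean():+.1%}")
print(f"Standard deviation:     {returns.std():.1%}")
print(f"95% interval:           {low:+.1%} to {high:+.1%}")
```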

References

1. P. Simon, “Too Big to Ignore: The Business Case for Big Data” (Hoboken, New Jersey: John Wiley & Sons, 2013) offers an overview of business applications of data science. See also R. Fildes and P. Goodwin, “Against Your Better Judgment? How Organizations Can Improve Their Use of Management Judgment in Forecasting,” Interfaces 37, no. 6 (November-December 2007): 570-576.

2. A. Tversky and D. Kahneman, “The Framing of Decisions and the Psychology of Choice,” Science 211, no. 4481 (January 30, 1981): 453-458; A. Tversky, P. Slovic and D. Kahneman, “The Causes of Preference Reversal,” American Economic Review 80, no. 1 (March 1990): 204-217; C.K. Hsee, G.F. Loewenstein, S. Blount and M.H. Bazerman, “Preference Reversals Between Joint and Separate Evaluations of Options: A Review and Theoretical Analysis,” Psychological Bulletin 125, no. 5 (September 1999): 576-590; A. Tversky and R.H. Thaler, “Anomalies: Preference Reversals,” Journal of Economic Perspectives 4, no. 2 (spring 1990): 201-211; and for a specific case study, see J. Koehler, “Psychology of Numbers in the Courtroom: How to Make DNA-Match Statistics Seem Impressive or Insufficient,” Southern California Law Review 74 (2001): 1275-1306.

3. E. Soyer and R.M. Hogarth, “The Illusion of Predictability: How Regression Statistics Mislead Experts,” International Journal of Forecasting 28, no. 3 (July-September 2012): 695-711.

4. R.M. Hogarth and E. Soyer, “A Picture’s Worth a Thousand Numbers,” Harvard Business Review 91, no. 6 (June 2013): 26.

5. D. Spiegelhalter, M. Pearson and I. Short, “Visualizing Uncertainty About the Future,” Science 333, no. 6048 (September 9, 2011): 1393-1400.

6. H. Rosling, “The Best Stats You’ve Ever Seen,” TED talk filmed February 2006, www.ted.com.

7. An interface shows the relationship between time and returns based on daily data. See D. Egan, “It’s About Time in the Market, Not Market Timing,” October 14, 2014, www.betterment.com.

8. R.M. Hogarth and E. Soyer, “Sequentially Simulated Outcomes: Kind Experience Versus Nontransparent Description,” Journal of Experimental Psychology: General 140, no. 3 (August 2011): 434-463; R.M. Hogarth and E. Soyer, “Providing Information for Decision Making: Contrasting Description and Simulation,” Journal of Applied Research in Memory and Cognition, in press, published online January 29, 2014; R.M. Hogarth and E. Soyer, “Communicating Forecasts: The Simplicity of Simulated Experience,” Journal of Business Research, in press.

9. G. Shafer, “The Early Development of Mathematical Probability,” in “Companion Encyclopedia of the History and Philosophy of the Mathematical Sciences, Volume 2,” ed. I. Grattan-Guinness (London and New York: Routledge, 1993): 1293-1302.

10. L. Hasher and R.T. Zacks, “Automatic and Effortful Processes in Memory,” Journal of Experimental Psychology: General 108, no. 3 (September 1979): 356-388; L. Hasher and R.T. Zacks, “Automatic Processing of Fundamental Information: The Case of Frequency of Occurrence,” American Psychologist 39, no. 12 (December 1984): 1372-1388; P. Sedlmeier and T. Betsch, “Etc. Frequency Processing and Cognition” (New York: Oxford University Press, 2002).

11. D.G. Goldstein, E.J. Johnson and W.F. Sharpe, “Choosing Outcomes Versus Choosing Products: Consumer-Focused Retirement Investment Advice,” Journal of Consumer Research 35, no. 3 (October 2008): 440-456.

12. M.A. Bradbury, T. Hens and S. Zeisberger, “Improving Investment Decisions With Simulated Experience,” Review of Finance, published online June 6, 2014.

13. C. Kaufmann, M. Weber and E. Haisley, “The Role of Experience Sampling and Graphical Displays on One’s Investment Risk Appetite,” Management Science 59, no. 2 (February 2013): 323-340.

14. J.D. Sterman, “Communicating Climate Change Risks in a Skeptical World,” Climatic Change 108, no. 4 (October 2011): 811-826.

15. R.M. Hogarth, K. Mukherjee and E. Soyer, “Assessing the Chances of Success: Naïve Statistics Versus Kind Experience,” Journal of Experimental Psychology: Learning, Memory, and Cognition 39, no. 1 (January 2013): 14-32.

16. B.K. Hayes, B.R. Newell and G.E. Hawkins, “Causal Model and Sampling Approaches to Reducing Base Rate Neglect,” in “Proceedings of the 35th Annual Conference of the Cognitive Science Society,” eds. M. Knauff, M. Pauen, N. Sebanz and I. Wachsmuth (Austin, Texas: Cognitive Science Society, 2013).

17. Probability Management is an organization that aims to improve communication of uncertainty through open-source decision support tools. More information can be found at www.probabilitymanagement.org.
