Why Forecasts Fail. What to Do Instead

The field of forecasting has advanced significantly in recent years. But managers need to learn from history about what they can and cannot predict, and develop plans that are sensitive to surprises.


It seems like a long time ago in a galaxy far, far away. But in reality it was 2006, on this very planet. The entire world was booming, partly on the back of triple-A investment innovations devised by a master race of financial Jedi. And then: crash, bang, global recession. Suddenly it was all over. Triple-A turned into a euphemism for “subprime,” which itself began to translate into “toxic.” The banking Jedi were cast out with no bonuses — many into bankruptcy, takeover or nationalization. Welcome to the empire of the credit crunch.

The Leading Question

How can managers use forecasting tools to plan effectively and build better strategies?

  • In most areas of business, accurate forecasting is not possible. Future uncertainty is much greater than most managers acknowledge.
  • Statistical regularity does not imply predictability.
  • Instead of seeking predictability, managers should channel their efforts into being prepared for different contingencies.
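The second point above — that statistical regularity does not imply predictability — can be illustrated with a small simulation. The numbers here are hypothetical (a toy return process, not data from the article): even when a process's long-run statistics are stable and fully known, any single period remains essentially unpredictable, because rare turbulent days dwarf the typical day.

```python
import random
import statistics

random.seed(42)

# A hypothetical daily "return" process: mostly calm days, with rare
# turbulent days that have far larger swings (fat tails).
def daily_return():
    if random.random() < 0.99:           # 99% calm days
        return random.gauss(0.0005, 0.01)
    return random.gauss(0.0, 0.08)       # 1% turbulent days

returns = [daily_return() for _ in range(10_000)]

# "Statistical regularity": long-run averages are stable and knowable.
print(f"mean return:   {statistics.mean(returns):+.5f}")
print(f"std of return: {statistics.stdev(returns):.5f}")

# "Predictability": knowing those statistics barely narrows tomorrow.
# The best point forecast is close to zero, yet single days swing
# many times wider than the average day suggests.
print(f"worst day: {min(returns):+.3f}, best day: {max(returns):+.3f}")
```

Running this shows a tiny, stable mean alongside individual days an order of magnitude larger in either direction: the aggregate is regular, the next observation is not.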

By now, it’s a story as well known as “Star Wars.” But what fascinates us about the story of the crisis is one single, often overlooked fact — that almost no one saw it coming: none of the experts, none of the academics, none of the politicians and, as far as we know, none of the banking CEOs. So we think it’s time for business experts and practitioners to come to terms with the reality, harsh as it is, that accurate forecasts simply aren’t possible in their world. In addition to highlighting that alarming point, we’d like to offer some solace in the form of an analogy with natural disasters. We’ll also use our earthquake and hurricane comparisons to examine two types of uncertainty. Finally, we’ll provide a framework for making decisions, plans and strategies in the absence of accurate forecasts. Fundamentally, we believe that business needs a whole new attitude toward the future.





Comments (2)
JT Cooper
Forecasts fail because human beings not only develop the criteria that go into determining the forecasts; humans also don't always behave the way that forecasts predict they will. If we were forecasting the behavior of robots, we could depend on any forecast's predictions.
Eva van Bodegraven
So why do forecasts fail? This is not explained in this article at all... From what I understand, small uncertainties add up and can change the outcome enormously. It would be nice to either have a better explanation of why forecasts fail or at least an improved title that reflects the content.