Managing the Quality of Quantitative Analysis
Shouldn’t quantitative analysis — the results of which influence many managerial decisions — be held to “total quality” and “zero defect” standards? The author suggests that managers become exacting consumers of quantitative analysis, demanding and creating the proper environment for a high-quality product without logical or methodological defects. He shows managers how to become more effective users of analysis, identifies the ingredients of a sound quantitative analysis methodology, and recommends ways to improve the quality of analysis in organizations.
- A Fortune “500” company uses discounted cash flow analysis to evaluate investment proposals. The company used the same discount rate from 1973 to 1986. Why? The formula for calculating the discount rate was established in 1973, the underlying methodology was never documented, and the person who derived the formula had left the company. Meanwhile, the prime rate changed from about 8% in 1973, to over 20% in 1981, to 8.5% in mid-1986.
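To see why a stale discount rate matters, consider a minimal sketch of discounted cash flow analysis. The cash flows and rates below are hypothetical (the rates loosely echo the prime-rate swing the example describes); the point is only that the same project can look attractive at one rate and unattractive at another.

```python
def npv(rate, cash_flows):
    """Net present value, where cash_flows[t] arrives at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project: $1,000 outlay, then $300 per year for five years.
flows = [-1000, 300, 300, 300, 300, 300]

npv_low = npv(0.08, flows)   # a rate in the neighborhood of 1973 or mid-1986
npv_high = npv(0.20, flows)  # a rate in the neighborhood of 1981
print(f"NPV at 8%:  {npv_low:.0f}")   # positive: accept
print(f"NPV at 20%: {npv_high:.0f}")  # negative: reject
```

A company evaluating this project with a discount rate frozen in 1973 would have accepted it straight through 1981, when the then-current cost of capital said to reject it.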
- A large multidivisional organization uses pro forma models to project future sales, profits, cash flows, etc. It obtains corporate-level projections by consolidating division-level projections. The latter are estimated independently, with each division making its own assumptions about variables such as inflation rates, interest rates, and economic growth rates — variables that should be common to all divisions. How meaningful are the corporate-level projections?
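The consolidation problem can be made concrete with a toy sketch. The divisions, sales figures, and inflation assumptions below are invented; the flaw they illustrate is that the corporate total blends projections built on incompatible views of the same economy, so it corresponds to no single consistent scenario.

```python
# Each division projects next-year sales with its own inflation assumption,
# even though inflation is common to all divisions.
divisions = {
    "Division A": {"sales": 100.0, "assumed_inflation": 0.03},
    "Division B": {"sales": 200.0, "assumed_inflation": 0.08},
}

projections = {
    name: d["sales"] * (1 + d["assumed_inflation"])
    for name, d in divisions.items()
}

corporate_total = sum(projections.values())
print(f"Consolidated projection: {corporate_total:.0f}")
# The total mixes a 3% world with an 8% world -- it is internally inconsistent,
# and no one reviewing the corporate number can tell.
```

A sounder methodology would fix economy-wide assumptions centrally and let divisions vary only the assumptions that are genuinely theirs.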
- My MBA students developed a valuation model for a firm. Their valuations ranged from $20 million to $60 million. While trying to understand why the valuations were so different, we discovered that, in the case of at least one team, wide changes in critical assumptions did not result in expected changes in valuation. A painstaking study of the team’s spreadsheet revealed that some cells were not correctly referenced — a seemingly small blemish that cast a long shadow over the team’s effort.
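The check the class stumbled into can be made routine: perturb a critical assumption and verify that the output actually moves. The valuation function below is a deliberately simple stand-in (its names and figures are hypothetical), but the sanity test is exactly what a broken cell reference fails.

```python
def valuation(growth_rate, operating_margin):
    """Toy valuation: next-year revenue times margin times a fixed multiple."""
    revenue = 50.0 * (1 + growth_rate)
    return revenue * operating_margin * 10.0

base = valuation(growth_rate=0.05, operating_margin=0.20)
shocked = valuation(growth_rate=0.25, operating_margin=0.20)  # wide change in a key input

# If a wide change in a critical assumption leaves the valuation unchanged,
# something upstream is disconnected -- the spreadsheet analogue of a
# mis-referenced cell.
assert shocked != base, "valuation is insensitive to growth; check references"
```

Running such perturbation tests on every critical input is cheap, and it would have exposed the mis-referenced cells long before the teams compared answers.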
- After a session on linear programming (LP) in an executive education program, one participant, an officer from the Department of Defense, mentioned that he had to allocate a multimillion-dollar order annually among competing bidders and — having just heard of LP — was eager to formulate the decision as an LP problem. It was fortunate he spoke with me before presenting his formulation to his colleagues: his was not an LP situation.
At first glance, these examples might be dismissed as an academician’s idle musings. That would be unfortunate. Managers increasingly use basic quantitative tools such as financial modeling and spreadsheet analysis, as well as more sophisticated tools such as regression analysis, simulation, decision analysis, and optimization methods, to support decision making — and the most elementary of flaws can mar not only the quality of the analysis but also the quality of the decisions based on it.
In the case of the first three examples, the flaws are straightforward and could easily have been avoided.