Managing the Quality of Quantitative Analysis
Shouldn’t quantitative analysis — the results of which influence many managerial decisions — be held to “total quality” and “zero defect” standards? The author suggests that managers become exacting consumers of quantitative analysis, demanding and creating the proper environment for a high-quality product without logical or methodological defects. He shows managers how they can become more effective users of analysis, he identifies the ingredients of a sound quantitative analysis methodology, and he recommends ways to improve the quality of analysis in organizations.
- A Fortune “500” company uses discounted cash flow analysis to evaluate investment proposals. The company used the same discount rate from 1973 to 1986. Why? The formula for calculating the discount rate was established in 1973, the underlying methodology was never documented, and the person who derived the formula had left the company. Meanwhile, the prime rate changed from about 8% in 1973, to over 20% in 1981, to 8.5% in mid-1986. (A numerical sketch of what a stale discount rate can do to such an evaluation follows this list.)
- A large multidivisional organization uses pro forma models to project future sales, profits, cash flows, etc. It obtains corporate-level projections by consolidating division-level projections. The latter are estimated independently, with each division making its own assumptions about variables such as inflation rates, interest rates, and economic growth rates — variables that should be common to all divisions. How meaningful are the corporate-level projections?
- My MBA students developed a valuation model for a firm. Their valuations ranged from $20 million to $60 million. While trying to understand why the valuations were so different, we discovered that, in the case of at least one team, wide changes in critical assumptions did not result in expected changes in valuation. A painstaking study of the team’s spreadsheet revealed that some cells were not correctly referenced — a seemingly small blemish that cast a long shadow over the team’s effort.
- After a session on linear programming (LP) in an executive education program, one participant, an officer from the Department of Defense, mentioned that he had to allocate a multimillion-dollar order annually among competing bidders and — having just heard of LP — was eager to formulate the decision as an LP problem. It was fortunate he spoke with me before presenting his formulation to his colleagues: his was not an LP situation.
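To make the first example concrete, here is a minimal sketch, in Python and with invented cash flows, of how strongly a discounted cash flow evaluation depends on the discount rate. It does not reproduce the company’s actual model; only the magnitude of the swing matters.

```python
# A minimal sketch, with invented figures, of discount-rate sensitivity in DCF.

def npv(rate, cash_flows):
    """Net present value; cash_flows[t] arrives at the end of year t + 1."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

outlay = 1_000        # hypothetical up-front investment
inflows = [300] * 5   # hypothetical returns: 300 per year for five years

# Roughly the range the prime rate spanned between 1973 and 1986:
for rate in (0.08, 0.14, 0.20):
    print(f"rate {rate:.0%}: NPV = {npv(rate, inflows) - outlay:.0f}")

# rate 8%: NPV = 198    -> accept
# rate 14%: NPV = 30    -> marginal
# rate 20%: NPV = -103  -> reject
```

The same proposal is attractive at the rates of 1973 and a clear rejection at the rates of 1981; a discount rate frozen for thirteen years quietly corrupts every evaluation built on it.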
At first glance, these examples might be dismissed as an academician’s idle musings. That would be unfortunate: managers increasingly use basic quantitative tools such as financial modeling and spreadsheet analysis (and more sophisticated tools such as regression analysis, simulation, decision analysis, and optimization methods) to support decision making, and the most elementary of flaws can mar not only the quality of the analysis but also the quality of the decisions based on such analysis.
In the case of the first three examples, the flaws are straightforward and could easily have been avoided. Why then did they occur? More important, how can managers guard against poor analysis — analysis that is flawed because it is based on the improper use of appropriate tools or models (due to poor choice of parameters, modeling inconsistencies, logical fallacies, or just plain mistakes, as in the first three examples), or because it is based on the inappropriate use of specific tools or models (as in the fourth case)?
These questions would not be so important if the above examples were isolated cases, or if the practice of quantitative analysis were confined to specialized groups within organizations. But my experience — in the MBA classroom, in executive education programs, in field interviews, and in conversations with former students — suggests that the incorrect or inappropriate use of quantitative tools is not unusual.
Personal computers and decision-support software products are “democratizing” the practice of quantitative analysis within organizations. This provides a unique opportunity but, unfortunately, also presents a risk. More managers can benefit from the decision support that quantitative analysis can provide, but if the resulting analysis is flawed, then the organizational decisions based on the analysis will also be flawed.
One way to manage the risk is by not using quantitative analysis, which unfortunately would limit managers’ access to useful decision aids. A second way is by confronting the problem head on. We hear frequently about total quality and zero defects in manufacturing, service operations, marketing, and customer service. Why shouldn’t quantitative analysis — the results of which influence many managerial decisions — be held to similarly high standards? Managers must become exacting consumers of quantitative analysis, demanding and creating the proper environment for a high-quality product without logical or methodological defects.
My goal in this paper is to help managers become more effective consumers of analysis. I identify the key ingredients of a sound quantitative analysis methodology. I conclude with some recommendations for improving the quality of analysis in organizations.
In a sense, this paper is a primer, a consumer’s guide to the practice of quantitative analysis. The problems and ideals alluded to are age-old, and the reader may find my comments obvious and a matter of common sense. Yet even common sense bears explication — if for no other reason than to remind us that some things are important, but somehow never get the attention they deserve.
Effective Consumers of Quantitative Analysis
In a course on computer modeling and decision making, my students once asked me to present my model and analysis of a case (we usually discussed and evaluated the students’ efforts). There were few questions until, almost at the end, a student asked me to show again a graph of the key cash flows. “The graph does not make sense,” he said, “for the following reasons . . . ” The spell was broken, and questions flowed. The students had just taken an important step toward becoming effective consumers of quantitative analysis: they were demanding quality, and they were willing to play an active role to get it. This role involves the following seven elements.
Define the right problem. As a rule, I begin by asking a student to describe, in plain English, the central problem in an assigned case. The instructions are clear: prioritize, avoid jargon, and articulate clearly the decisions to make, the objectives to achieve, the trade-offs to evaluate, and the factors to consider in the analysis. While the task seems simple enough, the results can be disappointing: often I hear a long-winded and incoherent collection of statements.
If I am disappointed to see students struggle with defining a problem, I am even more disappointed to hear managers in business and industry describe problems ambiguously. Unclear definitions can result in organizational frustration, unnecessary or incorrect analysis, lost time, and bad or ineffective decisions.
One solution is to put pencil to paper, to articulate — clearly, concisely, and coherently — the important dimensions of the problem by writing a manager’s brief to an analyst (the same person may perform both roles). If the brief is unclear, the ultimate product will be too, a point often lost during problem solving when time is short and managers feel they have more pressing tasks. The act of writing concentrates the mind wonderfully and increases the chance that the ensuing analysis will solve the problem, not just some problem. Indeed, while preparing the brief, the manager may begin to understand the problem so well that a formal analysis may be unnecessary. Even if an analysis is needed, a clearly defined problem will probably result in reduced problem-solving time.
Once the analysis is done, evaluate it thoroughly and carefully. Just because a problem is properly defined does not mean that the decision model, the analytical tools, and the analysis are not flawed. The analyst might be solving the wrong problem or solving the right problem incorrectly. Ultimately, the manager should guard against such an outcome. If he or she abdicates this responsibility, the quality of the analysis will almost certainly suffer.
A manager should insist on a thorough explanation of the analysis. Unfortunately, some managers think that all analyses are “guilty unless proven otherwise,” some believe that an analysis consisting of hard numbers must be “innocent unless proven otherwise,” and some remain detached, leaving the analysis to analysts. Such attitudes are unhelpful and unnecessary.
Understand the analysis in intuitive, nontechnical terms. Any good analysis must pass the “plain English” test. A manager must make sure that, if the results of an analysis are to be implemented in an organization, the affected people will accept them, intellectually and intuitively. If he or she is satisfied with some esoteric explanation and is unable to explain it to others in the organization, implementing the recommendations of the analysis will be difficult.
It is useful to understand here that an analysis can be correct without being complicated and complicated without being correct. Managers should be wary of any analysis that is presented to them in obscure, technical terms; many business problems do not require esoteric analysis.
Distinguish between good problem solvers and fancy “number crunchers.” Earlier, I commented on the tendency to accept recommendations as correct because they are backed by “hard numbers.” This respect for, and fear of, quantitative things often extends to the people behind the numbers and can result in potentially disastrous outcomes in decision-making situations.
Managers should clearly distinguish between the two types of analysts and should encourage and support problem solvers. They might need number crunchers in special situations, but managers should be wary of the temptation to treat quantitative analysis as an end in itself.
Beware of common analytical errors. The four examples at the beginning of this paper illustrate some of the errors that may occur in the course of quantitative analysis. Catching such errors often requires not advanced management-science qualifications, but an alert and skeptical mind.
Focus first on the content and then on the form of an analysis. Given the widespread availability of personal computer software with presentation-quality formatting capabilities, it is easy to spend countless hours improving the format and appearance of an analytical report. One way to guard against this seduction is to insist that presentation-quality reports be prepared only after the manager has thoroughly examined and signed off on the rough, unadorned output.
Be tolerant of mistakes — up to a point. If a manager wants the analyst to experiment, learn, and be honest, he or she should be tolerant of mistakes in a novice analyst’s work.
An anxious former student once called me about a problem she had after only six months on the job. She had just presented, to some senior executives in her firm, the results of an analysis that claimed significant profit-improvement opportunities. While some executives were skeptical of the results, the CEO was pleased and planned to present the results to the firm’s board of directors the following week. After the presentation, the student discovered a major flaw in her spreadsheet and — every analyst’s nightmare — the promised profit improvements disappeared. She wondered how to explain this to her boss. After a long conversation, we decided that the best course of action was to tell the truth. Fortunately, the chief executive accepted the student’s apology, saying in effect, “We all make mistakes; they’re useful only if we learn from them and grow.”
My student’s admission of her mistake and her manager’s forgiveness are the first steps in the human resource development necessary for high-quality standards in quantitative analysis. Of course, there should not be many mistakes, and there cannot always be forgiveness.
Ingredients of a Sound Quantitative Analysis Methodology
In this section, I detail the essential ingredients of a good analysis. While they may appear obvious, their absence accounts for many errors of analysis and, hence, for errors in decision making.
A clear, concise, and coherent problem definition. I have already discussed this in the previous section; it is a necessary ingredient in any analysis, quantitative or otherwise.
A transparent representation of the problem structure and the accompanying analytical model. Once the decision problem is properly defined, the analyst should ensure that the model is developed in accordance with the problem brief. The consumers of the analysis should find the problem structure and the accompanying analytical model clear and expressed in a language they can understand.
The analyst should avoid the comfort and safety of mathematical jargon. He or she should use whatever tool best meets the needs of the problem, but should effectively communicate the structure of the problem and the logic of the analysis to the manager, preferably with examples the manager can intuitively relate to.
As in defining the problem, any time the analyst devotes to the development of transparent problem and model representations will not be wasted; the effort will encourage better communication, improved problem understanding, and, perhaps, shorter overall problem-solving time.
Choice of the right problem-solving tool. Three first-year students once wanted help in using regression analysis in their summer projects. As in the case of the Department of Defense official in my fourth example, it was fortunate they spoke with me before discussing this with their colleagues: in all three cases, regression analysis was not the correct analytical tool to use.
These are not just isolated cases. Personal observations, conversations with executives about the practice of quantitative analysis in their organizations, and information from former students working in consulting and financial services organizations all suggest that many quantitative tools are used, not because they are appropriate to the problem being solved, but because the problem solvers are familiar with them or because the tools are in vogue.
Choice of the appropriate computational vehicle. In choosing the right computational vehicle, begin with a “back of the envelope” analysis, using no computational aid other than the human brain. The tedium of longhand calculations and the paucity of space will discipline your approach. How far can you proceed with the analysis?
Now allow yourself the luxury of a calculator. How much better does the analysis get? Next, get more paper and use a computer. Does the analysis improve significantly? Finally, will the quality of the analysis be enhanced with unlimited computational resources? Anyone trying this experiment should not be too surprised to discover that the back-of-the-envelope method delivers a lot.
Unfortunately, analysts rarely practice such a disciplined approach. More often than not, the first thing they reach for is a computer’s on-off switch, only to discover — several hours and many recalculations later — that an informed evaluation of only three or four important numbers was necessary. Managers and analysts alike should remember that calculators, computers, and the like are aids; their use is not mandatory.
Explicit statement of all assumptions. In my third example, I discussed a student assignment on the valuation of a firm. The company’s chief executive, legal counsel, and investment bankers attended the class while the student teams were presenting their results. Needless to say, the wide discrepancy in valuations (between $20 million and $60 million) resulted in some embarrassed students and bemused visitors. Since the team presentations took up most of the class time, we could not establish the reasons for the discrepancies in class. Only later, after wading through reams of computer printouts, scores of good-looking charts, and immense spreadsheets, were we able to pin down the principal reason for the difference between the extreme valuations: the respective teams had assumed very different price-earnings ratios for calculating the terminal values of free cash flows.
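The mechanism is easy to demonstrate. In the sketch below, the cash flows, discount rate, and multiples are all invented; everything is held constant except the price-earnings ratio applied to terminal earnings.

```python
# A hypothetical illustration of how the terminal-value multiple drives a firm
# valuation; the cash flows, discount rate, and multiples are all invented.

def firm_value(free_cash_flows, terminal_earnings, pe_ratio, discount_rate):
    """DCF value: explicit-period free cash flows plus a P/E-based terminal value."""
    horizon = len(free_cash_flows)
    pv_fcf = sum(cf / (1 + discount_rate) ** (t + 1)
                 for t, cf in enumerate(free_cash_flows))
    pv_terminal = pe_ratio * terminal_earnings / (1 + discount_rate) ** horizon
    return pv_fcf + pv_terminal

fcf = [2.0, 2.5, 3.0, 3.5, 4.0]   # $ millions, years 1 through 5
for pe in (6, 15):                # two teams' differing terminal multiples
    value = firm_value(fcf, terminal_earnings=4.0, pe_ratio=pe, discount_rate=0.12)
    print(f"P/E {pe:>2}: firm value = ${value:.1f} million")

# P/E  6: firm value = $24.0 million
# P/E 15: firm value = $44.5 million
```

One assumption, buried in a terminal-value cell, moves the answer by some $20 million; the spread loosely echoes my students’ $20 million to $60 million range.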
Everyone should understand and discuss the important modeling assumptions at the beginning of any analysis. Heated arguments may result, but at least decisions will not be based on recommendations from analyses with hidden, questionable assumptions — assumptions that, once uncovered, force a reexamination of the analysis and cost the complete analytical effort its credibility.
Systematic spreadsheet development and documentation of the logic. Spreadsheet development is an art. My experience suggests the following minimum requirements. A good spreadsheet has a logical structure that the analyst and the decision makers can easily understand — not only at the time of the analysis, but in the future. Careful planning of the spreadsheet’s layout may take special effort, but it aids effective communication.
I find it useful to structure the spreadsheet in a sequence of pages, each accessible with the [PgUp] and [PgDn] keys. I use the first page (or the first few pages) of a multipage spreadsheet to enumerate the important assumptions of the model and to summarize the key results. This scheme offers at least three advantages. First, because the important assumptions sit in one location and drive all the calculations, there is no risk of changing an assumption in one part of the spreadsheet but not in another — a frequent and embarrassing problem. Second, when examining several alternatives, one can be sure that the consequences of the alternatives are compared for the same set of assumptions. And third, because the assumptions and results are close to each other, it is easy to see how the results change with changes in the assumptions. I have handed such spreadsheets over to others without instructions, and they have used them without any of the frustrations that typically accompany working with someone else’s spreadsheet.
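The discipline is not specific to spreadsheets. As a minimal sketch, with hypothetical names and numbers, the same “assumptions in one place” idea looks like this in code:

```python
# A minimal sketch of the "assumptions page" idea in code form; the names and
# numbers are hypothetical. Every assumption lives in one place, and all
# calculations are driven off that single set.

ASSUMPTIONS = {
    "sales_growth": 0.07,
    "base_sales":   100.0,   # $ millions, year 0
}

def project_sales(a, years):
    """Project sales using only the shared assumption set."""
    return [round(a["base_sales"] * (1 + a["sales_growth"]) ** t, 1)
            for t in range(1, years + 1)]

# To examine an alternative, change the assumption in one place only;
# every downstream figure then moves consistently.
base_case  = project_sales(ASSUMPTIONS, 3)                           # [107.0, 114.5, 122.5]
optimistic = project_sales(dict(ASSUMPTIONS, sales_growth=0.10), 3)  # [110.0, 121.0, 133.1]
```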
While a well-organized spreadsheet goes a long way toward “user-friendliness,” it is not necessarily a good spreadsheet; for that, its logic must also be correct. There is only one way to make sure that mistakes of logic, programming, or keying do not creep into the analysis: step-by-step debugging, preferably with a simple example that can be analyzed with the help of a sheet of paper and a pencil. This procedure might seem a waste of time, but it will go a long way toward avoiding costly mistakes and major embarrassments.
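To suggest what such a check can look like in practice, here is a sketch using a hypothetical NPV model: give the model inputs simple enough to verify by hand, and then insist on agreement.

```python
# A sketch of the pencil-and-paper check, applied to a hypothetical NPV model.

def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

# By hand: 110 received one year out, discounted at 10%, is worth exactly 100 today.
assert abs(npv(0.10, [110]) - 100.0) < 1e-9

# By hand: with no discounting, the model must simply add the cash flows.
assert npv(0.0, [50, 50, 50]) == 150.0
```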
Suppose you have a well-structured spreadsheet that has been carefully debugged. You are not yet done: most spreadsheet software products do a poor job of documenting the modeling logic. Like the other ingredients mentioned in this section, documentation of the logic is a thankless, time-consuming task, not necessarily as exciting as developing the model itself. But pull up a spreadsheet you developed, say, six months ago, choose a cell at random, try to figure out what is being calculated, and you will realize why documentation is crucial: “What does the entry in cell AK267, ‘=$B$4^K29+@SUM($A100..$A153)*E67’, mean?” you will ask yourself. Deciphering even simple cell entries can prove a difficult task, let alone @IF statements, macros, etc. Documentation might be tedious, but it is a necessary detail.
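The cure is the same in any tool: name the quantities and record the intent. The sketch below (all names are invented for illustration) contrasts a cryptic cell entry with a self-documenting equivalent.

```python
# The documentation problem in miniature. A cell entry such as
#     =$B$4^K29+@SUM($A100..$A153)*E67
# is write-only: correct, perhaps, but undecipherable six months later.
# The same logic with named quantities documents itself.

def accumulated_value(growth_factor, years, deposits, conversion_rate):
    """An initial unit grown over the horizon, plus total deposits converted
    at the prevailing rate: the legible twin of the cryptic entry above."""
    return growth_factor ** years + sum(deposits) * conversion_rate
```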
The sensitivity of the analysis to changes in assumptions. No analysis is complete until the analyst has explored the sensitivity of the recommendations to changes in the assumptions. While this step is emphasized in the teaching of most analytical tools, it is often overlooked because of time pressures.
A systematic sensitivity analysis serves at least three functions. First, to the extent that sensitivity can be examined only for known assumptions, it underscores the importance of an explicit recognition of the important assumptions. Second, it improves the decision maker’s understanding of the problem. And third, it is a useful way to identify and eliminate logical and methodological errors.
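The simplest version is a one-way sweep: vary one assumption at a time over a plausible range, hold the others at their base values, and watch the output. The sketch below, with an invented toy model and invented ranges, shows the mechanics.

```python
# A one-way sensitivity sweep over a toy profit model; the base values and
# ranges are invented. Each assumption is varied alone while the others are
# held at their base values.

def profit(a):
    """Toy model: profit in $ millions from a few named assumptions."""
    return a["units"] * a["price"] * (1 - a["cost_ratio"])

base   = {"units": 10.0, "price": 5.0, "cost_ratio": 0.6}
ranges = {"price": (4.0, 6.0), "cost_ratio": (0.5, 0.7)}

for name, (low, high) in ranges.items():
    outcomes = [round(profit(dict(base, **{name: value})), 1)
                for value in (low, base[name], high)]
    print(name, outcomes)

# price      [16.0, 20.0, 24.0]  <- a wide swing flags the assumption to probe
# cost_ratio [25.0, 20.0, 15.0]
```

An assumption whose range barely moves the answer can be set and forgotten; one that swings the answer deserves the explicit discussion urged above.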
Credible and effective communication of justified recommendations. It is crucial to remember that quantitative analysis is a means to a business decision, not an end in itself. The analyst’s job is not done until he or she offers — effectively and credibly — the recommendations that he or she believes are justified.
In this context, the analyst has a responsibility to present (tactfully) things as they are, not to alter them so they appear as others want them to be. It is very easy to change a number here or there, to rescale a graph, or to disguise an important assumption, thus altering the nature of the recommendation and producing a desired result. Furthermore, the organizational pressures to do so can be significant. The analyst must resist these adjustments at all costs; otherwise the whole methodology of quantitative analysis will lose credibility in the organization.
Here are some rules for effectively communicating the recommendations of an analysis:
- Be simple, but not condescending.
- Be straightforward. Complexity is often an obstacle and rarely a virtue. Avoid jargon.
- Accept criticism and an honest appraisal (including self-appraisal) of the model and the analytical results.
- Be honest.
- Make creative use of multiple media: transparencies (with drawings, text, tables, charts), blackboard, paper and pencil, even the computer.
Managing for Analytical Quality
The previous two sections considered the roles of the consumer and the analyst in improving the methodology and effectiveness of quantitative analysis. This section focuses on the organizational initiatives necessary for quality improvement.
Surveying the status and validity of quantitative analysis in the organization. An executive from a large paper-manufacturing company participated in a six-session introductory module on relevant cost analysis, discounted cash flow analysis, decision analysis, and linear programming. The executive was already a consumer of these analytical tools, but it was only during the course that he gained some understanding of the “products” he had been consuming. He also learned the questions to ask and the traps to avoid. Armed with this newfound knowledge, the executive proposed a survey of the status and validity of quantitative analysis in his organization. He had two goals: (1) now that he understood the analysts’ language, he wanted to find out what they were doing and to see if their analyses were correct; and (2) he wanted to apply some of the tools he had learned in the six sessions.
This executive’s idea to survey the practice of quantitative analysis in his organization is intriguing. In this paper, I have given several examples of poor analytical practice that many of us can relate to. It is frightening to think that important decisions may be based on erroneous analyses, not in a handful of organizations, but across a spectrum of industries. At the same time, managers do not use some analytical tools because they are not aware of their potential. This is particularly disheartening because computer hardware and decision-support software are now so readily available that managers can act as their own analysts and easily apply a range of quantitative tools. Both considerations — the possibility of serious errors in the quality of analysis and the potential for useful applications of quantitative methods — strongly support the executive’s proposal of a survey.
How should such a survey be carried out? Ideally, the person conducting the survey should be from outside the business unit or organization that is being studied; not only would an “outsider” be more objective, he or she would be able to offer a new perspective on the analytical problems and methodologies being surveyed. While the surveyor should have a strong background in the use of quantitative methods or analysis, he or she need not have an advanced mathematical degree or a senior academic appointment: these qualifications often lead to a fascination with the problem-solving tool to the neglect of the problem. How do you find a person who is good at analysis and, at the same time, genuinely interested in — and capable of — helping managers become discerning consumers of these tools? Unfortunately, there is no easy answer.
The surveyor should begin by developing two catalogs. The first catalog should identify the different quantitative analytical methodologies that are employed, the analysts who execute these methodologies, the set of decisions that are made with the help of the analysis, and the decision makers. A second catalog should focus on the full range of decisions that the organizational unit has made — whether or not it has used quantitative analysis. While the catalogs will be useful end products themselves, the principal value of the exercise is in the information-gathering process: the surveyor will have to talk with both the analysts and the consumers of analysis, and the dialogue will almost certainly cast quantitative analysis in a new light. It will also suggest additional opportunities for the application of quantitative methods in support of decision making.
The next step is to provide feedback to the analysts and the consumers, going well beyond the two catalogs. I suggest a multisession feedback process, in which the surveyor alternates between (1) comments on the specific methodologies, applications, and decision contexts in the organizational units; and (2) discussion of other cases. Outside examples do more than just motivate. First, because they are from the outside, they do not necessarily threaten the analyst or consumer, minimizing the chances of a defensive reaction. Second, because they can be stylized, they can offer simple paradigms that people can relate to. And third, they require the survey participants to distance themselves from familiar territory and return — I hope — via a different path. After the survey feedback, the analyst, the decision maker, and the external facilitator should continue to communicate periodically about the practice of quantitative analysis.
Overcoming “math-phobia” in the organization. Anxious executives in my executive education classes on quantitative methods are usually unsure if they can handle the “math.” Similarly, my MBA students often say, “It’s been ten years since I studied math in high school. I didn’t understand it then, and I’m scared of it now.”
If these are representative samples of managers’ receptiveness to the application of quantitative methods in decision making, it is little wonder that the quality of analysis is occasionally poor. The fear of mathematics is a very real phenomenon that cannot simply be wished away, but most real-life applications of quantitative analysis do not require complicated mathematics. Indeed, all that is required is common sense, the patient application of addition, subtraction, multiplication, and division, some perseverance, and a dogged insistence that the analysts explain their work simply and clearly.
In the past, when prospective students have voiced their fears of things quantitative, I have always appealed to their faith in themselves and asked them to try out the first few classes. If the feedback I have received can be trusted, most of these students have not regretted the advice. It takes no more than two or three classes before the phobia disappears and is replaced, not by destructive overconfidence, but by a healthy faith in the student’s own abilities, a receptive attitude toward the use of quantitative methods, and a questioning eye. This observation leads me to suggest that one way to overcome institutional “math-phobia” is to bring the classroom to the company and create a nonthreatening environment that is conducive to acquiring and applying quantitative analysis skills. Some companies have training programs to teach basic quantitative tools to their young, incoming employees. Why don’t they offer similar programs for existing employees and senior managers, albeit with different instructional materials, pedagogical styles, and learning agendas? These people are likely to be older, more fearful of using quantitative methods, and — paradoxically — more likely to be the ultimate consumers of analysis. They are the ones who can benefit most from the programs.
Initiating training in the proper application of quantitative techniques within the organization. Formal and informal education in commonly used analytical techniques can go a long way toward improving the overall quality of the quantitative analysis in an organization, and the learning takes place in a nonthreatening environment. Some organizations have such programs, but most shy away from them, partly because the benefits are not easily discerned and partly because of the cost. However, companies spend large sums of money on sophisticated computer hardware and software and many hours on developing spreadsheets, charts, reports, etc. — investments that are wasted if the people do not know how to use the resources effectively. Also, wrong decisions resulting from erroneous analysis incur costs, and opportunities for new applications of analysis are missed.
Making appropriate investments in infrastructure. The application of quantitative methods in an organization needs investments not only in people but also in the supporting infrastructure, such as computer hardware, software, data, etc. In this context, organizational investment in computer hardware and software is already occurring rapidly. While there are legitimate questions about the fit between the type of investment and the organization’s needs, the more important problem in this area is the gap between the resources already available to managers and analysts and their ability to apply those resources to effective analysis. The discussion in this paper should help close some of the gap.
The case of data deserves a paper of its own. Many organizations do not have systems to generate data in a useful form for quantitative analysis. Often, economic data come from antiquated accounting and performance systems not suited to decision making; operational data tend to be out of date and unreliable; and external data are difficult to import in electronic form. Patched-up information systems of multiple vintages add to the complexity associated with the management and use of data.
Conclusion
On the one hand, business organizations (and the business schools that produce some of the ultimate consumers of quantitative analysis) are investing significantly in computational hardware and software and developing spreadsheets of all types and sizes. On the other hand, the same institutions are increasingly turning their backs on “bean-counters.” At first glance, these two trends seem paradoxical, but they are not contradictory. The first offers a unique opportunity to bring quantitative analysis closer to the manager. The second suggests that the practice of quantitative methods must change: analysis should no longer be used merely to justify managerial decisions, almost to the exclusion of other, more relevant criteria. Indeed, the second trend is understandable in light of the relatively poor quality of quantitative analysis that often prevails in the business world. By paying proper attention and demanding quality, however, managers can become exacting consumers and can improve the quality of quantitative analysis.