Performance variability frustrates managers everywhere. It takes a variety of forms: vastly different sales figures for similar retail stores in similar neighborhoods; significantly varying productivity rates at factories producing the same products; major differences in insurance payments for similar auto accidents. Companies make strenuous efforts to reduce such differences as the financial benefits that result when laggards imitate leaders are often immense. For example, Ford Motor Co. claims to have saved $886 million after four years of sharing best practices throughout its manufacturing sites.1

In their quest to reduce performance variability, however, managers often go too far. By forcing workers to “copy exactly” or “follow instructions exactly” in every situation, they make it far more difficult for people to use their own judgment and knowledge to solve problems that would benefit from a new approach. Hence the dilemma: How can companies reduce performance variability without stifling their employees’ discretion and ability to innovate?

The answer lies in the distinction between processes and practices. Many efforts to reduce variability focus on refining processes as the primary intervention — the enormous success of Six Sigma at General Electric Co. and Motorola Inc., for example, results from the use of established statistical process controls to eliminate deviations in quality. Despite such process change, however, variability often persists because of differences in practice.2 While a process outlines how tasks are to be organized, practice refers to the way those tasks are understood and actually performed. And practice is rarely based on narrow definitions that show how to complete a job from A to Z; more often, it stems from stories, principles, heuristics (rules of thumb) and expertise that emerge over time and combine to create a basis for action.3 The fluid nature of practice, then, generates new approaches to work, while process refinements make existing work approaches more efficient. To get the best of both, managers must find a balance between streamlining processes and allowing employees the freedom to improve practices.

Such balance is difficult to achieve. It has been argued, in fact, that managers have to choose between innovation and replication — they can’t have both — because effective replication does not allow for adaptation.4 According to this argument, the invisible complexities of most systems prohibit successful attempts to cherry-pick parts of a process or to customize processes on the basis of local conditions. Managers should instead copy a template as closely as possible.

Similarly, it has been suggested that companies should create “ambidextrous” or dual organizational forms in order to isolate process from practice.5 According to this view, process management leads to a diffusion of techniques that favor “exploitative” activities — those in which innovation occurs along an existing technological trajectory and within an established customer or market segment — over purely exploratory activities. The two types of work need to be kept in separate units, by this logic, with strategic integration achieved by the senior team.

In suggesting such a drastic separation of approaches at the corporate or division level, this argument misses a fundamental point: Individual managers already make trade-offs between process and practice every day. Having studied this issue in depth, we found that the appropriate intervention to reduce differences in performance is not a matter of organizational change; rather, it depends on individual work practices — their frequency and predictability. (See “About the Research.”) Practices that are more frequent and predictable tend to be more conducive to rigid duplication, while those that are rare and unpredictable have greater need for flexibility and innovation. That’s why it’s not enough to have a balance between uniformity and discretion at the company level: Each group of practitioners within an organization must also have it.


When a balance exists, employees are able to resolve problems with the greatest efficiency. To achieve it, companies must categorize work practices by frequency and predictability. Next, they should create an integrated program so that employees can take different approaches as circumstances dictate. But before they embark on these activities, managers should identify which variations in performance are most important to the organization.

Identifying Key Areas of Variability

Not all variations in performance are equally important to companies, so it is necessary to decide which areas merit the greatest attention. To do that, managers should identify variation in key metrics: Where are differences in performance causing the most competitive damage? For a retail outlet, it may be the average purchase size or labor costs; for a sales operation, it may be the close rate of its salespeople.

Some variations can be traced to things beyond managerial control, such as inflation or an industry downturn. To carry out proper comparisons, executives should look at peers within the organization. A bank’s senior managers, for example, should compare branches in similar competitive environments, and the leaders of a manufacturing company should compare factories producing similar products.

The next step is to clarify targets — that is, to decide on the outcome that should apply across the board. A target is typically based on the top performance for a given metric; even a retail outlet boasting the highest average purchase size in its group, for example, is likely to have its labor-cost target set by the store with the lowest costs. Finally, managers focused on this issue must estimate the cost of reducing a given variation; the change must be significant enough to warrant the expenditure.
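The arithmetic behind this step can be sketched in a few lines of code. The sketch below is purely illustrative (the function name, unit labels and figures are hypothetical, not drawn from the article), but it shows how comparing each peer unit's cost metric against the group's best performer yields both a target and a rough estimate of what closing the gap is worth.

```python
# Hypothetical sketch: quantify variability across peer units and
# estimate the payoff of bringing laggards up to the benchmark.
# Unit names and figures are illustrative, not from the article.

def savings_if_benchmarked(metric_by_unit, volume_by_unit):
    """For a cost metric where lower is better (e.g., drilling days
    per well), the target is the best performer's figure, and each
    unit's potential saving is (its cost - best cost) * its volume."""
    best = min(metric_by_unit.values())
    return {
        unit: (cost - best) * volume_by_unit[unit]
        for unit, cost in metric_by_unit.items()
    }

# Drilling days per well across a peer group of business units
days_per_well = {"unit_a": 100, "unit_b": 65, "unit_c": 42}
wells_per_year = {"unit_a": 10, "unit_b": 8, "unit_c": 12}

gap = savings_if_benchmarked(days_per_well, wells_per_year)
# unit_a stands to save (100 - 42) * 10 = 580 rig-days per year
print(gap)  # {'unit_a': 580, 'unit_b': 184, 'unit_c': 0}
```

The point of the last step in the article — estimating the cost of reducing a variation — is that an intervention is justified only when numbers like these exceed what the intervention itself would cost.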

The exploration and production business of British Petroleum (now known as BP Plc) divided its 40 business units into four peer groups on the basis of stages in the exploration and production process — finding a new field, developing a new field, maintaining production in a mature field or ramping down a late-stage field.6 The units within each group face similar challenges and have clear performance metrics. For example, the group that develops new fields focuses on the time it takes to drill a deepwater well. Over the course of two years, it reduced the average time from 100 to 42 days, taking a large chunk out of the $2 billion BP spends on drilling each year. These peer groups aren’t static — units shift into new groups when they enter new phases of the business life cycle.

Of course, in addition to measuring variability against other units within the company, it is also critical to use competitive benchmarks to measure performance variation.7 If an organization’s best unit is equal to the average unit at a competitor, sharing internal best practices will not be a winning strategy.

Categorizing Practices

Once the metrics and benchmarks are clear, managers can analyze the frequency and predictability of the practices that affect outcomes. The frequency of a practice is important because it determines the value of codification; the higher the frequency, the more it makes sense to codify rules or analogous cases. The degree of predictability is important because it determines the need for judgment to override guidelines: The less predictable the situation, the greater the need for flexibility within interventions. (Employee experience is another important factor. Less experienced people are often likely to value rules, while more experienced people are more likely to want — and deserve — flexibility to experiment. The most experienced practitioners, “wise old bulls,” usually drive innovative problem solving and informally govern the evolution of a given practice.)

Once managers categorize the practices, they can reduce variation in ways that are appropriate to specific practices. (For a graphic representation of the process, see “Three Steps to Reducing Variability.”) The results will vary along with the methods: Although it is possible to virtually eliminate variability within predictable practices, it is more difficult (and less desirable) to eliminate variability completely within unpredictable practices. But the financial rewards of even moderate reductions in differences can still be significant. On the other hand, practices that are both low in frequency and highly predictable typically don’t have enough financial impact to justify intervention.
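One way to read this categorization is as a simple two-by-two lookup. The sketch below is a hypothetical illustration, not an implementation described in the article: it maps a practice's frequency and predictability to the family of interventions the framework recommends for that quadrant.

```python
# Illustrative sketch of the frequency/predictability framework.
# The function name and labels are hypothetical; the quadrant
# recommendations summarize the article's three intervention types.

def recommend_intervention(frequency, predictability):
    """frequency and predictability are each 'high' or 'low'."""
    if frequency == "high" and predictability == "high":
        # Remove judgment: codify one best way to perform the practice
        return "standardize: rules, templates, decision algorithms, manuals"
    if frequency == "high" and predictability == "low":
        # Guide judgment: optional suggestions practitioners may override
        return "suggest: case examples, Q&A databases, heuristics"
    if frequency == "low" and predictability == "low":
        # Support judgment: route the rare, novel problem to an expert
        return "connect: communities of practice, expert locators, help desks"
    # Low frequency / high predictability: impact rarely justifies the cost
    return "no intervention: financial impact too small to warrant it"

print(recommend_intervention("high", "low"))
```

The quadrants are discussed in turn in the sections below; in practice, as the later examples show, a single group of practitioners typically needs interventions from more than one quadrant.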

Three Steps to Reducing Variability


High Frequency/High Predictability.

When practices are high in frequency and predictability, variability can be reduced by standardizing the practices with such established methods as rules, templates, decision algorithms and process manuals.

Ford’s Best Practice Replication Process, for example, is an intranet-based process for collecting, distributing and tracking the value of practices at 38 manufacturing plants.8 Ford found dozens of performance differences between plants, such as the time it takes to install wheels on a car: It took 1.4 minutes in Atlanta but up to 4 minutes in other plants, so the Atlanta plant manager submitted detailed instructions on how to replicate that location’s speed. For each of 150 process chunks like wheel installation, plant managers described the practices at their factory for completing the process, and Ford identified a benchmark that each plant should strive to meet. There is room for innovation as older practices are replaced by new techniques, but at any given time, the system recommends one best way to perform each practice. Participation is optional but strongly encouraged through incentives: The formal performance-ranking system used to evaluate plant managers includes an assessment of whether they are using the tools.

Intel Corp. takes a stricter approach, making the cloning of its plants in every detail mandatory.9 When variations between plants in the early 1980s hurt productivity and product quality, Intel created the Copy Exactly program. In order to discourage experimentation at individual factories, hundreds of rotating technicians, called “seeds,” transfer manufacturing techniques from plant to plant. The techniques include not only major process steps but also background details such as the color of workers’ gloves and the paint on the wall. This isn’t micromanagement; small variations really matter in chip production. In one case, for example, technicians found that identical tools were producing different defect rates because of the way in which the workers were wiping the tool (circular versus back and forth). The technicians determined which direction worked better and made it the rule.

In both of these cases, the relatively high level of predictability allows practitioners to reduce variability by removing much of the judgment from the process. Using this kind of intervention means that while change is possible, it is not done lightly. At Intel, any new ideas must be submitted to a committee called the “Process Change Control Board” and must have demonstrated value. More flexible interventions, however, are required for less predictable practices.

High Frequency/Low Predictability.

When practices are frequent but unpredictable, variability can’t be reduced through strict rules. Instead, practitioners should be given the option of following suggestions based on case examples, a Q&A database or a set of heuristics.

Managers face many recurrent tasks whose outcome is far from assured — preparing employees to outgrow their current jobs, for example, or trying to turn a team into a meritocracy. Those were two important aspects of the manager’s role discussed during ManagerJam, a 48-hour online event hosted in 2002 by IBM Corp. for 32,000 of its managers.10 As participants contributed their ideas, event moderators culled those that appeared suitable for immediate implementation. The managers then rated each one as “ready now” or “almost ready” (requiring further analysis or debate). The key is that adoption of the ideas was not mandatory; although 60% of managers claimed they would apply the best practices in their work, they could choose which practices seemed most relevant and adapt them to their own situations.

Partners HealthCare System Inc. developed a more systematic approach to common problems that still allows room for judgment.11 Partners, a health care provider made up of multiple institutions and based in Boston, noticed that its physicians ordered lab tests and prescribed drugs inconsistently. It determined that more than half of the lab tests being ordered were clinically unnecessary and that inappropriate prescriptions were the cause of more than half of the adverse drug reactions experienced by patients under the care of a Partners doctor. The company then created an online system to synthesize ordering guidelines, which reduced serious medication errors by 55% and saved money spent on unnecessary care. Individual doctors are relieved of the responsibility for staying informed about the latest facts on 3,000 medications and 1,100 laboratory tests: The system provides them with suggestions for appropriate prescriptions and lab tests that have been generated by committees of physicians focused on particular domains such as radiology.

Unlike the Ford and Intel systems, this one lets practitioners use their own judgment: Physicians can reject suggestions because of mitigating circumstances. The sheer number and unpredictability of potential mitigating circumstances make it impossible to turn the guidelines into iron laws of practice, but the high number of orders entered — 13,000 per day at one hospital alone — and the savings that result from fewer tests and better prescriptions mean that the system is well worth the expense of developing it.

Low Frequency/Low Predictability.

When practices are unpredictable and infrequent, it is rarely worth the effort to provide guidelines indicating alternative responses. In this case, the most effective way to reduce variability is by providing access to expert advice. That can be done by establishing communities of practice, creating expert-locator systems and setting up help desks. Practitioners may also want to complete after-action reviews or simulate hypothetical recurrences to retain the lessons learned from a particularly important practice.12

In the Partners system, physicians sometimes use videoconferencing to consult with experts in real time. For example, if a patient in a remote area needs to be diagnosed rapidly, specialists can interview and observe the patient on a video screen, and then recommend the appropriate treatment.

Similarly, BP’s virtual team network has been used for several years to reduce variability in such metrics as the time required to solve problems on offshore rigs.13 The network of computers allows people to use electronic yellow pages in order to find functional experts and get their advice in real time using videoconferencing and electronic blackboards. Problems get solved faster and fewer helicopter trips to offshore rigs are needed. For example, when a mobile drilling ship experienced equipment failure in 1995, an expert analyzed the hardware by video over a satellite link and quickly diagnosed the problem; the operations were up and running in a few hours instead of the days it would have taken to fly out the expert by helicopter or send the ship back to port.14

The challenge, of course, is to use experts’ time wisely. In order to save experts the trouble of offering advice repeatedly on similar problems, it’s important to document their knowledge in process steps and case examples and to use this intervention only for truly uncommon situations.

Creating an Integrated Program

As the Partners case suggests, practitioners often need to combine different types of interventions in order to deal with different types of practice. Partners could broaden its repertoire even further if it wished to focus on highly predictable practices: Children’s Hospital and Health Center in San Diego, for example, has recently created “pathways” that standardize medical care for certain conditions like asthma attacks.15

In many cases, use of the full range of interventions will have the greatest impact on the effectiveness of a group of practitioners — and that will become increasingly true for employees focused on services at companies like GE and IBM, where the historical emphasis on standardized processes in support of manufacturing will no longer be suitable. Consider how practitioners employ a range of options in three examples. (For an overview, see “Taking a Balanced Approach.”)

Taking a Balanced Approach


Geoscientists at Shell.

A group of about 125 geoscientists at Royal Dutch/Shell Group, dubbed the “Turbodudes” after a type of geological structure that they analyze, handles similar technical problems, but as individual members of project teams working at different locations.16 Together they work to reduce the variability of site-development costs, primarily by focusing on one critical low-frequency/low-predictability decision — that is, whether or not to drill at a specific site. They seek each other’s advice on a weekly basis and collaboratively interpret data to determine the likelihood of successful drilling. By means of such team interactions, they have reduced variability in the drilling success rate at Shell sites, which — by avoiding unnecessary drilling and testing at three sites per year — has resulted in an estimated savings of $120 million annually.

On the basis of their frequent analyses, these scientists also began to create categories of structures to facilitate their rapid identification. Because these categories are not entirely predictive of drilling outcomes, they serve mainly as guidelines for action rather than as strict templates. However, the group did create templates for frequently occurring activities that exhibit greater predictability. For example, they routinely assess the volume of oil in reservoirs, and in the past such estimates varied wildly. By creating a standard methodology that is now used by all the Turbodudes, they have greatly reduced the variability of these estimates.

Urban Planners at the World Bank.

The Urban Services Thematic Group of the World Bank comprises about 100 urban planners whose main focus is undertaking projects to upgrade slums.17 This group has worked to reduce regional variability in the outcome of its efforts — as indicated by the quality of basic services like water, sanitation, waste collection and street lighting.

Like the Shell geoscientists, the urban planners spend a lot of time helping each other solve infrequent and highly unpredictable problems. In one case, the bank’s country director for Bangladesh encountered political opposition to the bank’s proposal to upgrade slums; opponents advocated razing the slums instead of upgrading them. In response, government representatives posed questions about the alternatives to the World Bank director, the bank’s local urban specialist and Urban Services members worldwide. From the latter group, they soon received input corroborating the value of upgrading slums. Urban Services respondents asserted that clearing slums and resettling the inhabitants had never worked on a sustainable basis in any of the world’s developing economies and typically had cost 10 to 15 times as much as upgrading. In contrast, upgrading slums had been successful in dozens of locations.

The urban planners are also codifying their collective knowledge of more-routine practices. To prepare for a highly unpredictable practice like designing street-addressing systems (slums typically grow haphazardly, resulting in a maze of unpaved footpaths), the group held a workshop to glean valuable knowledge from the experience of 10 African countries; the group then documented the innovations and created a how-to manual in four languages. For a more predictable activity, such as designing and implementing a large-scale upgrading program, topics like how to appropriately staff a team are outlined in video and text in an electronic toolkit on compact disc.

Financial Advisors at Clarica.

A group of 200 independent advisors working for Waterloo, Ontario-based Clarica Life Insurance Co. is part of “The Advisor Network.” A main objective of this group is to reduce the variability in close rates and the value per sale among the advisors in order to serve more customers and increase the revenue per customer. Less experienced members gain knowledge that can help advance their careers, and seasoned advisors are able to test themselves with innovative thinking.

In infrequent and unpredictable situations, an advisor may start an online dialogue to elicit advice from peers. In one example, an advisor posted a case to the discussion database, outlining the financial situation and goals of a young couple that had received a substantial cash gift. Other advisors commented, debating the pros and cons of whole life insurance, term life insurance, mortgage insurance and money market options, as well as their tax implications. The advisor then met with his clients, discussed options generated by the dialogue and reported the result to the advisor community: two new life policies, an increase in an existing policy, two retirement fund contributions, a term deposit and a financial plan. His value per sale was much higher than it would have been had he dealt with the situation alone.

To prepare for situations that happen more often, the advisors read and discuss case examples. In one instance, the group documented half a dozen cases illustrating the common challenges of approaching affluent clients. Workshop discussions have led to consensus on rules of thumb for overcoming common barriers to certain types of sales. In addition, advisors have created templates and tools for the routine parts of their job, including a target-market checklist for identifying potential clients, a business-letter template for approaching prospective customers and a PowerPoint presentation template for conducting introductory interviews. Such aids help newer agents find the right prospects and improve their value per sale.

AS THESE EXAMPLES show, groups of practitioners trying to reduce variability don’t need to decide between rigidly codifying processes and exercising individual judgment. Depending on the frequency and predictability of the practices they’re trying to improve, they can use a combination of interventions that, taken together, allow the group to take advantage of what they’ve learned yet continue to innovate. For executives focused on reducing performance variability in all its varied forms within their organizations, the flexibility provided by a practice-based approach offers the best combination of uniformity and discretion.

References

1. S. Kwiecien and D. Wolford, “Gaining Real Value Through Best-Practice Replication: How Ford Motor Company Counts the Returns on Knowledge Efforts,” Knowledge Management Review 4 (March–April 2001): 12–15.

2. J.S. Brown and P. Duguid, “Balancing Act: How To Capture Knowledge Without Killing It,” Harvard Business Review 78 (May–June 2000): 73–80; and J.S. Brown and P. Duguid, “Creativity Versus Structure: A Useful Tension,” MIT Sloan Management Review 42 (summer 2001): 93–94.

3. E. Wenger, R. McDermott and W.M. Snyder, “Cultivating Communities of Practice” (Boston: Harvard Business School Press, 2002), 38.

4. G. Szulanski and S. Winter, “Getting It Right the Second Time,” Harvard Business Review 80 (January 2002): 62–69. For more details, see G. Szulanski, “Sticky Knowledge: Barriers to Knowing in the Firm” (Thousand Oaks, California: Sage Publications, 2003).

5. M.J. Benner and M. Tushman, “Exploitation, Exploration and Process Management: The Productivity Dilemma Revisited,” Academy of Management Review 28 (April 2003): 238–256; and M. Benner and M. Tushman, “Process Management and Technological Innovation: A Longitudinal Study of the Photography and Paint Industries,” Administrative Science Quarterly 47 (December 2002): 676–706.

6. S.E. Prokesch, “Unleashing the Power of Learning: An Interview With British Petroleum’s John Browne,” Harvard Business Review 75 (September–October 1997): 146–168.

7. It is beyond the scope of this article to discuss the process of benchmarking, but many others have explored that topic in detail. See, for example, F.G. Tucker, S.M. Zivan and R.C. Camp, “How To Measure Yourself Against the Best,” Harvard Business Review 65 (January–February 1987): 8–10; and R.H. Hayes and G.P. Pisano, “Beyond World-Class: The New Manufacturing Strategy,” Harvard Business Review 72 (January–February 1994): 77–86.

8. Kwiecien and Wolford, “Gaining Real Value.”

9. D. Clark, “Inside Intel, It’s All Copying — In Setting Up Its New Plants, Chip Maker Clones Older Ones Down to the Paint on the Wall,” Wall Street Journal, Oct. 28, 2002, p. B1.

10. L. Dorsett, T. O’Driscoll and M.A. Fontaine, “Redefining Manager Interaction at IBM: Leveraging Massive Conversations To Exchange Knowledge,” Knowledge Management Review 5 (September–October 2002): 24–28.

11. T. Davenport and J. Glaser, “Just-in-Time Delivery Comes to Knowledge Management,” Harvard Business Review 80 (July 2002): 107–111.

12. J. March, “Learning from Samples of One or Fewer,” in “The Pursuit of Organizational Intelligence” (Malden, Massachusetts: Blackwell Publishers, 1999): 137–155.

13. Prokesch, “Unleashing the Power of Learning.”

14. T.H. Davenport and L. Prusak, “Working Knowledge” (Boston: Harvard Business School Press, 1997), 21.

15. B. Wysocki, Jr., “Follow the Recipe: Children’s Hospital in San Diego Has Taken the Standardization of Medical Care to an Extreme,” Wall Street Journal, April 22, 2003, p. R4.

16. Wenger, et al., “Cultivating Communities,” 94–95.

17. W.E. Fulmer, “The World Bank and Knowledge Management: The Case of the Urban Services Thematic Group,” Harvard Business School case no. 9-801-157 (Boston: Harvard Business School Publishing, 2001).