Strategic Innovation and the Science of Learning

Entrepreneurship is a competence in only the rarest of corporations. A pity, as its absence has led to the death of many revered companies. In an economic environment characterized by dramatic change, the ability to explore emerging opportunities by launching and learning from strategic experiments is more critical to survival than ever.

A strategic experiment is a risky new venture within an established corporation. It is a multiyear bet within a poorly defined industry that has no clear formula for making a profit. Potential customers are mere possibilities. Value propositions are guesses. And activities that lead to profitable outcomes are unclear.

Most executives who have been involved in strategic experiments agree that the key to success is learning quickly. In a race to define an emerging industry, the competitor that learns first generally wins. Unfortunately, habits embedded in the conventional planning process disable learning. A better approach, theory-focused planning, differs from traditional planning on six counts.

The Need for Strategic Innovation

In the late 1990s, Corning Inc. began to explore a possibility far beyond its existing lines of business. The strategic experiment, Corning Microarray Technologies (CMT), sought to usher in a new era in genomics research. (See “About the Research.”) DNA microarrays, glass slides with thousands of tiny DNA samples printed on their surfaces, were a key piece of experimental apparatus for measuring DNA interactions in large sample sizes. Seeking to disrupt a status quo that offered researchers a devil’s choice between time-consuming self-printing and the purchase of an expensive closed-standard system, CMT sought to introduce reliable, inexpensive microarrays as part of a new open-standard system.

With the anticipated explosion in genomics research that followed the completion of the mapping of the human genome, CMT expected a robust market. Still, the unknowns were daunting. Would a standard compatible with CMT’s product be widely adopted? Would Corning’s expertise in adhering tiny quantities of fluid to glass be readily transferred to microarrays? Could CMT lower costs to a point that compelled laboratories to invest in entirely new systems for genomics experimentation?

During recent years of economic malaise, many corporations have decided against such strategic experiments. Only a few have taken significant risks, recognizing that cycles of boom and bust mask a fundamental truth: The world is always changing. The pace of change does not mirror the manic financial markets; it is steadier and surer. Globalization brings new markets, nontraditional competitors and new sources of uncertainty, such as armed conflict in the Middle East and the entry of China into the World Trade Organization. More subtle changes are also important, including the aging of the population in developed economies and the rise of a new middle class in emerging ones. This dynamic environment affects industries new and old, high tech and low tech, in manufacturing and services. Unanticipated opportunities emerge just as imitators neutralize existing competitive advantages.

The life of any business is finite.1 For companies to endure, the drive for efficiency must be combined with excellence in entrepreneurship. Through the process of strategic innovation, new businesses must emerge before old ones decay. As Ray Stata, chairman of Analog Devices Inc. (ADI), observes, “Everything has a life, and you always have to be looking beyond that life. The primary job of the CEO is to sense and respond … with the benefit of inputs from the organization … and to be an encouraging sponsor for those who see the future.”

Despite some commonalities, strategic innovation differs from technological or product innovation. New technologies do not always yield successful products, nor are new products always strategically significant. Furthermore, some companies, such as Southwest Airlines Co., succeed through innovative strategies alone — without much innovation in either the underlying technologies or the products and services sold to customers.

A strategic innovation is a creative and significant departure from historical practice in at least one of three areas.2 Those areas are design of the end-to-end value-chain architecture (for example, Dell Inc.’s direct-sales model); conceptualization of delivered customer value (IBM Corp.’s shift from selling hardware and software to selling complete solutions); and identification of potential customers (Canon Inc.’s pioneering focus on developing photocopiers for small offices rather than large corporations).

Strategic innovation involves exploring the unknown to create new knowledge and new possibilities. It proceeds with strategic experiments to test the viability of new business ideas.

The Learning Imperative

In hindsight, executives involved with strategic experiments would no doubt agree on this: If there is one thing you can expect, it is that your initial expectations are wrong.3 For example, when AT&T consulted McKinsey & Co. in the mid-1980s for advice on the cellular-telephone market, the consultants concluded that the worldwide potential was 900,000 units. Today, 900,000 new mobile-phone subscribers sign up every three days.4 When information is scarce and the future unknowable, intelligent people may make poor judgments. The error magnitudes for market-potential estimates are often measured in multiples rather than percentages. Establishing an expenditure level that is even in the right ballpark is nearly impossible on the first go-round.

To improve initial expectations and resolve the many unknowns associated with any new business, management teams must learn.5 That learning must come through trial and error. The alternative — sufficient research, study and analysis to generate the perfect plan — is not practical for strategic experiments.

How does one learn by trial and error? Scientists have given us the scientific method: Design an experiment, predict outcomes on the basis of a hypothesis, measure outcomes, compare outcomes to predictions, and draw conclusions about the hypothesis based on the comparison. The last step is at the heart of the learning process.
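That loop can be sketched in code. The following is an illustrative sketch, not from the article: the function names, the demand hypothesis, and the 10% support threshold are all assumptions invented for the example.

```python
# Illustrative sketch of one pass through the predict-measure-compare loop.
# The hypothesis, numbers and 10% support threshold are invented for the example.

def learning_cycle(hypothesis, predict, measure):
    """Run one predict-measure-compare iteration and report the lesson."""
    predicted = predict(hypothesis)  # outcome expected if the hypothesis holds
    actual = measure()               # observed outcome of the experiment
    gap = actual - predicted         # the disparity carries the lesson
    supported = abs(gap) <= 0.1 * abs(predicted)
    return {"predicted": predicted, "actual": actual,
            "gap": gap, "hypothesis_supported": supported}

# Hypothetical theory: each salesperson adds 50 units of demand.
result = learning_cycle(
    hypothesis={"units_per_salesperson": 50, "salespeople": 4},
    predict=lambda h: h["units_per_salesperson"] * h["salespeople"],
    measure=lambda: 120,             # stand-in for the observed outcome
)
print(result["gap"], result["hypothesis_supported"])  # -80 False
```

Note that a large gap assigns no blame; under reliable unpredictability it signals that the theory, not necessarily the team, needs revision.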

In the ideal, scientific experiments meet five criteria: (1) results are available quickly, (2) results are unambiguous, (3) experiments can be isolated from outside influences, (4) experiments are inexpensive, and (5) they are repeatable. But strategic experiments are hardly ideal. They meet none of those criteria. Feedback may not be available for years, results are ambiguous, key variables cannot be isolated, and the experiments are too expensive to repeat.

This does not mean there is a better framework for ensuring timely learning, only that learning as strategic experiments proceed is difficult. Hence, many executives cultivate an experiment-and-learn attitude in themselves and among colleagues.

Still, lessons are not magically revealed, even to those with open minds. Learning requires conscious effort. It is an active pursuit, and the planning cycle provides the natural context for it. Alas, conventional planning approaches create barriers to learning.

The Conventional Planning Mind-Set

Understandably, executives view a rigorous financial-planning process as a crucial asset and are loath to alter it. A performance-oriented culture, one that holds people accountable for the numbers in the plans, is frequently touted as a hallmark of successful companies.6 Even corporations that give leaders of strategic experiments freedom to create entirely different organizations — with different leadership styles, hiring practices, values and operating assumptions — often insist that budgeting and performance reviews fall under the established planning system.

Although conventional planning systems do not create barriers to learning for all types of innovation, planning approaches can and should be altered within strategic experiments.7 The bedrock assumptions underlying conventional planning approaches do not apply. Historically, planning and control systems were designed to implement a proven strategy by ensuring accountability under the presumption of reliable predictability.8 Planning systems for strategic experiments, by contrast, should be designed to explore future strategies by supporting learning, given the unpleasant reality of reliable unpredictability.

The difference between those opposing mind-sets becomes clear in the evaluation of outcomes. The first step in evaluating an outcome is to compare it to the prediction made in the plan. Any disparity can be explained in one of two ways: either the strategy was improperly implemented or the prediction was wrong. If the former holds true, someone must be held accountable. But if the prediction was wrong, future expectations must be adjusted given the new information. An accountability mind-set is so ingrained in many corporations that disparities between predictions and outcomes are almost always attributed to management performance. The performance expectation (the prediction) is sacred.

In a mature business, that is reasonable. But a presumption of reliable predictability is not an appropriate premise for planning within strategic experiments. When the future is unknowable, the foremost planning objective must be learning, not accountability. Certainly, managers must be accountable, but on a more subjective basis. How quickly are they learning? How quickly are they responding to new information?

Despite reliable unpredictability, predictions must be made. Learning follows from the diligent analysis of disparities between predictions and outcomes, with specific attention to the stories, models or theories upon which the predictions are based. Theory-focused planning provides the needed structure for such analysis. It leads to improved theories and improved predictions — proof that learning is happening. Better predictions, in turn, lead to better choices about strategy and funding levels.

A conventional planning mind-set, however, can derail a strategic experiment. For example, Corning Microarray Technologies encountered several unexpected barriers to getting to market. No supplier could make DNA shipments in the necessary quantities with sufficient quality and reliability. In early trials, processes for manufacturing microarrays failed to meet quality and reliability standards generally accepted for Corning products.

That should have resulted in reconsideration of early choices about the manufacturing process and reevaluation of expectations. However, operating under the presumption of reliable predictability and within a culture that emphasized numbers, the general manager felt pressure to turn around a business he saw as underperforming. No time for reevaluation; only an urgency to work harder. Tensions escalated as the team failed to catch up. Finally, senior management stepped in, replaced several managers, reset expectations (of financial results, time to market and quality) and revisited basic questions about the approach to manufacturing microarrays.

Six Changes Make Theory-Focused Planning Work

Theory-focused planning requires six alterations to the conventional planning process. The first three changes relate to building a theory to make predictions (the forward-looking part of planning).

Change No. 1: Level of Detail

Instead of demanding a lot of detail, limit focus to a small number of critical unknowns.

In planning for an established business, incorporating details such as revenue breakdowns by product line or by region is useful. Fine-grained comparisons between predictions and outcomes can help isolate and resolve problems. But such detail is unrealistic for a strategic experiment. The unknowns are too great. Further, the lessons are not in the details but in a handful of critical unknowns that can make or break a business.

Critical unknowns generally fall into three categories: market, technology and cost unknowns. For example, there were many unknowns for ADI when, in the early 1990s, it pursued the commercialization of a new semiconductor technology, microelectro-mechanical machines (MEMS) — chips with tiny moving parts. However, three unknowns were clearly the most crucial:

  • Most critical market unknown: The most promising early application for MEMS was in new systems for launching automotive air bags. But would automakers risk a new approach?
  • Most critical technology unknown: Could MEMS be manufactured at levels of reliability sufficient for an automotive-safety application?
  • Most critical cost unknown: Could manufacturing yields be improved to levels consistent with other semiconductor manufacturing processes?

No amount of a priori analysis could resolve those unknowns, only experimenting and learning.

ADI’s conventional planning did not emphasize a small number of critical unknowns. Like most corporations’ planning, it focused on detailed projections of revenues, margins and profitability; planning discussions revolved around evaluations of those metrics. In spite of that, the critical unknowns were eventually resolved favorably, and today MEMS is profitable. Still, with a planning system that supported learning, the major uncertainties could have been resolved sooner, with fewer crises.

Change No. 2: Communication of Expectations

Instead of focusing on the predictions themselves, focus on the theory used to generate the predictions and the theory’s underlying assumptions.

Traditionally, predictions are recorded as numbers — usually precise ones. (More sophisticated plans for new ventures may include a range or perhaps a best-case, expected-case and worst-case scenario.) But in planning for a strategic experiment, the focus should be on the assumptions underlying the predictions, not on the predictions themselves. The most clearly communicated and detailed item in any plan for a strategic experiment should be a thorough description of the theory used to generate the predictions. Without a shared story about how a strategic experiment is expected to work, a management team cannot learn. Managers will not come to the same conclusions as new information is revealed.

Currently, the theory and its underlying assumptions are lost between the time when predictions are made and the time when those predictions are compared with outcomes, usually months later. The culprit is the ubiquitous spreadsheet. When you open a spreadsheet, you immediately see numbers — that is, the predictions themselves. To understand the logic behind those numbers, you would have to dig deep into the underlying equations. And after a few weeks, even the person who built the spreadsheet would find that difficult.

One approach to telling a story about how a business is expected to work is the influence or bubble-and-arrow diagram, which shows how multiple variables influence outcomes. (See “Drawing Influence Diagrams.”) The influence diagram should convey how each major category of spending — such as research, product development, manufacturing, marketing and sales — ultimately affects revenues. The most important spending categories to include are those directly related to the critical unknowns. If possible, each bubble on the diagram should represent something measurable. Thus, a framework is established for gathering evidence that confirms or contradicts each cause-and-effect relationship.

Drawing Influence Diagrams

In 2001, Thomson Corp.’s Thomson Learning launched its own strategic experiment — Universitas 21 Global (U21G). Pursued in partnership with a worldwide consortium of universities, U21G aimed to usher in a new era in higher education. It was conceived as a university with no campus and no classrooms, with all operations conducted entirely online. When it opened in May 2003, U21G offered only an MBA degree and recruited from a few major Asian cities, but its leaders expect to add new programs and expand across the continent within a few years.

For U21G, faculty salaries will be a significant expense, and the effect of student-to-faculty ratio on student satisfaction in the online environment is a critical unknown. Theoretically, online learning offers the opportunity for a single faculty member to reach a wider audience. However, students may be more demanding of faculty than at a traditional university, seeking personal responses to e-mail on issues such as career advice or clarification of course concepts. What assumption can one make about adding extra faculty?

The relationship between the two factors is unknowable in advance. It cannot be extrapolated from experience at traditional institutions: It must be discovered. As the U21G provost commented, “We have a lot of experimentation to do … to offer online instruction in ways that allow us to have a higher student-to-faculty ratio without sacrificing quality. I cannot say what the student-to-faculty ratio will be. I can only speculate.” More is unknown than simply the appropriate student-to-faculty ratio to achieve high student satisfaction. The very nature of the relationship is unknown.9

An influence diagram can capture a basic hypothesis about the relationship, as well as a theory of how student satisfaction ultimately affects revenues. The theory can be stated as follows: Adding faculty reduces the student-to-faculty ratio, which increases student satisfaction, which enhances the perceived attractiveness of U21G in the market, which leads to higher enrollments and higher revenues. The diagram also can show how increases in other major budget categories related to critical unknowns might have an impact on revenues — for instance, how an increase in sales and marketing spending might increase perceived product attractiveness and therefore enrollments. (See “Predicting an Uncertain Future.”)
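The verbal theory above can be encoded as a small directed graph, which makes the causal chain explicit and checkable. This is an illustrative sketch only: the node names paraphrase the article, and the graph structure is an assumption made for the example.

```python
# Illustrative encoding of the U21G theory as a directed graph.
# Node names paraphrase the article; the structure is assumed for the example.

influence_diagram = {
    "faculty_spending": ["student_to_faculty_ratio"],
    "student_to_faculty_ratio": ["student_satisfaction"],
    "student_satisfaction": ["perceived_attractiveness"],
    "marketing_spending": ["perceived_attractiveness"],
    "perceived_attractiveness": ["enrollments"],
    "enrollments": ["revenues"],
}

def downstream(node, graph):
    """Every measure ultimately affected by the given node."""
    seen, stack = set(), list(graph.get(node, []))
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(graph.get(n, []))
    return seen

# Each major spending category should ultimately reach revenues:
print("revenues" in downstream("faculty_spending", influence_diagram))   # True
print("revenues" in downstream("marketing_spending", influence_diagram)) # True
```

Because each bubble names something measurable, the same structure later tells the team which cause-and-effect links the evidence supports or contradicts.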

Predicting an Uncertain Future

Change No. 3: Nature of Predictions

Instead of making specific numerical predictions for specific dates, predict the trends.

In a typical planning cycle, managers are asked to agree to a top-line number and a bottom-line number for the following year. For a strategic experiment, there is a better approach. Because any single-point prediction is certain to be wrong, and because new ventures are dynamic, it makes more sense to focus on trends. The rate and direction of change of a performance measure is usually a more important piece of information than its current value.

An easy way to incorporate the prediction of trends into plans is to supplement influence diagrams with trend graphs. Because such graphs represent many predictions over small intervals of time, they may appear to ask a great deal of planners. But the predictions do not require nearly the same level of accuracy as plans for a mature business. The shape of the curve is what is important. Simply choosing whether weeks, months or quarters is the right label for the x-axis (time) and estimating the magnitude of expected change (is a 10% change expected, a doubling, an increase by a factor of 10?) for the y-axis (the performance measure) is good enough. The purpose of graphing expected trends is to provide a quick warning if the actual trend is significantly different. If it is, say, a different direction or much faster or slower than expected, a change in strategy may be necessary.
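A quick-warning check of this kind can be sketched as follows. The sample series, the tolerance, and the messages are assumptions for illustration; as the text suggests, only the direction and rough magnitude of the trend are compared, never point values.

```python
# Illustrative sketch: flag when an actual trend diverges from the predicted one.
# The tolerance and the sample series are assumptions for the example.

def trend(series):
    """Net change over the series: sign gives direction, size gives pace."""
    return series[-1] - series[0]

def trend_warning(predicted, actual, tolerance=2.0):
    """Compare only direction and rough magnitude, not point values."""
    p, a = trend(predicted), trend(actual)
    if p * a < 0:
        return "direction reversed: revisit the theory"
    if abs(a) > tolerance * abs(p) or abs(a) < abs(p) / tolerance:
        return "pace is off: update expectations"
    return "on track"

# Predicted satisfaction dips then recovers; actual keeps falling.
print(trend_warning(predicted=[70, 65, 72, 80], actual=[70, 62, 55, 50]))
# direction reversed: revisit the theory
```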

To understand how combining influence diagrams with trend predictions results in a more complete theory, consider how U21G might have predicted the performance trends that could follow an increase in faculty. Clearly, an increase will immediately decrease the student-to-faculty ratio. Beyond that, the supposition is that it will initially decrease student satisfaction — if new faculty struggle in the online environment for a while. It is the shape of the plot of actual outcomes over time, rather than any single student-satisfaction score, that will demonstrate whether this worse-before-better hypothesis is correct. To evaluate the long-term impact of increased faculty, U21G would have to wait for the trend to play out. The remaining trend graphs, for perceived product attractiveness and enrollments, indicate a theory that the market reaction is not instantaneous — information about student satisfaction may be absorbed slowly by the market.

The second set of changes to traditional planning relate to testing the theory by comparing the predictions with actual outcomes (the evaluative part of planning).

Change No. 4: Frequency of Strategic Reviews

Instead of reviewing outcomes annually to reevaluate fundamental business assumptions, do so monthly — or more frequently as necessitated by new information.

In mature businesses, outcomes may be reviewed as often as weekly. However, such reviews are generally quick status checks to identify any variances that require immediate attention for getting back on plan. For most corporations, it is only during the major annual planning cycle that the strategy of the business is reconsidered. Between planning periods, management teams focus on execution.

If learning as quickly as possible is a primary goal in managing a new venture, the strategy itself — in particular, the critical unknowns highlighted on the influence diagram — must be reevaluated at least monthly. Leaders must be prepared to make major course changes at each review. To many, a monthly strategic review will seem onerous. But the time required for each review is much less than for the typical annual-planning exercise because it addresses only the critical unknowns.

More frequent strategic reviews would have been particularly helpful to a multinational corporation we will call Capston-White, which launched a venture to commercialize services for managing printing, imaging and copying assets within large organizations. After about two years, the management team decided that to be credible, the company needed a wide range of offerings, from maintenance to complex consulting services. Outside advisers confirmed the validity of the one-stop-shop strategy, and additional resources were committed.

Tremendous hiring followed, plus construction of a sophisticated IT system to support the expected growth. However, the most critical assumption — whether the market was really ready for expanded service — was not quickly tested. IT executives — the potential customers — claimed they were interested in managing their printing and imaging assets more sensibly, but in reality they had more pressing concerns. One executive associated with the venture explained: “If you asked CIOs in the late 1990s, they were concerned with two big things, the Y2K bug and the euro. Plus they were worried about getting a hot new Internet infrastructure up and running.” So the new service offerings did not attract customers as expected.

Nonetheless, driven by a culture of accountability to the plan and by an assumption of reliable predictability, the venture’s general manager kept investing heavily, expecting imminent growth despite all evidence to the contrary. The annual planning rhythm and the small size of the venture relative to the corporation caused the disappointing revenues to escape bold action from senior management for nearly two years. When executives finally made dramatic budget cuts and changes in leadership, the cost was much higher than it would have been with more frequent reviews.

Change No. 5: Perspective in Time

Instead of reviewing only current-period outcomes, consider the history of the strategic experiment in its entirety and look at trends over time.

If the format for predicting is a trend graph, then the same format for reporting outcomes must be used. But in many corporations, little previous history is considered during planning reviews. Often only the results from the most recent period are reported, along with year-to-date figures. If historical data are used at all, they go into a regression analysis to forecast revenues.

But lessons are embedded in history. Each performance measure identified on the influence diagram should be plotted over time. Updated plots should be regularly compared with predicted trends. In that way, rates of change are readily visible, and the shape of each plotted curve enhances intuition as predictions are updated. Companies can avoid the dangerous mind-set that one finance executive described: “With new ventures, you have to have a short memory, because you know you are going to fail a lot.”

Change No. 6: Nature of Measures

Instead of relying on a mix of financials and nonfinancials to measure outcomes, focus on leading indicators.

Traditional plans emphasize financial outcomes. But financial outcomes are highly ambiguous in new ventures — profitability, for example, is many years away, and precision about the magnitude of early losses is difficult. To learn as quickly as possible, plans for strategic experiments should emphasize leading indicators, which provide the first clues to whether the assumptions in the plan are realistic. (See “From Verbal Theory to Diagrams.”)

With an influence diagram, it is easy to identify the leading indicators: they are the measures closest to the bottom and closest to the bubbles for key budget categories. For example, the influence diagram for U21G indicates that student-to-faculty ratio and student satisfaction are leading indicators.
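Reading the leading indicators off the diagram can be mechanized. In this illustrative sketch the graph and the two-step depth cutoff are assumptions; with them, the search recovers the two U21G indicators named above.

```python
# Illustrative sketch: leading indicators as the measures within a few steps
# of a spending bubble. The graph and the two-step cutoff are assumptions.

influence_diagram = {
    "faculty_spending": ["student_to_faculty_ratio"],
    "student_to_faculty_ratio": ["student_satisfaction"],
    "student_satisfaction": ["perceived_attractiveness"],
    "perceived_attractiveness": ["enrollments"],
    "enrollments": ["revenues"],
}

def leading_indicators(graph, spending, depth=2):
    """Measures reachable from a spending category within `depth` steps."""
    frontier, found = [spending], set()
    for _ in range(depth):
        frontier = [m for n in frontier for m in graph.get(n, [])]
        found.update(frontier)
    return found

print(sorted(leading_indicators(influence_diagram, "faculty_spending")))
# ['student_satisfaction', 'student_to_faculty_ratio']
```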

For New York Times Digital (NYTD), the online subsidiary of the New York Times Co., a critical unknown was the extent to which online readership would cannibalize subscriptions to the paper’s print version. Naturally, the possibility created tension between NYTD and the newspaper. To resolve the issue, NYTD conducted substantial research and discovered the unexpected. As one NYTD executive explained: “The Web opened up a whole new audience for discovery and sampling. Nobody comes on the Web and reads the whole paper in one sitting. It is a different kind of experience. So we were able to use the Web site as a vehicle to generate subscriptions to the newspaper.”

NYTD closely monitored a leading indicator of its contribution to the corporation’s overall performance: subscription gains and losses attributable to NYTD. Soon it was clear that gains outweighed losses. New readers from outside the New York metropolitan area were subscribing to the newspaper after sampling it online. Soon the Web site became the newspaper’s second most important source of new subscriptions.

Sailing Over the Edge of the Known World

Theory-focused planning is appropriate when more is unknown than known — when an industry is just emerging, no business model is established, and the uncertainties are so large that not even the basic nature of the relationships between activities and outcomes is clear. In this context, planning must support the objective of testing a strategy through experimentation. Reliable predictions are not possible.

Theory-focused planning represents a significant departure from conventional planning practices, starting with the idea that planning within strategic experiments must emphasize learning, not accountability. Unfortunately, corporations often become disciplined followers of planning protocols that do the opposite — they emphasize accountability over learning.

To establish a context for learning, theories that generate predictions must be explicitly shared, recorded and later revisited. Influence diagrams and performance-over-time graphs are two excellent tools that support the process. Additionally, learning is most likely to occur when the planning process focuses on critical unknowns, demands monthly strategic-change reviews, includes history going back to the venture’s inception, and emphasizes leading indicators.

References

1. The need to reinvent strategies during times of discontinuous change has been noted in C.K. Prahalad and G. Hamel, “Competing for the Future” (Boston: Harvard Business School Press, 1994); G. Hamel, “Strategy as Revolution,” Harvard Business Review 74 (July–August 1996): 69–82; W.C. Kim and R.A. Mauborgne, “Value Innovation: The Strategic Logic of High Growth,” Harvard Business Review 75 (January–February 1997): 103–112; and C.C. Markides, “All the Right Moves: A Guide To Crafting Breakthrough Strategy” (Boston: Harvard Business School Press, 1999).

2. This definition of strategic innovation is consistent with the perspective advanced by V. Govindarajan and A.K. Gupta, “Globalization in the Digital Age,” chap. 9 in “The Quest for Global Dominance: Transforming Global Presence Into Global Competitive Advantage” (San Francisco: Jossey-Bass, 2001); and C.K. Prahalad and G. Hamel, “Competing for the Future,” Harvard Business Review 72 (July–August 1994): 122–128.

3. This observation has been made by other researchers. For example, see C.M. Christensen, “Discovering New and Emerging Markets,” chap. 7 in “The Innovator’s Dilemma: When New Technologies Cause Great Firms To Fail” (New York: Harper Business, 1997); and Z. Block and I.C. MacMillan, “Developing the Business Plan,” chap. 7 in “Corporate Venturing: Creating New Businesses Within the Firm” (Boston: Harvard Business School Press, 1993).

4. See A. Wooldridge, “A Survey of Telecommunications,” Economist, Oct. 9, 1999, p. 1; and “Cellphone Ownership Soars,” USA Today, Aug. 2, 2002, sec. A, 1A.

5. The study of whether and how individuals or organizations can learn from experience has a long tradition in the organizational-learning literature. See, for example, D.A. Levinthal and J.G. March, “The Myopia of Learning,” Strategic Management Journal 14 (winter 1993): 95–112; B. Levitt and J.G. March, “Organizational Learning,” Annual Review of Sociology 14 (1988): 319–340; J.E. Russo and P.J.H. Shoemaker, “The Personal Challenges of Learning,” chap. 8, and “Learning in Organizations,” chap. 9, in “Winning Decisions: Getting It Right the First Time” (New York: Doubleday, 2002). However, the subject of how control systems can be improved to support learning better has not received treatment in this literature.

6. See K.A. Merchant, “Rewarding Results: Motivating Profit Center Managers” (Boston: Harvard Business School Press, 1989); and J.A. Maciariello and C.J. Kirby, “Management Control Systems: Using Adaptive Systems To Attain Control” (New York: Pearson Education, 1994).

7. This notion has also been advanced by R.G. McGrath and I.C. MacMillan, “Discovery-Driven Planning,” Harvard Business Review 73 (July–August 1995): 44–54. Theory-focused planning is based on the same premise — that conventional planning is inappropriate when more is unknown than known. However, it differs in most particulars. The discovery-driven planning approach is appropriate when the industry being entered is established, the business model well known, and the uncertainties for the venture can be reduced to identifiable operational parameters. Theory-focused planning is appropriate when the industry is emerging, the business model is experimental, and the uncertainties so great that the basic nature of the relationships between activities and outcomes is unknown.

8. See, for example, R.N. Anthony and V. Govindarajan, “Management Control Systems,” 11th ed. (New York: McGraw-Hill, 2004), which focuses on the use of planning and control systems to implement (as opposed to test) strategies. Within this context, there have been several important developments in the field of management planning and control. One example is the value in combining financial measures (“outcome measures”) and nonfinancial measures (“performance drivers”) in evaluating the performance of managers, a development that goes as far back as the “measurement project” at General Electric Co. in the 1950s. See Anthony, “Management Control Systems,” 557–564. The notion of blending financial and nonfinancial measures in the context of implementing strategies has been refined by others. See, for example, J.K. Shank and V. Govindarajan, “Strategic Cost Management: The New Tool for Competitive Advantage” (New York: Free Press, 1993) for a development of the concept of “key success factors,” or R.S. Kaplan and D.P. Norton, “The Balanced Scorecard: Translating Strategy Into Action” (Boston: Harvard Business School Press, 1996). Our objective in this article is to redefine planning and control for a different purpose — testing a highly uncertain strategy through experimentation and learning, when a priori predictions of the future are not possible.

9. Again, refer to McGrath and MacMillan’s concept of discovery-driven planning (DDP). In this example, DDP would be appropriate if the question were whether the necessary student-to-faculty ratio is 10:1. But for Universitas 21 Global, the question is much more fundamental: To what extent does student-to-faculty ratio have an impact on student satisfaction? Theory-focused planning is designed to facilitate resolution of this type of unknown.

Acknowledgments

Support for this project was provided by the Center for Global Leadership. The authors also gratefully acknowledge comments from Arvind Bhambri, Guy Hocker and Anant Sundaram.

Reprint #:

45212
