Fall 1996
Edgar H. Schein

Three Cultures of Management: The Key to Organizational Learning

Vol. 38, No. 1 · Reprint #3811 · sloanreview.mit.edu


Why do organizations fail to learn how to learn and therefore remain competitively marginal? In this article, I try to explain why organizational innovations either don’t occur or fail to survive and proliferate. Some typical explanations revolve around vague concepts of “resistance to change,” or “human nature,” or failures of “leadership.” I propose a more fundamental reason for such learning failures, derived from the fact that, in every organization, there are three particular cultures among its subcultures, two of which have their roots outside the organization and are therefore more fundamentally entrenched in their particular assumptions. Every organization develops an internal culture based on its operational success, what I call the “operator culture.” But every organization also has, in its various functions, the designers and technocrats who drive the core technologies. I call this the “engineering culture”; their fundamental reference group is their worldwide occupational community. Every organization also has its executive management, the CEO and his or her immediate subordinates — what I call the “executive culture.” CEOs, because of the nature of their jobs and the structure of the capital markets, also constitute a worldwide occupational community in the sense that they have common problems that are unique to their roles.

These three cultures are often not aligned with each other, and it is this lack of alignment that causes the failures of organizational learning that I will discuss. The question is whether we have misconceived the initial problem by focusing on organizational learning, when, in fact, it is the executive and engineering communities that must begin their own learning process if we are to meet the challenges of the twenty-first century.

Organizations Don’t Learn; Innovations Don’t Last or Diffuse

The ability to create new organizational forms and processes, to innovate in both the technical and organizational arenas, is crucial to remaining competitive in an increasingly turbulent world. But this kind of organizational learning requires not only the invention of new forms but also their adoption and diffusion to the other relevant parts of the organization and to other organizations in a given industry. Organizations still have not learned how to manage that process. The examples of successful organizational learning we have seen either tend to be short-run adaptive learning — doing better at what we are already doing — or, if they are genuine innovations, tend to be isolated and eventually subverted and abandoned.

For example, a new product development team in a large auto company worked with the MIT Organizational Learning Center to develop a capacity for learning. By using various techniques derived from “action science,” systems dynamics, and organization development, the team created high levels of openness between hierarchical levels and increased communication and trust among its members.1 This openness and trust permitted team members to reveal engineering design problems as they arose instead of waiting until they had solutions, as prior tradition in this company had dictated.2

Early identification of those problems was crucial in order to avoid later interactive effects that would require costly, complex redesigns. For example, changing the chassis design might increase weight, which might require a different tire design, which, in turn, might cause more internal noise, and so on. By revealing such problems early, the team could view the whole car more systemically and could therefore speed up redesign.

However, the pileup of early problems caused upper-level managers to make a false attribution. They considered the team to be “out of control” and ordered it to get itself back under control. The team realized that higher management did not understand the value of early problem identification and continued to use its new learning, assuming that the ultimate results would speak for themselves. The team was able to complete the design well ahead of schedule and with considerably lower costs, but, contrary to expectations, higher managers never understood the reasons for these notable results nor gave the team credit for having learned a new way of solving problems. Instead, higher managers gave themselves credit for having gotten the team “under control.” They did not consider the team to be particularly innovative and disbanded it. They subsequently encouraged several of its members and leaders to take early retirement as part of the company’s general downsizing program.

In another example, an insurance company decided to move toward the paperless office.3 Top management hired a manager to implement the new system, mandated a schedule, and provided whatever resources the manager needed to accomplish the task. In order to use the new system, employees had to learn complex new computer routines to replace their familiar work with paper. Because the company was also under financial pressure, it had instituted a number of productivity programs that caused line managers to insist that all the daily work continue to be performed even while the learning of the new system was supposed to take place. The new manager was equally insistent that the system be implemented on schedule, causing employees to short-circuit certain routines, to learn only the rudiments of the new system, and even to misrepresent the degree to which they were now working without paper.

The new manager, based on partial and incorrect information, declared that the system was implemented “on schedule” and was given public credit for this achievement. However, the result was that the employees did not learn the new system well enough to make it more productive than the old paper system. In fact, productivity was lower with the new system because it was so imperfectly implemented.

In a third example, a company decided to introduce automatic machine tools into its production process.4 The idea originated with the engineers who saw an opportunity to do some “real” engineering. The engineers and the vendors developed a proposal based on technical elegance but found that middle management would not push the proposal up to executive management unless it was rewritten to show how it would reduce costs by cutting labor. No accurate figures were available, so the team more or less invented the numbers to justify the purchase of the expensive new machines.

As the proposal worked its way up the hierarchy, the labor union got wind of the project and insisted that it would not go along unless management guaranteed that no jobs would be lost and that all the present operators would be retrained. This not only delayed the project, but, when the machines were finally installed, the production process proved to be much less effective and much more costly than had been promised in the proposal. The engineers were highly disappointed that their elegant solution had, from their point of view, been subverted and that all the operators that were to have been replaced had merely been retrained and kept on jobs that the engineers considered superfluous.

Beyond these three specific cases, the history of organizational development, change, innovation, and learning shows over and over that certain lessons seem not to take hold. Since the Hawthorne studies of the 1920s, it has been recognized that employee involvement increases both productivity and motivation. Lewin, Argyris, McGregor, Likert, and many others showed how managers who treated people as adults, who involved them appropriately in the tasks that they were accountable for, and who created conditions so employees could obtain good feedback and monitor their own performance were more effective than those who did not.5

Programs such as the National Training Labs’ sensitivity training groups and Blake’s managerial grid were, for several decades, touted as the solution to all our productivity problems, just as the human relations and participatory management programs of the forties had promised.6 Yet these and other similar programs have come and gone, and it is not at all clear what organizations learned from them or why these innovations have disappeared, only to be reinvented under new labels such as empowerment, self-managed groups, and servant leadership.

The lesson of these and similar cases is complicated. On the one hand, we can say that this is just normal life in organizations. It is just politics or just human nature. Or we can say that these projects and programs were mismanaged, by either the project teams or the executive managers above them. Or we can say that all these human-relations-oriented programs were misguided in the first place. However, I have begun to see deeper phenomena at work here.

The deeper issue is that in most organizations, there are three different major occupational cultures that do not really understand each other very well and that often work at cross-purposes. These cultures cut across organizations and are based on what have been described as “occupational communities.”7

The Concept of Culture and Occupational Communities

A culture is a set of basic tacit assumptions about how the world is and ought to be that a group of people share and that determines their perceptions, thoughts, feelings, and, to some degree, their overt behavior.8 Culture manifests itself at three levels: the level of deep tacit assumptions that are the essence of the culture, the level of espoused values that often reflect what a group wishes ideally to be and the way it wants to present itself publicly, and the day-to-day behavior that represents a complex compromise among the espoused values, the deeper assumptions, and the immediate requirements of the situation. Overt behavior alone cannot be used to decipher culture because situational contingencies often make us behave in a manner that is inconsistent with our deeper values and assumptions. For this reason, one often sees “inconsistencies” or “conflicts” in overt behavior or between behavior and espoused values. To discover the basic elements of a culture, one must either observe behavior for a very long time or get directly at the underlying values and assumptions that drive the perceptions and thoughts of the group members.

For example, many organizations espouse “teamwork” and “cooperation,” but the behavior that the incentive and control systems of the organization reward and encourage is based more on a shared tacit assumption that only individuals can be accountable and that the best results come from a system of individual competition and rewards. If the external situation demands teamwork, the group will develop some behavior that looks, on the surface, like teamwork by conducting meetings and seeking consensus, but members will continue to share the belief that they can get ahead by individual effort and will act accordingly when rewards are given out. I have heard many executives tell their subordinates that they expect them to act as a team but remind them in the same sentence that they are all competing for the boss’s job!

Cultures and Subcultures

Cultures arise within organizations based on their own histories and experiences. Starting with the founders, those members of an organization who have shared in its successful growth have developed assumptions about the world and how to succeed in it, and have taught those assumptions to new members of the organization.9 Thus IBM, Hewlett-Packard, Ford, and any other company that has had several decades of success will have an organizational culture that drives how its members think, feel, and act.

Shared assumptions also typically form around the functional units of the organization. They are often based on members’ similar educational backgrounds or similar organizational experiences, what we often end up calling “stove pipes” or “silos.” We all know that getting cross-functional project teams to work well together is difficult because the members bring their functional cultures into the project and, as a consequence, have difficulty communicating with each other, reaching consensus, and implementing decisions effectively. The difficulty of communication across these boundaries arises not only from the fact that the functional groups have different goals, but also from the more fundamental issue that the very meaning of the words they use will differ. The word “marketing” will mean product development to the engineer, studying customers through market research to the product manager, merchandising to the salesperson, and constant change in design to the manufacturing manager. When they try to work together, they will often attribute disagreement to personalities and fail to notice the deeper, shared assumptions that color how each function thinks.

Another kind of subculture, less often acknowledged, reflects the common experiences of given levels within a hierarchy. Culture arises through shared experiences of success. If first-line supervisors discover ways of managing their subordinates that are consistently successful, they gradually build up shared assumptions about how to do their job that can be thought of as the “culture of first-line supervision.” In the same way, middle management and higher levels will develop their own shared assumptions and, at each level, will teach those assumptions to newcomers as they get promoted. These hierarchically based cultures create the communication problems associated with “selling senior management on a new way of doing things,” or “getting budget approval for a new piece of equipment,” or “getting a personnel requisition through.” As each cultural boundary is crossed, the proposal has to be put into the appropriate language for the next higher level and has to reflect the values and assumptions of that level. Or, from the viewpoint of the higher levels, decisions have to be put into a form that lower levels can understand, often resulting in “translations” that actually distort and sometimes even subvert what the higher levels wanted.

So far, I have focused on the cultures that arise within organizations from the unique experiences of its members. But “occupational communities” also generate cultures that cut across organizations.10 For example, fishermen around the world develop similar worldviews, as do miners, as do the members of a particular industry based on a particular technology. In these cases, the shared assumptions derive from a common educational background, the requirements of a given occupation such as the licenses that have to be obtained to practice, and the shared contact with others in the occupation. The various functional cultures in organizations are, in fact, partly the result of membership in broader cross-organizational occupational communities. Salespeople the world over, accountants, assembly line workers, and engineers share some tacit assumptions about the nature of their work regardless of who their particular employer is at any given time.

Such similar outlooks across organizations also apply to executive managers, particularly CEOs. CEOs face similar problems in all organizations and in all industries throughout the world. Because executives are likely to have, somewhere in their history, some common education and indoctrination, they form a common worldview — common assumptions about the nature of business and what it takes to run a business successfully.

Three Cultures of Management

The learning problems that I have identified can be directly related to the lack of alignment among three cultures, two of which are based on occupational communities — (1) the culture of engineering and (2) the culture of CEOs — and (3) the culture of operators, the shared assumptions that arise in the “line units” of a given organization as it attempts to operate efficiently and safely. To understand how these three cultures interact, let us examine their shared assumptions.

The Operator Culture

The culture of operators is the most difficult to describe because it evolves locally in organizations and within operational units (see the sidebar). Thus we can identify an operator culture in the nuclear plant, the chemical complex, the auto manufacturing plant, the airplane cockpit, and the office, but it is not clear what elements make this culture broader than the local unit. To focus on this issue, we must consider that the operations in different industries reflect the broad technological trends in those industries. At some fundamental level, how one does things in a given industry reflects the core technologies that created that industry. And, as those core technologies themselves evolve, the nature of operations changes. For example, as Zuboff has persuasively argued, information technology has made manual labor obsolete in many industries and replaced it with conceptual tasks.11 In a chemical plant, the worker no longer walks around observing, smelling, touching, and manipulating. Instead he or she sits in a control room and infers the conditions in the plant from the various indexes that come up on the computer screen.

Assumptions of the Operator Culture

The operator culture is based on human interaction, and most line units learn that high levels of communication, trust, and teamwork are essential to getting the work done efficiently. Operators also learn that no matter how clearly the rules are specified as to what is supposed to be done under different operational conditions, the world is to some degree unpredictable and one must be prepared to use one’s own innovative skills. If the operations are complex, as in a nuclear plant, operators learn that they are highly interdependent and must work together as a team, especially when dealing with unanticipated events. Rules and hierarchy often get in the way in unpredicted conditions. Operators become highly sensitive to the degree to which the production process is a system of interdependent functions, all of which must work together to be efficient and effective. These points apply to all kinds of “production processes,” whether a sales function, a clerical group, a cockpit, or a service unit.

The tragedy of most organizations is that the operators know that, to get the job done effectively, they must adhere to the assumptions stated above, but that neither the incentive system nor the day-to-day management system may support those assumptions. Operators thus learn to subvert what they know to be true and “work to rule,” or use their learning ability to thwart management’s efforts to improve productivity. To understand why this happens, we must examine how two other major cultures operate in organizations.

The Engineering Culture

In all organizations, one group represents the basic design elements of the technology underlying the work of the organization and has the knowledge of how that technology is to be utilized. This occupational community cuts across nations and industries and can best be labeled the “engineering culture.”12 A colleague who works for a company driven by the engineering culture told me that in the parking lot of his company, signs say, “Maximum Speed Limit: 5.8 Miles Per Hour.” Although this culture is most visible in traditional engineering functions, it is also evident among the designers and implementers of all kinds of technologies —information technology, market research, financial systems, and so on. The shared assumptions of this community are based on common education, work experience, and job requirements (see the sidebar).

Assumptions of the Engineering Culture

Engineers and technocrats of all persuasions are attracted to engineering because it is abstract and impersonal. Their education reinforces the view that problems have abstract solutions and that those solutions can, in principle, be implemented in the real world with products and systems free of human foibles and errors. Engineers, and I use this term in the broadest sense, are designers of products and systems that have utility, elegance, permanence, efficiency, safety, and maybe, as in the case of architecture, even aesthetic appeal, but they are basically designed to require standard responses from their human operators, or, ideally, to have no human operators at all.

In the design of complex systems such as jet aircraft or nuclear plants, the engineer prefers a technical routine to ensure safety rather than relying on a human team to manage the possible contingencies. Engineers recognize the human factor and design for it, but their preference is to make things as automatic as possible. Safety is built into the designs themselves. When I asked an Egyptian Airlines pilot whether he preferred Russian or U.S. planes, he answered immediately that he liked the U.S. planes because the Russian planes have only one or two back-up systems, while the U.S. planes have three back-up systems. In a similar vein, during a landing at the Seattle airport, I overheard two engineers saying to each other that the cockpit crew was totally unnecessary. A computer could easily fly and land the plane.

In other words, a key theme in the culture of engineering is the preoccupation with designing humans out of the systems rather than into them. For example, the San Francisco Bay Area Rapid Transit (BART) uses totally automated trains. But the customers, not the operators, objected to this degree of automation, forcing management to put human operators on each train even though they had nothing to do except to reassure people by their presence.

In the earlier example of the company introducing automated machine tools into production processes, the engineers were very disappointed that the operations of the elegant machine they were purchasing would be constrained by the presence of more operators than necessary, by a costly retraining program, and by management-imposed policies that had nothing to do with “real engineering.” In my own research on information technology, I found that engineers fundamentally wanted the operators to adjust to the language and characteristics of the particular computer system being implemented and were quite impatient with the operators’ “resistance to change.” From the viewpoint of the users — the operators — not only was the language arcane, but they did not consider the systems useful for solving the operational problems.13

Both operators and engineers often find themselves out of alignment with a third critical culture, the culture of executives.

The Executive Culture

The “executive culture” is the set of tacit assumptions that CEOs and their immediate subordinates share worldwide. This executive worldview is built around the necessity to maintain an organization’s financial health and is preoccupied with boards, investors, and the capital markets. Executives may have other preoccupations, but they cannot get away from having to worry about and manage the financial survival and growth of their organization.14 (For the assumptions of the executive culture, see the sidebar.)

Assumptions of the Executive Culture

What I have identified as the executive culture applies particularly to CEOs who have risen through the ranks and been promoted to their jobs. Founders of organizations or family members appointed to these levels have different assumptions and often have a broader focus.15 The promoted CEO, especially, adopts the exclusively financial viewpoint because of the nature of the executive career. As managers rise in the hierarchy, as their level of responsibility and accountability grows, they not only have to become more preoccupied with financial matters, but also find that it becomes harder to observe and influence the basic work of the organization. They discover that they have to manage from afar, and that discovery inevitably forces them to think in terms of control systems and routines that become increasingly impersonal. Because accountability is always centralized and flows to the top of organizations, executives feel an increasing need to know what is going on, while recognizing that it is harder to get reliable information. That need for information and control drives them to develop elaborate information systems alongside the control systems and to feel increasingly alone in their position atop the hierarchy.

Paradoxically, throughout their careers, managers have to deal with people and recognize intellectually that people ultimately make the organization run. First-line supervisors, especially, know very well how dependent they are on people. However, as managers rise in the hierarchy, two factors cause them to become more “impersonal.” First, they become increasingly aware that they are no longer managing operators, but other managers who think like they do, thus making it not only possible but also likely that their thought patterns and worldview will increasingly diverge from the worldview of the operators. Second, as they rise, the units they manage grow larger and larger until it becomes impossible to personally know everyone who works for them. At some point, they recognize that they cannot manage all the people directly and, therefore, have to develop systems, routines, and rules to manage “the organization.” They increasingly see people as “human resources” to be treated as a cost rather than a capital investment.

The executive culture, thus, has in common with the engineering culture a predilection to see people as impersonal resources that generate problems rather than solutions. In other words, both the executive culture and the engineering culture view people and relationships as means to the end of efficiency and productivity, not as ends in themselves. If we must have human operators, so be it, but let’s minimize their possible impact on the operations and their cost to the enterprise.

Dysfunctional Interactions among the Three Cultures

In many industries, there is enough initial alignment among the needs of the task as defined by the operators, the needs of the engineers for reliable and efficient operations, and the needs of the executives for minimizing costs and maximizing profits so that there are no problems. But when organizations attempt to learn in a generative way, when they attempt to reinvent themselves because the technologies and environmental conditions have changed drastically, these three cultures collide, and we see frustration, low productivity, and the failure of innovations to survive and diffuse.

For example, in their research on nuclear plants, Carroll and Perin found that plant operators understood very well the interdependencies and interactions of all the systems.16 They lived in an environment that had its own ecology in which interdependence was visible and in which the management of interdependencies through teamwork was crucial to safety and productivity. But one or two levels above the plant, management saw only specific technical and financial issues, driven very much by the outside forces of the Nuclear Regulatory Commission and their own worldview as executives, a view that could best be described as a “machine bureaucracy,” while the operators’ worldview could better be described as a “sociotechnical system.”

The plants were different in how they operated, but each developed its own concept of how to improve its operations. Such improvement plans often required additional allocations of money for training and plant redesign, and also often required bending some formal rules and procedures mandated by the industry and the government. When such requirements were articulated, the engineering community focused primarily on finding standard solutions to problems, preferably solutions free of human intervention, and executive management focused primarily on money and cost control. The lack of alignment among the three cultures often led to inaction and the continuation of practices that were viewed as less efficient or effective.

In some situations, like that in an airplane cockpit, the executive and operator cultures can collide in a drastically dysfunctional way. Blake’s research has shown that some airline crashes are due to communication failures in the cockpit resulting from obsession with rank and hierarchy.17 For example, in one crash a few miles short of the runway, the flight recorder revealed that the flight engineer had shouted for several minutes that they were running out of gas, while the pilot, functioning as the CEO, continued to circle and tried to fix a problem with the landing gear. When this situation was run in a simulator, the same phenomenon occurred; the pilot was so busy with his operational task and so comfortable in his hierarchical executive position that he literally did not hear critical information that the flight engineer shouted at him. Only when the person doing the shouting was a fellow pilot of equal or higher rank did the pilot pay attention to the information. In other words, the hierarchy got in the way of solving the problem. The engineering solution of providing more warning lights or sounds would not have solved the problem either, because the pilot could easily rationalize them as computer or signal malfunctions.

At the boundary between the engineering and executive cultures, other conflicts and problems of communication arise. In my research on executive views of information technology (IT) contrasted with the views of IT specialists with an engineering mentality, the IT specialists saw information as discrete, packageable, and electronically transmittable, while executives saw information as holistic, complex, imprecise, and dynamic.18 Whereas the IT specialist saw networking as a way of eliminating hierarchy, executives saw hierarchy as intrinsic to organizational control and coordination. Whereas IT specialists saw the computer and expert systems as the way to improve management decision making, executives saw the computer as limiting and distorting thinking by focusing only on the kinds of information that can be packaged and electronically transmitted. And if executives did buy into IT implementations for reasons of cost reduction and productivity, they often mandated it in a way that made it difficult for the operators to learn to use the systems effectively because insufficient time and resources were devoted to the relearning process itself, as the earlier insurance company example showed.

Of course, the way in which technology is used is influenced by the values and goals imposed by the executive culture, as some of my examples have shown. And those values are sometimes more stable than the technological possibilities, causing technologies like information technology to be underutilized from the viewpoint of the engineering culture.19 In the earlier example, the engineers were thwarted by the executive culture, and the solution that resulted from union pressure reflected the executives’ short-run financial fears.

The lack of alignment among the executive, engineering, and operator cultures can be seen in other industries such as health care, in which the needs of the primary care physicians (the operators) to do health maintenance and illness prevention conflict with the engineering desire to save life at all costs and the executive desire to minimize costs no matter how this might constrain either the engineers or the operators.

In education, the same conflicts occur among teachers who value the human interaction with students, proponents of sophisticated computerized educational systems, and school administrators who impose cost constraints. If the engineers win, money is spent on computers and technologically sophisticated classrooms. If the administrators win, classes become larger and undermine the classroom climate. In either case, the operators — the teachers — lose out, and human innovations in learning are lost.

Implications of the Three Cultures

There are several important points to note about the three cultures. First, the executive and engineering cultures are worldwide occupational communities that have developed a common worldview based on their education, their shared technology, and their work experience. This means that even if an executive or engineer in a given organization learns to think like an operator and becomes more aligned with the operator culture, his or her eventual replacement will most probably return the organization to where it was. The field of organization development is replete with examples of innovative new programs that did not survive executive succession. In other words, the executive’s or the engineer’s reference group is often outside the organization in his or her peer group, whose definition of “best practice” may differ sharply from what is accepted inside the organization. Executives and engineers learn more from each other than from their subordinates.

Second, each of the three cultures is “valid” from its viewpoint, in the sense of doing what it is supposed to. Executives are supposed to worry about the financial health of their organization, and engineers are supposed to innovate toward the most creative people-free solutions. To create alignment among the three cultures, then, is not a case of deciding which one has the right viewpoint, but of creating enough mutual understanding among them to evolve solutions that will be understood and implemented. Too often in today’s organizational world, either the operators assume that the executives and engineers don’t understand, so they resist and covertly do things their own way, or executives and/or engineers assume that they need to control the operators more tightly and force them to follow policies and procedure manuals. In either case, effectiveness and efficiency will suffer because there is no common plan that everyone can understand and commit to.

Third, both the executive and engineering cultures are primarily task focused and operate on the implicit assumption that people are the problem, either as costs or as sources of error. In the case of the engineers, the assumption is already implicit in their education and training. The ultimately elegant solution is one that always works and works automatically, in other words, without human intervention. In the case of the executives, the situation is more complex. Either executives have come from the engineering culture, where people were not important in the first place, or, as they were promoted and began to feel responsible for hundreds of people, they learned to think in terms of systems, routines, rules, and abstract processes for organizing, motivating, and controlling. And as they became chief executives accountable to the financial markets and their stockholders, they learned to focus more and more on the financial aspects of the organization. The gradual depersonalization of the organization and the perception of employees as a cost rather than a capital investment are thus learned occupational responses.

It is not an accident that chief executives tend to band together and form their own culture because they come to believe that no one except another chief executive really understands the lonely warrior role. With that sense of aloneness come related assumptions about the difficulty of obtaining valid information and the difficulty of ensuring that subordinates down the line will understand and implement what they are asked to do, leading ultimately to fantasies of spying on their own organizations like the Caliph of Baghdad who donned beggar’s clothes to mingle among the people and find out what they were really thinking. Even though the CEO’s immediate subordinates are human beings, the chief executive increasingly sees them as part of a larger system that must be managed impersonally through systems and rules. CEOs often feel strongly about not fraternizing with subordinates because, if the organization gets into trouble, those subordinates are often the first to be sacrificed as evidence of “fixing” things.

Fourth, the engineering and executive cultures may agree on the assumption that people are a problem, but they disagree completely on how to make organizations work more effectively. Executives recognize that their world is one of imperfect information, of constant change, and of short-run coping while attempting to maintain a strategic focus. Engineers seek elegant permanent solutions that are guaranteed to work and be safe under all circumstances and, therefore, typically produce solutions that cost much more than the executives believe they can afford. So the executives and the engineers constantly battle about how good is good enough and how to keep costs down enough to remain competitive.

What is most problematic is that we have come to accept the conflict between engineering and management as “normal,” leading members of each culture to devalue the concerns of the other rather than looking for integrative solutions that will benefit both. A few creative companies have sent engineers to talk to customers directly to acquaint them with business realities and customer needs. Some executives aware of this conflict involve themselves from time to time in operations and product development so they do not lose touch with the realities and strengths of the other cultures. But this kind of remedy deals only with the organizational level. The dilemma of twenty-first century learning is broader.

The Dilemma of Twenty-First Century Learning

Organizations will not learn effectively until they recognize and confront the implications of the three occupational cultures. Until executives, engineers, and operators discover that they use different languages and make different assumptions about what is important, and until they learn to treat the other cultures as valid and normal, organizational learning efforts will continue to fail. Powerful innovations at the operator level will be ignored, subverted, or actually punished; technologies will be grossly underutilized; angry employees will rail against the impersonal programs of reengineering and downsizing; frustrated executives who know what they want to accomplish will feel impotent in pushing their ideas through complex human systems; and frustrated academics will wonder why certain ideas like employee involvement, sociotechnical systems analyses, high-commitment organizations, and concepts of social responsibility continue to be ignored, only to be reinvented under some other label a few decades later.

First, we must take the concept of culture more seriously than we have. Instead of superficially manipulating a few priorities and calling that “culture change,” we must recognize and accept how deeply embedded the shared, tacit assumptions of executives, engineers, and employees are. We have lived in this industrial system for more than a century and have developed these assumptions as an effective way to deal with our problems. Each culture can justify itself historically, and each has contributed to the success of the industrial system that has evolved.

Second, we must acknowledge that a consequence of technological complexity, globalism, and universal transparency is that some of the old assumptions no longer work. Neither the executives nor the engineers alone can solve the problems that a complex sociotechnical system like a nuclear plant generates. We must find ways to communicate across the cultural boundaries, first, by establishing some communication that stimulates mutual understanding rather than mutual blame.

Third, we must create such communication by learning how to conduct cross-cultural “dialogues.” Recently, the concept of “dialogue” has substantially improved our understanding of human thought and communication and promises to make some understanding across cultural boundaries possible.20 People from the different cultures must first sit in a room together, which is hard enough; they must then reflectively listen to themselves and to each other, which is even harder. Fortunately, what it takes to create effective dialogues is itself becoming better understood.

The engineering and executive cultures I have described are not new. What is new is that the operator culture in all industries has become much more complex and interdependent, which has thrown it more out of alignment with the other two cultures. The implication is that each community will have to learn how to learn and evolve some new assumptions. We have directed our efforts primarily at the operational levels of organizations and viewed the executive and engineering cultures as problems or obstructions, partly because they do not sufficiently consider the human factor. Yet these cultures have evolved and survived and have strengths as well as weaknesses.

The key to organizational learning may be in helping executives and engineers learn how to learn, how to analyze their own cultures, and how to evolve those cultures around their strengths. These communities may learn in different ways, and we will have to develop appropriate learning tools for each community. Learning may have to be structured along industry lines through consortia of learners rather than along individual organizational lines.21 And business and engineering education itself will have to examine whether the assumptions of academics are evolving at a sufficient rate to deal with current realities.

We are a long way from having solved the problems of organizational learning, but thinking about occupational communities and the cultures of management will begin to structure these problems so that solutions for the twenty-first century will be found.

References

1. C. Argyris, R. Putnam, and D. Smith, Action Science (San Francisco: Jossey-Bass, 1985);

P. Senge, The Fifth Discipline (New York: Doubleday, 1990); and

R. Beckhard and R.T. Harris, Organizational Transitions: Managing Complex Change, 2nd ed. (Reading, Massachusetts: Addison-Wesley, 1987).

2. G.L. Roth and A. Kleiner, “The Learning Initiative at the Auto Company Epsilon Program” (Cambridge, Massachusetts: MIT Organizational Learning Center, working paper 18.005, 1996).

3. G.L. Roth, “In Search of the Paperless Office” (Cambridge, Massachusetts: MIT, Ph.D. dissertation, 1993).

4. R.J. Thomas, What Machines Can’t Do (Berkeley, California: University of California Press, 1994).

5. D.M. McGregor, The Human Side of Enterprise (New York: McGraw-Hill, 1960).

6. E.H. Schein and W.G. Bennis, Personal and Organizational Change through Group Methods: The Laboratory Approach (New York: John Wiley, 1965); and

R.R. Blake, J.S. Mouton, and A.A. McCanse, Change by Design (Reading, Massachusetts: Addison-Wesley, 1989).

7. J. Van Maanen and S.R. Barley, “Occupational Communities: Culture and Control in Organizations,” in B.M. Staw and L.L. Cummings, eds., Research in Organizational Behavior, vol. 6 (Greenwich, Connecticut: JAI Press, 1984).

8. E.H. Schein, Organizational Culture and Leadership, 2nd ed. (San Francisco: Jossey-Bass, 1992a).

9. E.H. Schein, “The Role of the Founder in the Creation of Organizational Culture,” Organizational Dynamics, Summer 1983, pp. 13–28.

10. Van Maanen and Barley (1984).

11. S. Zuboff, In the Age of the Smart Machine: The Future of Work (New York: Basic Books, 1988).

12. G. Kunda, Engineering Culture: Control and Commitment in a High-Tech Corporation (Philadelphia: Temple University Press, 1992).

13. E.H. Schein, “The Role of the CEO in the Management of Change: The Case of Information Technology,” in T.A. Kochan and M. Useem, eds., Transforming Organizations (New York: Oxford University Press, 1992b).

14. G. Donaldson and J.W. Lorsch, Decision Making at the Top (New York: Basic Books, 1983).

15. Schein (1983).

16. J. Carroll and C. Perin, “Organizing and Managing for Safe Production: New Frameworks, New Questions, New Actions” (Cambridge, Massachusetts: MIT Center for Energy and Environmental Policy Research, Report NSP 95-005, 1995).

17. Blake et al. (1989).

18. Schein (1992a, 1992b).

19. L. Thurow, The Future of Capitalism (New York: William Morrow, 1996).

20. W.N. Isaacs, “Taking Flight: Dialogue, Collective Thinking, and Organizational Learning,” Organizational Dynamics, Winter 1993, pp. 24–39; and

E.H. Schein, “On Dialogue, Culture, and Organizational Learning,” Organizational Dynamics, Winter 1993, pp. 40–51.

21. E.H. Schein, “Building the Learning Consortium” (Cambridge, Massachusetts: MIT Organizational Learning Center, working paper 10.005, 1995).
