Designing Organizations Around Technology

Harvard Business School professor Carliss Baldwin explains how modularity affects team structures.

MIT Sloan Management Review: When I saw you give a talk, you made the provocative statement that organizations are basically “designed around the technology of the day.” Can you tell me a little more about that, or refresh my memory if I have the phrase wrong?

Carliss Baldwin: That is my thesis. I have to skirt around the accusation of technological determinism. I don’t think technology is destiny, but I do think that companies that understand the requirements and opportunities that a technology brings will be rewarded in a free market. Today we are seeing the emergence of new types of organizations in response to new technologies.

That’s a pretty bold statement, that organizations are going to be designed or built around technologies. What evidence leads you to that thesis or that insight?

Theoretically, I can model the outlines of the technology and derive properties that are value-enhancing in the context of that model. There’s also empirical evidence — changes in industry structure in response to new technology, and the emergence of new organizational forms. There’s a lot of talk about platforms and ecosystems: This organizational form is not new to the universe, but it is newly important to our economy. I would argue that these new structures are a response to the physical properties of digital technology.

You’re saying the new platform organizations — and the new focus on ecosystems — are a result of the dominant technology of today. Is that correct?

Yes. A group of new technologies, both hardware and software, is based in information. In particular, the modularization of systems and the rise of platform organizations such as Intel, Microsoft, Google, Amazon.com, and others are important new developments that are very much tied to Moore’s law. One of your questions was, What happens when things are changing rapidly? One answer is that modularity becomes more valuable and is rewarded. Platform companies with surrounding ecosystems are one symptom of this trend.

Can you tell us how modularity is an advantage in these fast-moving times?

Modularity is a design principle for complex systems, both natural systems and artificial systems. I’m primarily interested in artificial systems like product design, system design, and organizational design. Modularity is a way of organizing the components of a system so that individual components are grouped into modules. Flows of material and information are dense within modules but sparse across them, which makes the system nearly decomposable.

Modularity is first a clustering principle, with modules densely connected within themselves and loosely connected to one another. It is also a hierarchical principle, because in modular systems the components need to work together. They are not truly independent things. Interoperability is obtained through hierarchical design rules.

In natural systems with modularity, early designs constrain later paths of evolution. But in artificial systems, there is generally a set of rules that modules need to follow to work seamlessly with one another.

This principle of design greatly helps with cognitive complexity. It makes systems more understandable to human beings, and it supports both evolution and variety. In a modular system, you can take out a module and put in another module, or you can simply add a module. These actions are not constrained as they are in an integral system where everything depends on everything else.
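One way to make the principle concrete is a design structure matrix (DSM): list the components as rows and columns, and mark an entry wherever one depends on another. The minimal Python sketch below is illustrative only; the component names, module assignments, and dependencies are invented.

```python
# Toy design structure matrix (DSM): a 1 in row i, column j means
# "component i depends on component j". All names are invented.
components = ["ui_view", "ui_controller", "db_query", "db_storage"]
module_of  = ["A",       "A",             "B",        "B"]

dsm = [
    # ui_view  ui_ctrl  db_query  db_storage
    [0,        1,       0,        0],  # ui_view
    [1,        0,       1,        0],  # ui_controller (calls db_query's API)
    [0,        0,       0,        1],  # db_query
    [0,        0,       1,        0],  # db_storage
]

def cross_module_links(dsm, module_of):
    """Count dependencies that cross a module boundary."""
    n = len(dsm)
    return sum(dsm[i][j] for i in range(n) for j in range(n)
               if module_of[i] != module_of[j])

# Dense inside each module, a single link across them. Any replacement
# for module B that honors the ui_controller -> db_query interface can
# be swapped in without touching module A.
print(cross_module_links(dsm, module_of))  # -> 1
```

In an integral system, the off-diagonal region of the matrix fills up, and the swap becomes impossible without rework on both sides.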

One of our insights last year was that as companies became more digitally mature, more able to compete in a digital world, they also reported organizing into cross-functional teams much more often. Is this an example of the modularity you’re talking about?

If the teams are aimed at discrete projects, yes. It would be saying, Let’s bring into each project all the functional resources and capabilities that the project requires. I did not see it as a corollary of modularity, but it’s not inconsistent.

Who would be an example? Is it Google or Amazon or a platform-based business where you see this modularity in practice today? Or are you seeing it across many companies?

Platforms support modularity in the sense that they allow others to build on the platform as long as they are cognizant of the platform’s rules. But a platform does not itself have to be modular. You could have a platform — an open platform such as Google — supporting many modular complements. That was also the case for Intel and Microsoft: The Wintel platform allowed software and hardware developers to build modules upon the base that Intel and Microsoft provided.

But Intel’s products are not highly modular. In fact, there are reasons deep in the physics of semiconductor manufacturing that work against modularity. These days a semiconductor fabrication plant might carry out 2,000 separate steps to make an advanced microprocessor. You don’t spread those 2,000 steps across 2,000 enterprises. It’s not efficient to do so.

In fact, the inside of Intel looks very much like a classic, hierarchical firm. Around the turn of the 20th century and in its early decades, a set of organizations emerged that were hierarchical, vertically integrated, and exercised close control over workflow and job design. Those organizations were responding to the needs of the dominant technologies emerging at that time, technologies that supported the efficient flow of goods through both factories and supply chains. Flow production needs a different type of organization than modular recombination.

Is the suggestion that the more modular you are, the better able you are to innovate?

Yes — and evolve, recombine.

If that’s the case, then what would be a recommendation for a company like Intel, whose business is clearly more of a flow-oriented business?

There is a tradeoff, and this is where the rate of external technical change comes in. You have to do business in the environment that you face. Modularity allows flexibility and evolvability within the system, but at the expense of some degree of efficiency. You can modularize flow, but you will then need buffers.

That is exactly the opposite of what we now think of as lean manufacturing, which gets rid of all those buffers. If you have a flow production system, you have to decide where the break points are going to be and whether it’s worthwhile to break up an integrated flow to have the flexibility to change the front end or the back end or parts in the middle. Sometimes it’s not worthwhile.
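A toy simulation makes the tradeoff visible. In the Python sketch below, two stations pass work through a finite buffer; the processing probabilities and capacities are made up for illustration. Capacity 1 behaves like a tightly coupled line, while a larger buffer raises throughput by decoupling the stations, at the cost of exactly the inventory that lean production tries to eliminate.

```python
import random

def run_line(buffer_capacity, ticks=100_000, seed=1):
    """Toy two-station line: station A feeds station B through a buffer.

    Each tick, a busy station finishes its item with probability 0.5
    (random processing times). Returns items completed by B and the
    average inventory waiting in the buffer.
    """
    rng = random.Random(seed)
    buffer = 0
    a_finished = False   # A holds a finished item, waiting for buffer space
    b_busy = False
    done = 0
    inventory = 0
    for _ in range(ticks):
        # Station A: finish the current item, then hand off if there is room.
        if not a_finished and rng.random() < 0.5:
            a_finished = True
        if a_finished and buffer < buffer_capacity:
            buffer += 1
            a_finished = False          # A starts its next item
        # Station B: pull from the buffer when idle, then maybe finish.
        if not b_busy and buffer > 0:
            buffer -= 1
            b_busy = True
        if b_busy and rng.random() < 0.5:
            b_busy = False
            done += 1
        inventory += buffer
    return done, inventory / ticks

for cap in (1, 2, 10):
    done, avg_inv = run_line(cap)
    print(f"capacity={cap:2d}  completed={done}  avg inventory={avg_inv:.2f}")
```

Running it shows throughput climbing with buffer capacity while average inventory climbs alongside it, which is the break-point decision in miniature.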

Lean manufacturing and modularity: Are they antithetical to one another, or are they just different forms for different environments?

I don’t think they’re antithetical because within each module, there’s usually a flow process. You have to create the module, right? It doesn’t spring into being. Even the smallest modules must be designed and produced. That’s going to be a flow process with some amount of structure.

Thus, modularity and flow are not either/or. It’s where you want to put break points in the flow. In general, the faster the rate of external technical change, the higher the value of modular flexibility. But you will give up some short-term efficiency in exchange for flexibility.

It’s the level of analysis or the level at which modularity occurs.

Yes. For example, the tight coupling that Toyota Motor Corp. brings to production flow has some definite benefits in supporting learning and innovation. Modularity supports learning and innovation because it’s easy to swap things in and out, but the architecture itself tends to be quite inflexible and opaque. You have flexibility to change modules, but the overall architecture is generally fixed and very difficult to change, and sometimes very difficult to understand.

Toyota said, “OK. We’re going to tightly couple all parts of our production line,” so if any one part goes down, the whole line goes down within three minutes. It’s really costly to stop a line, but that practice made problems within the line visible. If there had been buffers between the different steps, as was common in traditional production lines, then the line might never go down, but you’d never figure out what things could be done better. The line would always appear to be OK, but there would be inefficiency in terms of inventory and in the steps themselves.

Are we seeing a trend from a more flow-oriented organizational design to more modular designs as the technology moves more quickly?

We have seen that trend from the time Moore’s law took hold. The trend can be traced back to the IBM System/360, which was the first modular computer system. It was made modular to support high levels of customization across IBM’s customers. Then, it turned out that the modularity of the parts allowed the architecture to survive for a very long time. That was a surprise to IBM. In the 1950s, IBM had been introducing new computer architectures every five years. Parts of the System/360 architecture still survive in IBM servers.

However, the process of fabricating integrated circuits made possible very high rates of miniaturization, and therefore cost reduction and quality enhancement, in individual devices. The standard version of Moore’s law says that the number of components on a chip will double, and costs will drop by half, every 18 months or so. That rate of improvement was made possible by the physics of CMOS [complementary metal oxide semiconductor] silicon fabrication.

In the early 1970s, [physicist] Carver Mead investigated the physical limits of miniaturization in CMOS technology. Basically, he found that physical limits wouldn’t be binding for at least 20 years. That made possible the dynamic of having a new generation of semiconductors with twice as much capability that cost half as much every 18 to 24 months.
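The compounding behind those numbers is easy to check. The back-of-the-envelope sketch below simply applies a doubling every 18 months across the 20-year runway Mead identified.

```python
# Back-of-the-envelope Moore's law compounding: capability doubles and
# unit cost halves every 18 months over a 20-year runway.
years, period = 20, 1.5
generations = int(years / period)   # ~13 doublings
capability = 2 ** generations       # relative to generation zero
cost = 0.5 ** generations
print(f"{generations} generations: ~{capability:,}x the capability "
      f"at ~{cost:.5f}x the unit cost")
# -> 13 generations: ~8,192x the capability at ~0.00012x the unit cost
```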

That’s the point where you get the high rewards to modularity. You don’t want to commit to any given generation of chip, and you want to be able to upgrade the different chips of a system independently. In the old days, you wanted to be able to swap out your disk drive without buying a new computer, and you also wanted to be able to buy a new computer and have it work with your disk drive. That level of flexibility was enabled by high levels of modularity in the design of computer systems.

This raises a question. In the current state of digitization and/or design, is there anything different, or is this just a continuation of what we’ve been seeing since the 1960s and 1970s?

Moore’s law is slowing down. Even though there is a commitment to modularity and there are many ecosystems where modular innovation is distributed across several hundred or several thousand independent enterprises, there is also, in parts of the system, a tendency to make things more integral, sometimes without knowing that it’s happening.

For example, software developers aspire to create modular systems, but most software is not really very modular. In fact, if you don’t separate the teams designing software components, they will build a highly integral system that will be very difficult to change. We are living now with the legacy of 50 years of software development, and programs built in the ’70s, ’80s, and ’90s that are still in existence are not modular and their structure is often unknown.

Thus countervailing forces may be at work: lower rewards to modularity because of slower rates of technical change, and higher costs of changing systems because of their growing complexity and opacity.

What about agile software? Is that factoring into this modularity and this rapid rate of change?

Agile is fascinating. Software poses very interesting production problems because the effort of creating it and its structure are invisible. Also, every software program can be customized at the cost of added complexity. That’s totally different from steel or automobiles where effort and structure are visible and you don’t want to customize anything.

Agile is a very effective response to the problem that software development processes were having when nothing could get done quickly. Agile breaks up the software development process into well-defined units that are meant to be modular. It asks developers a priori to define their tasks in ways that will add up to a whole system. The problem with agile is that it’s completely process oriented. There’s no guarantee that the product of an agile process will be a modular, evolvable software program.

In a standard agile process, you do these builds, you run a set of sprints, and then you build the system and see if it works. Maybe the system breaks and the testers identify component A and component B, done by two different teams, as the components that are not working together. An acceptable process in agile is to put those two teams in a room and tell them to fix the problem.

That just guarantees that those two components, at the end of the day, are going to be connected in ways that will not be documented and will not be visible to managers of the process. In terms of evolving the system, you have tied those two components together so that they can only be changed together. You can’t take one and leave the other. They are no longer true modules.

Thus, an agile process is an interesting partial solution but not a complete solution.

As an example of a different approach that does deliver modular code, there is a widely read post on the web by a former Amazon employee named [Steve] Yegge, describing a mandate issued around 2002. Comparing Google to Amazon, he said Amazon enforced the rule, the constraint, that there would be no back doors — that all interactions between software system components had to be via published APIs [application programming interfaces].
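Stated in code, the rule is simple. The Python sketch below contrasts the two styles of interaction; the service names and methods are hypothetical, not Amazon’s actual systems.

```python
# Hypothetical services, for illustration only -- not Amazon's systems.

class InventoryService:
    """Owns inventory data; the published API is the only sanctioned door."""

    def __init__(self):
        self._stock = {"widget": 12}  # private: no other team may touch this

    def get_stock(self, sku: str) -> int:
        """Published API: the one supported way to read stock levels."""
        return self._stock.get(sku, 0)


class OrderServiceWithBackDoor:
    """Anti-pattern: reads another service's private state directly.

    This is the undocumented coupling the mandate forbids; the two
    services can now only change together.
    """

    def can_fulfill(self, inventory: InventoryService, sku: str) -> bool:
        return inventory._stock.get(sku, 0) > 0   # back door


class OrderService:
    """Goes through the published API only, so InventoryService can change
    its internal storage, or be replaced, without breaking orders."""

    def can_fulfill(self, inventory: InventoryService, sku: str) -> bool:
        return inventory.get_stock(sku) > 0


inv = InventoryService()
print(OrderService().can_fulfill(inv, "widget"))  # True, via the API
```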

How did Amazon know to do that? Who made that decision?

My source is the published post, which I highly recommend. According to Yegge, [Amazon CEO] Jeff Bezos mandated the practice and said you would be fired if you violated this mandate. It was completely top-down. I have no insight into Bezos’ thought process.

If organizations are moving toward greater modularity in response to or to deal with rapid technological change and complexity, does this have any implications for the skill sets that either employees need or leaders need to lead in these types of organizations?

Regarding skill sets for employees, I can’t do any better than Toyota. They instill the idea that every operator is part of not only the process but the process of continuous improvement, and they do many things in both recruitment and training and through their reward system to make that real.

One trend I see is that the effort and skill needed to do tasks are becoming less and less observable. If effort is not observable, you must have motivated and knowledgeable people who can steer their own way.

This is a big lesson from the open-source movement: People who self-organize and self-select to create a code base can do it at a very high level of quality with great efficiency. Many software organizations are adopting some of the open-source principles as they manage their software teams.

For leaders, I would say, software is eating the world. A lot of senior leaders seem to want to delegate everything to do with software and information technology to experts. I think that is a prescription for disaster.

What would these senior managers need to do if delegating is not a solution?

They do not need to learn to code, but they must learn to audit the modularity of their IT and core software systems. Most IT systems are completely unauditable. They grow willy-nilly into great masses of interacting applications. The people closest to the system have a very local view of how everything is hooked together. The same is true of software applications.
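Such an audit can start from a measurable quantity. One metric from the design-structure-matrix research Baldwin published with Alan MacCormack is propagation cost: compute which components can reach which others through chains of dependencies, then measure how dense that reachability is. The Python sketch below is a minimal version, with an invented dependency map; a high score means a change almost anywhere can ripple almost everywhere.

```python
# Minimal propagation-cost sketch: the density of the transitive closure
# of the component dependency graph. The dependency map is invented.
deps = {                       # component -> components it depends on
    "billing":   {"orders", "auth"},
    "orders":    {"inventory", "auth"},
    "inventory": {"db"},
    "auth":      {"db"},
    "db":        set(),
}

def propagation_cost(deps):
    """Fraction of ordered component pairs where a change to one component
    can reach the other through some dependency chain."""
    reach = {c: set(d) for c, d in deps.items()}
    changed = True
    while changed:                               # naive transitive closure
        changed = False
        for c in reach:
            extra = set().union(*(reach[d] for d in reach[c])) - reach[c]
            if extra:
                reach[c] |= extra
                changed = True
    n = len(reach)
    return sum(len(r) for r in reach.values()) / (n * n)

print(f"propagation cost: {propagation_cost(deps):.2f}")  # -> 0.36
```

On a real code base this would run over thousands of components; here it just shows the shape of the measurement a senior leader can ask for.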

As a senior leader, Bezos understood enough about both how people actually built software and how he wanted his software to look that he could make strong rules and see them enforced. Many senior managers come from a nondigital environment and see software and information and digitization as kind of a black art.

They flock to fads. It’s so easy to say, “We must have this. Hire some people.” It does not make sense.

This need to audit the modularity of the IT systems — and I would assume the organization, as well — is this something senior managers can learn or do companies need to find managers with those skill sets?

I actually cofounded a startup that provides some of these services. Speaking from experience, it’s a long, hard path. You need programs to educate people at all levels of the organization because the senior officers — it isn’t always the CEO — have to be willing to understand what the software system actually looks like versus what it’s supposed to look like. Managers must have visibility into the actual modular structure of a code base or an IT system.

There can be no success unless very senior people understand and are calling for this capability. In fact, there’s a real agency problem vis-à-vis the developers because all things being equal, they would prefer that people not look too hard at their code.

As you’ve dealt with these executives, are there any big barriers in their mindset? What are the biggest challenges they wrestle with as you try to get them up to speed with the necessary skill set?

The subject is inherently difficult. You have to have a certain amount of patience to learn a new way of thinking about the world. Software and IT systems are networks, so you need some patience to learn that this is how a network works, this is how change is propagated in a network, this is a good network structure, and this is a bad network structure.

Some high-level managers don’t want to be that operational. They think of operations as something that operations people do.

As you look to the next decade, with respect to technology and organizational trends, what do you think the big story is going to be? Or is it just a continuation of what we’ve seen in the past?

I’m a historian at heart. I look at the past. But I do think that the companies that take their operating models seriously, especially their digital operating models and how their software and IT systems are really made, are going to be big winners.

What do you mean by taking their digital operating models seriously? Are you speaking of investment or just paying attention and understanding it? What does that look like?

In my years at Harvard Business School, I’ve always taught finance. This year I’m teaching technology and operations management. I tell my students: This is the future. When technologies are new and changing, it’s important to understand how they really work. For example, when the steel industry was getting off the ground in the 1880s and 1890s, Andrew Carnegie and Henry Clay Frick knew all about steelmaking. Then things evolved, and the technology stabilized. By the 1920s, it was more important for the chairman of United States Steel Corp. to know about antitrust law than about how steel was actually made.

We’ve seen many trends over the whole of the 20th century where senior managers became involved in strategy. They became involved in finance. They became involved in marketing. It was less important to know how things were actually made.

The new technologies are digital technologies. Here it’s really important for managers to know what their DevOps people are doing and how the software that is so critical to their products and their enterprises comes into being. They don’t have to go down to the code level, but the software developers should not be allowed free rein to create any architecture they want.
