Software-Based Innovation

A revolution is now underway. Most innovation occurs first in software.1 And software is the primary element in all aspects of innovation from basic research through product introduction:

  • Software provides the critical mechanism through which managers can lower the costs, compress the time cycles, and increase the value of innovations. It is also the heart of the learning and knowledge processes that give innovations their highest payoffs.
  • In many cases, software is the core element in process innovations or in creating the functionalities that make products valuable to customers. In others, software is the “product” or “service” the customer actually receives.
  • Software provides the central vehicle enabling the inventor-user interactions, rapid distribution of products, and market feedback that add most value to innovations. Consequently, customers — and the software itself — make many inventions that the company’s technologists, acting alone, could not conceive.

All this demands a basic shift in the way managers approach innovation, from strategic to detailed operational levels. Some portions of the innovation process may still require traditional physical manipulation, but leading companies have already shifted many steps to software. And those who do not will suffer. Managers can shorten innovation cycles through other means, but through properly developed software, they can change their entire innovation process, completely integrating, merging, or eliminating many formerly discrete innovation steps.2 In the process, they can dramatically lower innovation costs, decrease risks, shorten design and introduction cycle times, and increase the value of their innovations to customers.

Software Dominates All Innovation Steps

Innovation consists of the technological, managerial, and social processes through which a new idea or concept is first reduced to practice in a culture. Discovery is the initial observation of a new phenomenon. Invention provides the first verification that a real problem can be solved in a particular way. Diffusion spreads proved innovations broadly within an enterprise or society. All are necessary to create new value. Software dominates all aspects of the cycle from discovery to diffusion.

· Basic research.

Most literature searches, database inquiries, exchanges with other researchers, experimental designs, laboratory experiments, analyses of correlations and variances, hypothesis testing, modeling of complex phenomena, review of experimental results, first publication of results, enhancements to existing databases, and so on are performed through software. To a large extent, software search tools determine what data researchers see and what questions they ask. In many frontier fields — like astronomy, semiconductors, or biotechnology — researchers may be able to observe, measure, or precisely envision phenomena only through electronic measures or electronic modeling (software).

For example, in 1991, a group at IBM’s Watson Research Center completed calculations from a full year’s continuous run on its high-powered GF 11 computer. Based on known physical evidence, the group had established the masses of seven basic particles, including hadrons, important in quark research. By 1995, two further years of calculations had established both the mass and the decay rate of an elusive subfamily of hadrons, called glueballs, which had gone unrecognized in preceding laboratory experiments. These massive computations had both discovered a new particle and provided an important confirmation of quantum chromodynamics, the theory governing the behavior of quarks.3

· Applied research.

Most of the above activities are common to applied research as well. However, at this stage, practical data about market, economic, or performance phenomena become important. Most major innovations are preceded by a defined need.4 In many fields, data about the marketplace, user patterns, environmental trends, or specific constraints to application now come directly from software. Examples include market shifts sensed through electronic point-of-sale (EPOS) data, epidemiological measurements of medical problems or outcomes, satellite scanning of environmental changes, and real-time performance data about financial transactions, communication systems, or marketing-distribution programs’ effectiveness.5 New object-oriented software, like that developed by Trilogy Corp., is rapidly extending these capabilities across a wide spectrum of industries.6

· Development.

At the developmental level, virtually all design of physical systems, subsystems, components, and parts now occurs first in software. Most things — from buildings to ships, aircraft, automobiles, circuits, bridges, tunnels, machines, molecules, textiles, advertising, packaging, biological systems, dams, weapons systems, or spacecraft — are first designed in software. Specialists try to design into their models all the known science-technical relationships, physical dimensioning, system constraints, flow rates, and dynamic response patterns understood from earlier technical work, experiments, tests, or operations. CAE/CAD/CAM systems interconnect whatever knowledge exists about these physical-science systems and their potential manipulability in manufacturing. Other software systems test CAD representations of potential designs against anticipated variations in use or operating environments — without building physical models. Simulations often provide test information that is much less expensive and more comprehensive than the experimenter could possibly afford to obtain through physical models. This is especially true for very large-scale, extreme environment, submicroscopic, complex dynamic-flow, or potentially dangerous systems, where physical experimentation might be impossible.
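
The kind of pretesting described here can be illustrated with a minimal sketch: a toy design parameter is screened against randomly sampled operating conditions entirely in software, with no physical prototype. The stress model, parameter names, and limits below are invented placeholders, not a real CAE system.

```python
import random

# Minimal sketch: screening a candidate design against variation in its
# operating environment entirely in software, without a physical prototype.
# The "design" and "environment" here are toy stand-ins, not a real CAE model.

def stress(load_kn, temp_c, web_thickness_mm):
    """Hypothetical stress response of a structural member (illustrative only)."""
    thermal_penalty = 1.0 + 0.002 * max(0.0, temp_c - 20.0)
    return (load_kn / web_thickness_mm) * thermal_penalty

def simulate(web_thickness_mm, yield_limit=25.0, trials=10_000, seed=42):
    """Estimate how often random service conditions exceed the assumed limit."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        load = rng.gauss(mu=180.0, sigma=30.0)   # service load, kN (assumed)
        temp = rng.uniform(-20.0, 60.0)          # ambient temperature, C (assumed)
        if stress(load, temp, web_thickness_mm) > yield_limit:
            failures += 1
    return failures / trials

if __name__ == "__main__":
    for thickness in (8.0, 10.0, 12.0):
        print(f"web {thickness} mm -> estimated failure rate "
              f"{simulate(thickness):.3%}")
```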

· Manufacturing engineering.

Software now provides the same kinds of data gathering, analytical, and test capabilities for complex process design and manufacturing engineering as we described for product designs. In process design, software allows inexpensive experimentation, yield prediction, workstation design, process layout, alternative testing, three-dimensional analysis, network manipulation, quality control, and interface timing capabilities that would otherwise be impossibly expensive. Software is especially helpful in allowing workers, technologists, and managers to visualize solutions and work together on complex systems. Further, knowledge-based systems now allow the design coordination, manufacturing monitoring, and logistics control needed to find and source innovative solutions worldwide.7

· Interactive customer design.

Software models and shared screens allow multidisciplinary (marketing-manufacturing-development) teams to interact continuously with customers, capturing their responses through video, audio, physical sensing, and computer network systems. Through software, customers already participate directly in the design of new or customized fabrics, furnishings, entertainment services, auto and aircraft parts, homes and commercial buildings, insurance, legal, or accounting products.8 Such customer participation is a crucial element in both lowering risks and enhancing the customer value of designs. More important, by designing “hooks” on their software (to allow others to innovate further on their own), companies can leverage their internal capabilities enormously by tapping into their customers’ and sophisticated suppliers’ creative ideas.

· Post-introduction monitoring.

After new products are in the marketplace, software can upgrade their effectiveness in use (aircraft), oversee their proper maintenance (elevators), and add value by introducing new knowledge-based features directly into the customer’s system (computers, financial services, or accounting systems). Manufacturers have placed sensing and maintenance software into various products — from health care devices to automobiles, power generation systems and home appliances — that can anticipate and even automatically correct potentially dangerous or disruptive failures. Service companies — like utilities, telecoms, retailers, airlines, banks, hospitals, or wholesalers — use in-line sensing to: (1) ensure intended response times, signal levels, accuracy of information, and performance reliability and (2) search out their customers’ changing utilization patterns to improve products further.

· Diffusion and organizational learning.

After new concepts are successfully tested in the marketplace, software helps implement introductions rapidly across wider geographic areas with higher accuracy, consistency, and performance reliability than would be possible otherwise. The practice is widespread in service enterprises, such as fast-food, maintenance, accounting, reservations, or financial services companies. But it is equally essential to firms like Asea Brown Boveri, Ford, and Boeing in transferring physical product designs or manufacturing know-how from one location to another. Further, software is becoming the critical element in facilitating the organizational learning that continual rapid innovation requires, whether in services or manufacturing. It enables both experienced and less skilled people to perform at much higher levels by incorporating important knowledge components into the equipment that personnel use and the databanks that customer-contact people have at the customer interface. Brokerage houses, accounting firms, banks, insurance companies, and product distributors all depend on software to enable their people to “jump the learning curve” for rapid product and process introductions.

· New value-added systems.

Software is often the strategic element in unlocking higher value-added opportunities and indeed in restructuring entire companies and industries.9 For example, in the late 1980s, Intel was concerned about the commoditization of integrated circuits (ICs). Intel’s chips were being copied, and its patents were running out. In early 1991, Intel CEO Andy Grove asked his executives to create the basis for a new personal computer with much higher performance, lower costs, and video communications capabilities. This required contributions from many other companies including entertainment, telecommunications, software, and systems groups. The core of the challenge was in software. Grove commissioned a team to learn about all aspects of personal computing, including software, that Intel had not earlier considered its charter. Ultimately, these initiatives became a comprehensive strategy to move Intel from a narrow role as a “semiconductor supplier” to that of a system supplier. These efforts grew into the Intel Architecture Labs, anticipating new uses and applications for personal computers, entertainment, and computing. Through its software interconnections and alliances, Intel has become a center for changing the entire value-added concept of its industry.10

Software for Fast Cycle Innovation

There are many aspects to improving the cycle time of innovation.11 But none is more crucial — and has received less attention — than software. As the examples below demonstrate, software can entirely eliminate many traditional steps in the innovation process. It can consolidate others into a simultaneous process. And it can provide the communication mechanisms and disciplined framework for the detailed interactions that multidisciplinary teams need to advance complex innovations most rapidly.

For example, Boeing recently went directly from software into production of its $170 million 777 aircraft, cutting out many sequential steps formerly in the design cycle. It installed 1,700 workstations to link some 2,800 engineering locations worldwide. Using rules developed from earlier scientific models, wind tunnel tests, field experience, and supplier-customer models supporting its systems, Boeing’s 250 different multifunctional “design/build” teams could pretest and optimize the structural elements, operating systems, and consumer convenience aspects of each major component in the aircraft’s four-million-part configuration. Boeing’s three-dimensional, digital CAD/CAM software eliminated many previous blueprint, specification, tool and die making, and physical prototyping steps.

Instead, software provided each department or external fabricator with the capacity to produce its tools, parts, or subassemblies directly from digital electronic instructions. It also allowed Boeing to cut or mold whatever physical models were needed for its own wind tunnel, systems, or stress tests. Because all the specifications for interacting suppliers could be coordinated directly from the software to ensure precise fits, assembly tolerances, surfacing, and materials compatibilities, there was a reduction of 60 percent to 90 percent in prototype errors and rework costs.12

The software compressed or eliminated many of the “build and bust” tests that were previously necessary and made needed “first off” tests of physical components and systems much more reliable. At all phases, software systems allowed many groups — within and outside the company — to operate in parallel without losing interface coordination. The system’s 1.8 trillion bytes of production data coordinated all downstream production and sourcing decisions. All this substantially decreased design cycle times, costs, and potential errors. But the real test was that software produced a better quality, more flyable aircraft at lower cost.13

In chemistry and biotechnology, companies generally attempt to design and assess new molecules as much as possible in software before building actual chemical structures. Using well-researched rules about how different components will combine, biotechnology researchers can pretest the most likely and effective combinations for a new biotech structure. They can assess which receptors are most likely to respond in a certain fashion, how to relocate or reshape a molecule’s receptor or bonding structures, and what transport mechanisms can best deliver “bonding” or “killer” agents. Researchers can often observe actual interaction processes using electron or scanning-tunneling microscopes that can extend observation capabilities by orders of magnitude beyond ordinary optical limits.14 Such equipment is itself largely software driven by electronic sensing and amplification. Electronic models, based on the best known laboratory data about biochemical processes, shorten cycle times for process development and allow detailed process monitoring to ensure quality during experimental and early scale-up phases.
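
A toy sketch of the rule-based pretesting described above: candidate combinations of molecular building blocks are enumerated and ranked with stand-in affinity rules before any laboratory work. The component names and scores are invented, not real biochemistry.

```python
from itertools import product

# Toy sketch of rule-based pretesting: enumerate candidate combinations of
# molecular "building blocks" and rank them with stand-in affinity rules
# before any laboratory work. Components and scores are illustrative only.

BINDING_SCORE = {            # hypothetical pairwise affinities (0..1)
    ("ligand_A", "receptor_X"): 0.82,
    ("ligand_A", "receptor_Y"): 0.35,
    ("ligand_B", "receptor_X"): 0.41,
    ("ligand_B", "receptor_Y"): 0.77,
}
TRANSPORT_PENALTY = {"carrier_1": 0.05, "carrier_2": 0.20}  # assumed delivery losses

def score(ligand, receptor, carrier):
    """Combine assumed binding affinity with assumed transport losses."""
    return BINDING_SCORE[(ligand, receptor)] * (1.0 - TRANSPORT_PENALTY[carrier])

candidates = product(("ligand_A", "ligand_B"),
                     ("receptor_X", "receptor_Y"),
                     ("carrier_1", "carrier_2"))
ranked = sorted(candidates, key=lambda c: score(*c), reverse=True)

for ligand, receptor, carrier in ranked[:3]:
    print(f"{ligand} + {receptor} via {carrier}: "
          f"predicted score {score(ligand, receptor, carrier):.2f}")
```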

Software Enhances Innovation Results

Although important, decreasing individual experimentation, modeling, and scale-up times is not always the crucial issue. Frequently, the optimizing calculations for designs or operations are so complex that they could not be done at all without computer capabilities. In other cases, lacking computer models, scientists would have to rely much more on hunches and limited experimentation, decreasing both the variety and quality of experiments and the capture of knowledge from these experiments. Human inaccuracies would quickly throw off calculations, leave out critical variables, cause inaccurate experiments, and lead to wrong results. Without software, innovators often could neither adequately measure nor interrelate the details of large-scale scientific experiments, the physical reactions within a system, or the interaction patterns between the system and important external forces. Examples include gene sequencing, weather and environmental system analyses, large-scale integrated circuit designs, atomic orbit calculations, multiphase gas flow analyses, or space flight trajectories.

Further, from an organizational viewpoint, in analyzing or developing such systems, it would be essentially impossible for the required number of different knowledge specialists to personally work together effectively. They could not achieve results within reasonable time frames or with needed precision without software tools and software-based communications devices. The organizational software support for large-scale systems designs — like those for construction innovations, space shots, Ford’s Taurus, or Boeing’s 777 — often becomes as important as the physical-design software.

Software Becomes Inventor

Software enables more sophisticated innovations than humans could achieve unaided. In many cases, software actually becomes the discoverer or inventor. Software designed as a learning system frequently generates answers beyond the imagination of its creators. It may identify and verify totally new patterns and problem solutions. It can even be preprogrammed to search for, capture, and flag the “fortuitous incidents” or “anomalies” that are often the essence of discovery. Properly designed software systems can actually create new hypotheses, test the hypotheses for critical characteristics, analyze potential system responses to exogenous variables, and predict counterintuitive outcomes from complex interactions. Software can learn from both positive and negative experiments and capture these experience effects in data files.

For example, the Cochrane Collaboration is a massive effort to collect and systematically review the entire published and unpublished literature on the roughly one million randomized controlled trials of medical treatments that have been conducted during the past fifty years. Because they are so difficult to access, most of those experiments’ results have been ignored or otherwise lost to practitioners. The collaboration will collect, catalogue, and update these reviews to synthesize the latest state of knowledge about every available therapy or intervention and give its implications for practice and research. In the past, when such clinical data was systematically collected and analyzed, interesting new patterns were discovered that changed practices in areas such as mammography, fetal monitoring, mastectomy, and prostatectomy.15

In large-scale systems, “genetic” and related learning algorithms and software can often identify patterns, optimize research protocols, and define potential solutions by trial and error much more efficiently than can either direct physical experimentation or a preplanned sequence of hypothesis tests. Such programs can economically attack problems that were of unthinkable complexity a decade ago. In business applications, self-learning programs can identify developing problems or opportunities in the competitive environment, suggest the most likely causes and alternative solutions available, eliminate those of least promise, and pretest or implement promising new options — as they commonly do in telecom switching, power distribution, vehicle routing, or ad campaign targeting and modification.
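
A minimal sketch of the trial-and-error search such programs perform, written as a bare-bones genetic algorithm that evolves a bit-string "design" toward a placeholder fitness target. In practice the fitness function would be a simulation or live operating data rather than the toy target used here.

```python
import random

# Bare-bones genetic-algorithm sketch: evolve a bit-string "design" toward a
# placeholder fitness target through selection, crossover, and mutation.
# The fitness function stands in for a real simulation or experiment.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(genome):
    """Count positions matching the (hypothetical) ideal design."""
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def crossover(a, b, rng):
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rng, rate=0.05):
    return [1 - g if rng.random() < rate else g for g in genome]

def evolve(pop_size=60, generations=40, seed=7):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            return gen, pop[0]                      # perfect design found
        parents = pop[: pop_size // 4]              # keep the fittest quarter
        children = [mutate(crossover(rng.choice(parents), rng.choice(parents), rng), rng)
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    pop.sort(key=fitness, reverse=True)
    return generations, pop[0]

if __name__ == "__main__":
    gen, best = evolve()
    print(f"best design after {gen} generations: {best} "
          f"(fitness {fitness(best)}/{len(TARGET)})")
```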

Leveraging Value Creation

A major contribution of well-designed software is that it allows the original innovator to tap into the creative potential of all the firm’s customers and suppliers. Since more than 50 percent of all innovation occurs at these interfaces, this creates a substantial leveraging of the company’s own capabilities. By designing “hooks” to allow customers to modify the product for their own use, the software can help generate further options and valuable uses that the original innovator could not possibly anticipate.16

For example, AT&T could never have forecast the full range of uses to which its institutional and home customers would ultimately apply the flexible software capabilities designed into cellular or digital telephone systems. Similarly, none of the personal computer’s innovators could possibly have foreseen the enormous variety of uses to which such computers were put. By introducing flexible software that allowed users to program for their own special needs, microcomputers entered and created a variety of unexpected marketplaces — and generated many unanticipated options for new hardware and software. Xerox PARC produced the first icon-based graphical interface, but even Xerox’s sophisticated management did not appreciate its potential. Only after Apple and Microsoft put an actual product into customers’ hands did the innovation’s value become evident. Early buyers quickly used the software to create greater value for their customers, who then used the results to add value for still other customers. Even now, no one can calculate the total value produced, but it is clearly thousands of times the value captured by Microsoft or Apple.

Because of the low-cost experimentation that software permits and the dominating importance of the value-in-use it creates for customers, increasing the “efficiency” of the program steps in software-based innovation is nowhere near as important as the value the software can create through the functionality it generates for customers and the multipliers of additional benefits that further customers obtain from it. Too much attention is often placed on decreasing the cost of design steps and shortening internal process times in the innovation cycle rather than on the critical internal learning and value creation processes that software facilitates.17 Both are essential for effective innovation in today’s hypercompetitive world.18

Identifying Opportunities Interactively

When managers think about software for innovation purposes, they tend to concentrate on CAD/CAM, EDI, process monitoring, or imaging software. However, external software (database, strategic monitoring, market modeling, or customer interaction software) may be equally important. Software can identify subtle supplier, user, or environmental trends as potential problems or market opportunities long before personal observation could detect them. Common examples are the “variance analysis trading” programs used in investing, the EPOS systems of retailing and fast foods, the customer monitoring programs of credit card or airline companies, or the “early warning systems” contained in strategic intelligence, environmental monitoring, or weather models. Given adequate models of external environments, experimenters — by monitoring experimental stores, focus panels, or sales counters — can test the impact of different design combinations and permutations in various use or “niche market” situations to see which design has the greatest potential value in any single use and across the system. Such software helps define what flexibilities are feasible and optimum to satisfy desired niches and future growth patterns. And it can help avoid the overselling of new ideas by technical staffs or their undervaluing by old-line managers.
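
A simple sketch of the kind of "early warning" scan described above: it flags any item whose recent point-of-sale figures drift well outside their historical band. The sales figures and the two-sigma threshold are illustrative assumptions, not data from any real EPOS system.

```python
from statistics import mean, stdev

# Sketch of a simple "early warning" scan over point-of-sale history:
# flag any item whose recent sales drift well outside its historical band.
# The figures and the two-sigma threshold are illustrative assumptions.

def flag_shifts(history, recent_window=7, sigmas=2.0):
    alerts = []
    for item, daily_units in history.items():
        baseline = daily_units[:-recent_window]
        recent = daily_units[-recent_window:]
        mu, sd = mean(baseline), stdev(baseline)
        drift = mean(recent) - mu
        if sd and abs(drift) > sigmas * sd:
            alerts.append((item, drift))
    return alerts

if __name__ == "__main__":
    epos_history = {   # hypothetical daily unit sales per item
        "espresso_beans": [40, 42, 39, 41, 43, 38, 40, 41, 39, 40,
                           55, 58, 61, 57, 60, 62, 59],
        "drip_filters":   [20, 21, 19, 22, 20, 21, 20, 19, 22, 21,
                           20, 21, 19, 22, 20, 21, 20],
    }
    for item, drift in flag_shifts(epos_history):
        print(f"{item}: recent demand shifted by {drift:+.1f} units/day")
```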

Further, software representations of an innovation can become a sales tool, allowing individuals or customers to visualize a product or concept more easily, experiment with different features, and customize the product for their special needs. Software for this purpose is advancing rapidly. By pressing buttons in a distributor’s office or on their home telephone or personal computer, customers can preview what features they want, see physical relationships, and actively design their own products. Software-supported interactive design is becoming common — both before and after a product’s initial introduction — in many fields, including architectural, automobile, plumbing, services, financial, medical devices, boot and shoe, computer accessory, and integrated circuit markets. Unfortunately, very few companies have effectively integrated their market intelligence and product design systems. This is among the most exciting challenges and opportunities for innovation in most firms. By directly connecting users through software to their design processes, companies can virtually eliminate time delays, error costs, and product introduction risks in innovation.

User-Based Innovation and Virtual Shopping

The Internet and the World Wide Web have become prototypes for this new mode of user-based innovation. All innovation on the Internet is in software. Innovators reduce their concepts to practice in software form and present them electronically to customers on the Internet or Web. Customers can either utilize the innovation in its “offered form” or modify it for their particular uses. They can also ask the selling company to make the necessary modifications and transmit the results directly to them — by the Internet or other means. Institutions or individuals seeking new solutions can use the Internet as a virtual shop for potential answers, pretest those answers on their own systems, and purchase if desired. Conversely, they can post their needs onto the network to attract potential solutions. A manufacturer can solicit design proposals for new components, product features, or systems anywhere in the world. Or a farmer in the Philippines can query a worldwide network of agricultural and livestock knowledge to find out how best to eliminate an obnoxious weed or “buffalo proof” a fence.

Further, sophisticated users can scan the Web and use advanced visually oriented programs like Visual Basic or Power Object to modify the Web’s offerings for their own or further customers’ use. With high-level languages, desktop computer users can simply create electronic products or programs that the computer priesthoods of the past would have found intractable, if not impossible. They can immediately test their solutions in terms of their specific needs. In essence, the innovation process has been inverted. The customer has become the innovator, and all intervening steps in the innovation process have disappeared.

New methods for introducing product concepts, transacting, and paying over the system are constantly appearing. As these infrastructures come into place, they are changing the entire nature of innovation worldwide. Anyone with access to the Internet and Web can present innovations instantly to a worldwide marketplace, obtain interactive market responses, and readapt the innovation for specific user purposes. All innovators in the world thus become potential competitors. And all customers and suppliers become potential sources of leverage for internal innovation.

Managing Software Innovation Processes

How can executives best manage this software-based, interactive innovation in their companies? The answer lies not in hiring more programmers, but in effectively managing innovation processes through software and by learning to develop and manage software itself more effectively. The rest of this article will outline how successful companies do this.

Three Critical Systems

Software systems have long been used to plan and monitor “hard” innovation processes. PERT (program evaluation and review technique) and CPM (critical path method) were among the early techniques touted to improve these processes. However, these are not the focus for innovation today. Rarely have PERT-CPM implementations exploited the capabilities that open software systems, self-learning programs, and other interactive software processes now present. To provide such interactive and learning capabilities — while supporting the depth of detailed expertise that each important subsystem demands — current software structures generally revolve around three relatively independent, but interacting, modules connected by a common language and set of interface rules (a schematic sketch follows the list):

  1. The database and model access system — linked to external scientific or technical sources.
  2. The processing engine — focused on internal operating parameters.
  3. The environmental and user interface system — linked to market and environmental sources.
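
A schematic sketch of this three-module split, with each module hidden behind a narrow interface so it can evolve independently. The class and method names are invented for illustration; they stand in for whatever database, engine, and interface technologies a firm actually uses.

```python
from typing import Protocol

# Schematic sketch of the three-module split described above, with each
# module behind a narrow interface so it can evolve independently.
# All class and method names are invented for illustration.

class DataAccess(Protocol):
    def query(self, question: str) -> list: ...

class Engine(Protocol):
    def evaluate(self, records: list) -> dict: ...

class UserInterface(Protocol):
    def present(self, result: dict) -> None: ...

class StubDatabase:
    """Stand-in for the database and model-access module."""
    def query(self, question: str) -> list:
        return [{"question": question, "value": 42.0}]   # placeholder records

class ScoringEngine:
    """Stand-in processing engine; sees only what DataAccess hands it."""
    def evaluate(self, records: list) -> dict:
        return {"score": sum(r["value"] for r in records) / max(len(records), 1)}

class ConsoleInterface:
    """Stand-in market/environment interface."""
    def present(self, result: dict) -> None:
        print(f"score = {result['score']:.1f}")

def run(db: DataAccess, engine: Engine, ui: UserInterface, question: str) -> None:
    # The three modules meet only at this one seam, mirroring the interface
    # rules that link otherwise independent subsystems.
    ui.present(engine.evaluate(db.query(question)))

if __name__ == "__main__":
    run(StubDatabase(), ScoringEngine(), ConsoleInterface(), "weekly demand by region")
```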

The database system embodies both the current raw data and the state-of-the-art external models a manager needs to manipulate that data effectively. If properly set up, the system is constantly updated to include the latest available references, user practices, experiments, transactions, operating data, and models. Structuring the database for constant refreshment — and maintaining an open and precise classification system to access its information — are among the most difficult of all software system problems. Yet they are most important to continual innovation.

New experimental data and tested models provide constant clues as to needed or possible changes in the rate processes or interaction weightings that can improve a technical system’s performance. New conceptual models from research worldwide may redefine both new data needs and the relevancy of old solutions. Clearly, those who continue to use old paradigms after they are subverted get wrong answers, as do those who use last year’s information or solutions for today’s problems. Since even the most precise models and operating systems are only as useful as their databases, many successful firms — like American Express, Intel, Reader’s Digest, or American Airlines — treat their databases as their most valuable assets. Unfortunately, they often do not link them effectively to other critical modules for innovation purposes.

Software engines, containing the primary processing, manipulation, and operating systems logic, tend to receive more attention. This area has traditionally been the glamorous portion of software development, where new or unique algorithms can create fame for their programmers. This is where proprietary intellectual property seems to be created, rather than in improving the flexibility and quality of the data inputs or the manipulability of outputs. Engine design is extremely important to innovation in terms of allowing flexible experiments, increasing operating efficiencies, and ensuring output quality. Nevertheless, in recent years, many of the highest innovation payoffs have come from expanded database availability and easier-to-use software interfaces — such as those first provided by MacOS or Windows and later by Mosaic, Java, or HTML — rather than through increased sophistication in manipulating the data. For efficiency, however, each of these innovations also required a new engine — just as next-generation speeds and power demands will probably require new engine architectures, like massively parallel processors.

Interfaces are crucial in making the three systems work together and in enabling various users’ access to important databases for their special purposes. Compatible — preferably seamless — interfaces are critical to leveraging innovation internally and with external customers and suppliers. Standards such as HTTP, HTML, and SQL have been major steps in this regard. Navigator, Yahoo, and other search and agent software systems have made the Internet and Web much more accessible. But their originators say that intranets within companies will ultimately be their largest users. The best interfaces are as unobtrusive as possible. They are usually a result of interactive development with users throughout all phases of both the design and implementation processes.19 Well-designed interfaces also incorporate future “hooks” that allow external customers and internal users to create or explore many unanticipated innovative possibilities over time. Conversely, converting internal systems into tools that can readily access details about changing customer interactions and rapidly advancing scientific-technical environments makes them into invaluable cornerstones for fast, flexible innovation.

The highest potentials for value-added innovation lie in direct and integrated connection among user interfaces, self-learning operations engines, and thoroughly compatible external and internal databases. But few companies have successfully achieved this continuity. More have done so in services than in manufacturing. What can be learned from the experiences of those who have been successful?

Interacting Subsystems, Not Megasystems

First, end-to-end integration through a single “megasystem” is extremely difficult to accomplish.20 Those who have been most successful have concentrated individually on the three critical subsystems — databases, engines, and user interfaces — and used carefully predefined interface standards to link them effectively. This leaves each subsystem free to contribute as quickly as possible, allows incremental implementation and interactive learning, and avoids the long and costly development times for which megasystems are so notorious.

Second, the most successful companies concentrate on developing system software that, like the World Wide Web, insulates users from having to understand the complex rules and sophisticated methodologies of its internal operations. Through user-friendly prompts and menus, these intranets enable connecting parties to query and customize the system’s central knowledge for their own purposes. Thus they encourage maximum innovation around each user’s specialized needs. Rather than hoarding or controlling all information, effective architectures help decentralized users and customers capture much of each innovation’s value for themselves. Properly programmed, the systems can learn from their decentralized users’ experiences and make this learning available instantly to others on the network. Financial service systems provide a classic example that is being widely emulated in other service systems like fast foods, retailing, or airlines.

A brokerage or insurance company’s central engine manipulates all transactions data from the marketplace, embodies the most updated financial methodologies and tax, accounting, or regulatory rules for doing business, and provides the access and data that decentralized agents or brokers need for adapting the firm’s services to specific customers. At headquarters, cadres of mathematically sophisticated analysts both constantly upgrade the system’s capabilities and design new products for all the parties it serves. Centralized software commands much of the firm’s own internal investment portfolio, based on preprogrammed rules and changing regulations, tax structures, and economic or market trends.

As the center creates new products in response to these changes, they are instantly diffused to broker or agent offices for adaptation to individual customer needs. Other software monitors individual customers’ transactions and past investment patterns. It helps brokers detect significant changes and signals local brokers when to adjust their clients’ portfolios. It warns customers if unusual patterns indicate possible fraud or misuse of their assets and provides up-to-date account data on demand. For effectiveness, the central system’s interfaces must match those of both upstream information providers (like government or market data sources) and downstream users (providing the simplest and most transparent interconnections possible).

Similarly, the system architectures of leading product companies (like Ford, HP, Nike, Sun, or Boeing) allow researchers, designers, manufacturing engineers, or marketers to call in virtually unlimited modules of capability from databases, on-line operations, or contracted sources anywhere in the world. These companies’ capacity to find solutions, mix-and-match options, and test outcomes is paced primarily by their internal system’s modeling software and capability to interface upstream and downstream knowledge bases. Their systems allow them to tap into worldwide sources of innovation and to connect these in new ways to their customers. Their suppliers can inform them precisely about new options, process capabilities, or problem solutions through software. In conjunction with on-line customer systems, advanced companies and their suppliers can design and pretest a wide variety of innovations electronically. These include soft-goods designs (interactively on electronic palettes with buyer-customers), aircraft performance or customer comfort designs (through mathematical or graphics simulations), architectural designs (in three-dimensional software models that customers can “walk through”), alternate designs for shoreline control strategies (through large interactive models with the actual stakeholders participating), or advanced molecular designs (in simulated life systems or flow process environments).

Integrating Innovation Subsystems

Many companies have partially integrated their software systems, from their marketplaces through production processes. Such systems now allow electric power systems to respond instantaneously to changing demand loads. Oil companies routinely plan their drilling, shipping, pipeline, and refining activities through such models. An entire shipping fleet (like Exxon’s) can be redirected within a few minutes in response to changing market price, supply, refining, shipping, tax, or tariff situations. Within a few days’ time, companies like Ford can reassign an entire automobile line’s sourcing based on changes in exchange rates or other critical market characteristics.

However, few companies have interlinked their marketing and operations systems with their scientific databases and design processes. Such integration can substantially enhance the responsiveness, degree of advance, and customer impact of innovations. It can also significantly lower innovation risks, investments, and cycle times.

For example, Fluent, Inc., a division of Aavid Thermal Technologies, develops computational fluid dynamics software to analyze fluid-flow phenomena in industrial processes. Based on equations describing the physical effects of fluid flows under various circumstances, Fluent’s software models can handle the entire innovation process from geometry definition to computation, design evaluation, and process control. The model for each specific application is constantly updated to reflect both new research and experimental findings, as well as real-world effects in actual customer-use situations. Fluent’s software “learns” from these inputs and captures the latest findings in its analyses and design recommendations.

For example, design and selection of proper mixing equipment is critical in the scale-up of chemical processes. Fluent gathers information on the performance of various mixing devices from tests performed by mixing equipment manufacturers, such as Lightnin and Chemineer, and models the performance characteristics of these devices in its software. Process engineers at companies like Dow and DuPont then use these computer models to simulate the performance of the mixing devices for the specific fluids, flow conditions, and constraints of their processes. They can test a variety of mixing configurations in software — allowing process engineers to select the “best” design reliably and inexpensively, bypassing a number of costly scale-up tests. Extensions of Fluent’s capabilities can enable an automobile company to pretest the aerodynamic characteristics of various car designs in software, aircraft companies to pretest wing or fuselage designs, or chemical producers to pretest various multiphase flow designs for processes without building costly prototypes and facilities.
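
A toy sketch of this kind of in-software screening: candidate mixing configurations are swept against a crude blend-time and power model, and the best feasible one is selected. The model and the numbers are placeholders, not Fluent's computations or any manufacturer's data.

```python
from itertools import product

# Toy sketch of screening mixing configurations in software before scale-up.
# The blend-time and power models below are crude placeholders, not real CFD.

IMPELLERS = {"pitched_blade": 1.00, "rushton": 0.85, "hydrofoil": 1.15}  # relative pumping
SPEEDS_RPM = (60, 90, 120)
VISCOSITY_PAS = 0.8           # assumed process fluid viscosity
POWER_LIMIT_KW = 15.0         # assumed motor limit

def blend_time_s(impeller, rpm):
    """Hypothetical blend time: stronger pumping and higher speed mix sooner."""
    return 900.0 * VISCOSITY_PAS / (IMPELLERS[impeller] * rpm / 60.0)

def power_kw(impeller, rpm):
    """Hypothetical power draw, rising steeply with speed."""
    return 2.0e-5 * IMPELLERS[impeller] * rpm ** 3

feasible = [
    (blend_time_s(imp, rpm), imp, rpm)
    for imp, rpm in product(IMPELLERS, SPEEDS_RPM)
    if power_kw(imp, rpm) <= POWER_LIMIT_KW
]
best_time, best_imp, best_rpm = min(feasible)
print(f"best feasible configuration: {best_imp} at {best_rpm} rpm "
      f"(~{best_time:.0f} s blend time, {power_kw(best_imp, best_rpm):.1f} kW)")
```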

Virtual Skunkworks

Such software systems not only decrease the time and manpower cost of development, they also create a virtual skunkworks that substantially lowers the investments needed for lab tests, pilot plants, and scale-up and increases the knowledge output of the innovation process. Under old mechanical or chemical engineering design paradigms — because interaction parameters were poorly understood and very complex — a company would proceed through a complicated series of ever larger physical test, pilot, scale-up, and plant shakedown trials that were very costly in terms of both time and dollars. Even though such empiricism might eventually be successful in practice, the company never knew why key interactions worked. By combining process science, physical constraints, and user environments in a single electronic model, experimenters can obtain a level of process insights they never had before. Most important, they can visualize and understand why things do (or do not) work. The model provides a reliable discipline for recalibrating people’s intuitions. And its visual and printed outputs help educate users to adopt the innovation faster and with better results.

Going beyond such internal virtual skunkworks, designers can also use software representations about the best available external suppliers’ capabilities to determine an optimum means of manufacture. By constantly surveying external best practices, they can determine the implicit costs of producing internally versus outsourcing. Their models can extend into virtual storerooms that optimize specification, sourcing, and logistics for future parts as models and features change.

By tapping into the best worldwide scientific and consumer knowledge bases, these systems substantially leverage the company’s other investments in its development teams’ skills and specialized facilities. The simulations and their updated databases become major contributors to the company’s learning capabilities. They continually capture, codify, and make available all accessible internal and external knowledge about a problem. The software’s capabilities are important assets in attracting key technical people and enabling them to attack challenges at the frontiers of their fields. Properly designed software systems allow smaller, more flexible teams to perform at greater levels of sophistication than larger teams can without them. Innovation costs decrease, and output values increase exponentially.

However, managers should be aware that software can also impose its own limitations on innovation. Unless they are cautious, the structure of the software will limit the databases investigated, options considered, manipulations available, and user data evaluated. They must keep all three (database, engine, and market) critical systems as updated, open, and flexible as possible. Any modularity (except in the ultimate modularity of a single datum) will introduce some constraints. The capacity of an enterprise to move from one technology’s “S” curve (or technical performance limits) to another’s may well depend on whether it has developed adequate transitional software to consider the next “S” curve’s characteristics and needs in its analyses. This often depends on the software’s capacity to handle sufficiently refined details about customers, internal systems, and external technical data.

The Smallest Replicable Units

To avoid such limitations — and to maintain needed flexibilities — successful innovation and software managers find it useful to break units of activity and information down to the “minimum replicable level” of detail for the tasks or data to be analyzed. In earlier years, the smallest replicable measuring unit for organizations and data might have been an individual part, subassembly, office, supplier, or customer class. As volumes increased and computer capabilities became greater, it often became feasible for the corporation to manage and measure critical performance variables at much more detailed feature, activity, customer-characteristic, or technical levels.

In some service industries — like banking, publishing, communications, structural design, entertainment, or medical research — it soon became possible to disaggregate the critical units of service activity into digitized sequences, electronic packets, data blocks, or bytes of information that could be endlessly combined or manipulated for new effects or to satisfy individual customer and operating needs. In manufacturing, the capacity to measure and control to ever more refined levels led to mass customization.21 In all industries, seeking out such micro-units enables the highest possible degree of segmentation, strategic fine-tuning, value-added definition, and cost control to help connect and target new innovations in the marketplace. Interestingly, the larger the organization is, the more refined these replicability units can be, and the greater their leverage for creating value-added.

Important to the success of American Airlines’ SABRE system, Motorola’s pagers, AT&T’s cellular telephones, the Human Genome Project, and the Internet have been (1) their early definition of data breakdowns into the smallest repeatable units, and (2) the creation of database rules and interfaces that allowed endless variations of user combinations, types of experimentation, and production options. As object-oriented software becomes more widely available, the capture and use of such detailed information is becoming easier — as are the corresponding opportunities for innovating in software. Object orientation promises to accelerate and facilitate the kinds of end-to-end compatibility that the Internet now provides in the public access realm.

User Becomes Innovator

The Internet’s TCP/IP communications standards have made it possible for tens of millions of computers and their users to “talk” together and to innovate together. Using similar software, the number of people connected through intranets within companies is now growing even more rapidly than the Internet. Web-compatible software languages (Mosaic, Java, and HTML) that run well on many different personal computer architectures now provide a huge virtual disk drive of sources and uses for innovation. A Business Week article forecast, “These will cause a basic shift in the software business no less seismic than the fall of the Berlin Wall. . . . [They] will enable the de-construction and the construction of a new economic model for the software industry.”22 They provide a potent new model for interactive worldwide innovation, based on the combinative powers of the Internet’s millions of access points.

Using minimum replicable unit concepts, various network softwares (like Navigator, Java, and Yahoo) have become the mediating structures through which users can innovate their own solutions from a wide variety of alternatives. More powerful yet may be the instant diffusion that they allow for known technologies once posted on the Internet or the Web. Java creates a sixty-four-kilobyte software virtual computer, which can be placed inside most interconnecting devices including telephones and can make almost any personal computer into a multimedia machine. This should expand both innovation and diffusion possibilities for various new concepts, especially if Microsoft includes Java in its Windows offerings. If such capabilities become widely used, they will achieve the ultimate in decreasing innovation cycles, costs, risks, and diffusion times. The customer will become the innovator. Under these circumstances, adoption and adaptation times and risks for producers drop to zero. The software “applets” of a Java-like system will become the minimum replicable elements of effective distributed computing, while the network becomes both the computer itself and an instantaneous distribution system.

New business methodologies — like paying single-use fees for applet software or individual databases — seem likely to further revolutionize many businesses in such industries as software, distribution, publication, education, banking, communications, entertainment, and professional services. The lines between application, content, and support services may quickly disappear in many markets. The sheer variety of object-oriented, network-capable systems (like Visual Basic, OLE, Collabra, Taligent, and Java) seems likely to accelerate interactive innovation opportunities in most fields.

For example, in manufacturing, clothing designers no longer need to design their line in advance on a make-or-break basis. Instead, they can offer a series of suggested samples that salespeople show to potential buyers physically and electronically. Then, by working on an electronic palette, the salesperson and the buyer jointly sketch precisely what modifications the buyer wants. The palette can be connected directly to the design unit at the clothing manufacturer’s plant where professionals interact electronically with the retail buyer to detail and price the buyer’s exact desires. Virtually any product — from insurance policies and travel tours to pagers, bathroom fixtures, houses, automobiles, or yachts — can be interactively custom-designed to meet the specific and varying needs of niched markets or individuals throughout the world.

Managing Software-Based Innovation

To take proper advantage of such revolutionary opportunities, many companies will have to dramatically improve their own internal software management capabilities. The alternative may be oblivion. The ultimate goal is to develop and integrate the company’s three major subsystems — databases, engines, and market-environmental interfaces — to a high standard. On the way to achieving this, however, companies can garner very large payoffs by breaking their software processes down into four different groupings and managing each with tested techniques appropriate to that problem category.

Although software development is notoriously difficult to manage, the most innovative companies — whether in products or services — seem to converge on several approaches, each useful for a different strategic purpose.23 What characteristics do these approaches share? One commonality is that all simultaneously enable independent and interdependent innovation. And all involve interactive customer participation. New software, like most innovations, is first created in the mind of a highly skilled, motivated, and individualistic person, hence independent. But to be useful the software (or device it supports) usually must connect to other software (or hardware) systems and meet specific user needs, hence interdependent. Interesting innovation problems are generally so complex that they require high expertise from many “nonprogrammer” technical people and users for solution.24 How do successful companies achieve the needed balance between deep professional knowledge, creative individualism, coordinated integration, and customer participation?

Individual Inventor-Innovators

As in the physical sciences, knowledgeable independent inventors and small groups create the largest number of software innovations, particularly at the applications level. In essence, a few highly motivated individuals perceive an opportunity or need, assemble software resources from existing databases and systems, choose an interlinking language and architecture on which to work, and interactively design the program and subsystem steps to satisfy the need as they perceive it. Those who want to sell the software externally first find some real-life application or customer, consciously debug the software for that purpose, and then modify and upgrade it until it works in many users’ hands for various purposes.

Many important software innovations, from VisiCalc to Mosaic, started this way. Virtually all video game programs come into being in this fashion, as do new customized programs to solve local enterprises’ problems. Millions of inventor-innovators use largely trial-and-error methods to design new software for themselves, improve old systems, or create totally new effects. Like other small-company innovators, there is no evidence that the process is either efficient or consistent in form. Vision, expertise, and persistence are the most usual determinants of success.25 The sheer number of people trying to solve specific problems means that a large number of innovations prove useful in the marketplace, although a much greater number undoubtedly die along the way.

Larger companies have learned to harness individual software inventors’ capabilities in interesting ways. For example, as a corporate strategy, MCI has long encouraged individual inventor-entrepreneurs to come up with new software applications (fitting its system’s interfaces) to provide new services over its main communication lines. AT&T Bell Labs created UNIX to assist computer science research. AT&T later gave UNIX to universities and, eventually, to others, slowly realizing that as individuals created programs to provide local solutions or to interface with others, they would require more communications interconnections. UNIX was consciously designed to encourage individuals to interact broadly and to share their useful solutions with others. The “hooks” it provided later allowed AT&T to sell many more services than it could have possibly forecast or innovated internally.

Small Interactive Teams

In many of the larger “applications houses” — like Microsoft, Oracle, and Netscape — small, informal, interactive teams are the core of the innovative process. The complexity of these firms’ programs is too great for a single individual to develop them alone. In most cases, the target concept is new, discrete, and relatively limited in scope. Relying heavily on individual talents and personal interactions, these firms typically make little use of CASE tools or formalized “monitor programs” to manage development. They operate in a classic skunk works style, disciplined by the very software they are developing.

For example, Microsoft tries to develop its applications programs utilizing very small teams. Major programs typically begin when Bill Gates or a few of his designers agree to the performance parameters and the broad systems structures needed to ensure interfaces with other Microsoft programs or customer positioning. Overall program goals are broken down into a series of targets for smaller subsystems, each capable of being produced by a two- to five-person team that operates quite independently. Interfaces are controlled at several levels: “programmatic specifications” to make operating systems perform compatibly, “application interfaces” to interconnect component systems (like memory or file management), and “customer interfaces” to maintain user compatibility. Other than these, the original target functionalities, and time constraints, there are few rigidities. Detailed targets get changed constantly as teams find out what they can and can’t accomplish for one purpose and how that affects other subsystems.

Microsoft’s key coordinating mechanism is “build-test-drive.” At least every week — but more often two to three times per week — each group compiles its subsystem so the entire program can be run with all new code, functions, and features in place. In the “builds,” test suites created by independent “test designers” and the software itself become the disciplining agents. If teams do not correct errors at this point, interactions between components quickly become so vast that it is impossible to fit all program pieces together, even though each subsystem might work well alone. As soon as possible, the program team proposes a version for a specific (though limited) purpose, gives it to a customer to test, and monitors its use in detail. Once it works for that purpose, the program goes to other customers for beta tests in other uses. This approach both decreases developmental risks and takes advantage of customers’ suggestions and innovations.26
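
A minimal sketch of such a frequent-build discipline: rebuild every subsystem, run the shared test suite, and stop the line at the first failure. The subsystem names, paths, and commands are hypothetical, not Microsoft's actual tooling.

```python
import subprocess
import sys

# Minimal sketch of a frequent "build" discipline: rebuild every subsystem,
# run the shared test suite, and stop the line the moment something breaks.
# Subsystem names, paths, and commands are hypothetical placeholders.

SUBSYSTEMS = ["file_manager", "memory_manager", "ui_shell"]

def run(cmd):
    print("->", " ".join(cmd))
    return subprocess.run(cmd).returncode == 0

def nightly_build():
    for subsystem in SUBSYSTEMS:
        if not run([sys.executable, "-m", "compileall", f"src/{subsystem}"]):
            return f"build broken in {subsystem}"
    if not run([sys.executable, "-m", "pytest", "tests", "-q"]):
        return "integration tests failed"
    return "build is good; hand it to a pilot customer"

if __name__ == "__main__":
    print(nightly_build())
```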

Monitor Programs

Such “informal” approaches serve particularly well for smaller freestanding or applications programs, although Microsoft has used them for larger operating systems. In most cases, designers of larger “operations” or “systems” software find some form of “monitor program” useful. These monitors establish the frameworks, checkpoints, and coordinating mechanisms to make sure all critical program elements are present, compatible, cross-checked, and properly sequenced. They allow larger enterprises to decentralize the actual writing of code among different divisions or locations while ensuring that all functions and components work properly together. No element is forgotten or left to chance, and interface standards are clearly enforced. Weapon systems, AT&T “long lines,” and Arthur Andersen have used this programming method successfully. Many firms have found that such formal monitors both lower the cost and increase the reliability of large-scale systems designs.

For example, Andersen Consulting usually must provide under contract both a unique solution for each customer’s problem and a thoroughly tested, fault-free systems product. For years, Andersen has combined a highly decentralized process for writing each section of the code with a rigorous centralized system for program coordination and control. Two tools called METHOD/1 and DESIGN/1 have been at the center of its process. METHOD/1 is a carefully designed, step-by-step methodology for modularizing and controlling all the steps needed to design any major systems program. At the highest level are roughly ten “phases,” each broken into approximately five “segments.” Below this are a similar number of “tasks” for each job and several “steps” for each task. METHOD/1 defines the exact elements the programmer needs to go through at that particular stage of the process and coordinates software design activities, estimated times, and costs for each step.

DESIGN/1 is a very elaborate computer-aided software engineering (CASE) tool. DESIGN/1 keeps track of all programming details as they develop and disciplines the programmer to define each element carefully. It governs relationships among all steps in the METHOD/1 flow chart to avoid losing data, entering infinite loops, using illegal data, and so on. In addition to ensuring that each step in METHOD/1 is carefully executed, it allows customers to enter “pseudo-data” or code so they can periodically test the “look and feel” of screen displays and check data-entry formats for reasonableness and utility during development. The integrated METHOD/1 and DESIGN/1 environment is extremely complex, taking up some fifty megabytes on high-density diskettes. A dedicated team of specialists continually maintains and enhances these programs.27
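
One flavor of the cross-checking such a CASE tool performs can be sketched with a single invented rule: every data element a step consumes must have been produced by an earlier step, so nothing is lost, left undefined, or used out of sequence. The rule and data below are illustrative assumptions, not DESIGN/1's actual logic.

```python
"""Sketch of a CASE-style consistency check (hypothetical rule and data):
every data element a step consumes must be produced by an earlier step."""

steps = [  # (step name, data elements produced, data elements consumed)
    ("Capture order", {"order_id", "items"}, set()),
    ("Price order", {"order_total"}, {"order_id", "items"}),
    ("Bill customer", {"invoice"}, {"order_total", "customer_id"}),  # customer_id is never produced
]

available: set[str] = set()
for name, produced, consumed in steps:
    missing = consumed - available
    if missing:
        print(f"Step '{name}' uses undefined data: {sorted(missing)}")
    available |= produced
```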

Design to Requirements

The most common approach to developing operations software is neither as informal as Microsoft’s nor as formal as Andersen’s. The process tends to follow a general sequence (a sketch of one way to capture it appears after the list):

  1. Establish goals and requirements (what functionalities, benefits, and performance standards are sought).
  2. Define the scope, boundaries, and exclusions from the system (the limits of the system).
  3. Establish priorities among key elements and performance requirements (what is needed, highly desired, wanted, acceptable in background, and dispensable if necessary).
  4. Define interrelationships (what data sets, field sizes, flow volumes, and cross-relationships are essential or desirable).
  5. Establish what constraints must be met (in terms of platforms, network topologies, costs, timing, and so on) in designing the system.
  6. Break the total problem down into smaller relatively independent subsystems.
  7. For each subsystem, set and monitor specific performance targets, interface standards, and timing-cost limits, using agreed-on software-test regimes and monitoring programs. Often the design software itself provides the ultimate documentation and discipline for all groups.
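
The sequence above lends itself to a single shared structure that the coordinating group and the independent test designers can work from. The sketch below is one possible representation, with every field name and value assumed purely for illustration.

```python
"""Sketch of how the requirements sequence might be captured in one structure
shared by the coordinating group and test designers (all names are assumptions)."""
from dataclasses import dataclass, field
from enum import Enum

class Priority(Enum):
    NEEDED = 1
    HIGHLY_DESIRED = 2
    WANTED = 3
    ACCEPTABLE = 4
    DISPENSABLE = 5

@dataclass
class Requirement:
    description: str
    priority: Priority

@dataclass
class Subsystem:
    name: str
    performance_targets: dict[str, float]    # e.g., {"max_query_ms": 50}
    interface_standards: list[str]           # agreed data formats and protocols
    test_suite: str                          # agreed-on software-test regime

@dataclass
class SystemPlan:
    goals: list[Requirement]
    scope_exclusions: list[str]
    constraints: dict[str, str]              # platforms, topologies, cost, timing
    subsystems: list[Subsystem] = field(default_factory=list)

plan = SystemPlan(
    goals=[Requirement("Process 10,000 orders per hour", Priority.NEEDED)],
    scope_exclusions=["No support for legacy terminal clients"],
    constraints={"platform": "existing Unix servers", "delivery": "Q4"},
    subsystems=[Subsystem("database", {"max_query_ms": 50},
                          ["SQL schema v1"], "tests/db_suite")],
)
print(f"{len(plan.goals)} goals, {len(plan.subsystems)} subsystems defined")
```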

Because quite dissimilar skills may be needed for each step, different ad hoc teams typically work on the database system, the engine (or platform) system, and the external interface systems. A separate interfunctional group (perhaps under a program manager) usually coordinates activities across divisions or subsystems. Using a combination of software and personalized performance scheduling and evaluation techniques, this group, along with independent test designers, ensures maintenance of task functionalities, component and subsystem performance, time frames, dependencies among tasks, outputs, quality, and priorities. If the software under design has to support existing processes, successful cross-functional teams typically reengineer the processes first, then design the software prototypes while interactively engaging users throughout the full design and implementation process. Higher-level managers need to see that all these processes are in place and operate effectively.

Multiple Interactive Systems

Each design approach has been very useful for its specific innovative purposes. However, self-learning, multiple interactive (database, engine, and customer interface) systems are rapidly changing the entire nature of the discovery-innovation-diffusion process.

In many cases, software now learns from its own experiences and reprograms itself to find new optima. Using built-in decision criteria, the software constantly updates itself based on inputs from exogenous environments. Its learning systems may teach it to take actions directly — as learning-based chess, automated paper production, or stock-trading programs do. Or they may constantly monitor environments and signal humans or other systems to take needed actions when parameters approach learned limits — as in aircraft and nuclear plant emergency programs or in banking credit-card fraud prevention systems.
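
In outline, such a monitor can be as simple as a program that learns the normal range of a parameter from its own history and signals operators when new readings approach the learned limits. The sketch below is a generic illustration under that assumption, not any particular aircraft, plant, or fraud system.

```python
"""Sketch of a self-adjusting monitor: it learns a parameter's normal range
from its own history and signals humans when readings approach learned limits."""
import statistics

class LearnedLimitMonitor:
    def __init__(self, warn_sigma: float = 2.5):
        self.history: list[float] = []
        self.warn_sigma = warn_sigma        # how far from normal triggers a signal

    def observe(self, value: float) -> bool:
        """Record a reading and return True if operators should be alerted."""
        alert = False
        if len(self.history) >= 30:         # only judge after enough experience
            mean = statistics.fmean(self.history)
            sd = statistics.pstdev(self.history)
            if sd and abs(value - mean) > self.warn_sigma * sd:
                alert = True
        self.history.append(value)          # the monitor keeps learning from new data
        return alert

monitor = LearnedLimitMonitor()
for reading in [70.1, 70.4, 69.8] * 12 + [78.9]:    # last reading drifts out of range
    if monitor.observe(reading):
        print(f"Signal operators: reading {reading} approaches learned limits")
```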

In operational applications, self-learning software has proved useful in many flow-process, micromanufacturing (semiconductor), health monitoring, and logistics system designs. It is used daily in retailing, financial, communications, chemical processing, and utility service monitoring systems and provides some of the most important problem and opportunity identification capabilities driving innovation in these fields. In both manufacturing and services, the key to responsive customer-based innovation is to break both operations and markets down into such compatible detail that managers can discern, by properly cross-matrixing their data, how a slight change in one arena can affect some critical aspect of performance in another. The ability to micromanage, target, and customize operations using the knowledge bases that size permits is fast becoming the critical scale economy and opportunity for value creation.
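
In practice, such cross-matrixing can begin with something as simple as correlating a finely measured operational variable against performance measures from another arena. The figures in the sketch below are invented solely to show the mechanics.

```python
"""Sketch of cross-matrixing detailed operations and market data (hypothetical
figures): each pairing relates a small operational variable to a performance
measure in another arena, so managers can spot where a slight change matters."""
from statistics import correlation   # available in Python 3.10+

# Weekly observations per store (assumed data): kitchen prep staffing vs. outcomes.
prep_staff_hours   = [120, 118, 125, 130, 128, 122, 135, 131]
table_turn_minutes = [52, 53, 50, 47, 48, 51, 45, 46]
repeat_visit_rate  = [0.31, 0.30, 0.33, 0.36, 0.35, 0.32, 0.38, 0.37]

outcomes = {"table turn (min)": table_turn_minutes,
            "repeat visits": repeat_visit_rate}

for label, series in outcomes.items():
    r = correlation(prep_staff_hours, series)
    print(f"prep staffing vs. {label}: r = {r:+.2f}")
```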

For example, General Mills Restaurants’ (GMR) sophisticated use of technology has helped it innovate a friendlier, more responsive atmosphere and lower competitive prices in its unique dinner house chains: Red Lobster, Olive Garden, and Bennigan’s. At the strategic level, it taps into the most extensive disaggregated databases in the industry and uses conceptual mapping technologies to define precise unserved needs in the restaurant market. Using these inputs, a creative internal and external team of restaurateurs, chefs, and culinary institutes arrives at a few concept test designs. Using other models derived from its databases, GMR can pretest and project the nationwide impact of selected concepts and even define the specific neighborhoods most likely to support that concept. Other technologies combine to designate optimal restaurant siting and create the architectural designs most likely to be successful.

On an operations level, by mixing and matching in great detail the continuously collected performance data from its own operations and laboratory analyses, GMR can specify or select the best individual pieces and combinations of kitchen equipment for each location. It can optimize each facility’s layout to minimize personnel requirements, walking distances, clean-up times, breakdowns, and operating and overhead costs. Once a restaurant is functioning, GMR has an integrated electronic point-of-sale and operations management system directly connected to headquarters computers for monitoring and analyzing daily operations and customer trends. An inventory, sales-tracking, personnel, and logistics forecasting program automatically adjusts plans, measures performance, and controls staffing levels and products for holidays, time of day, seasonality, weather, special offers, and promotions. All of these lower innovation investments, cycle times, and risks.
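
The adjustment logic in such systems can be approximated by scaling a baseline plan with multipliers estimated from history. The baseline and factors below are assumptions for illustration, not GMR's actual model.

```python
"""Sketch of automatic plan adjustment: a baseline staffing forecast is scaled
by multipliers for holidays, season, weather, and promotions (all assumed)."""

BASELINE_SERVERS = 14                      # normal Friday-evening staffing (assumed)

ADJUSTMENT_FACTORS = {                     # multipliers estimated from history (assumed)
    "holiday": 1.30,
    "peak_season": 1.15,
    "bad_weather": 0.80,
    "promotion_running": 1.20,
}

def staffing_forecast(conditions: dict[str, bool]) -> int:
    """Scale the baseline by every factor whose condition is active."""
    level = float(BASELINE_SERVERS)
    for condition, factor in ADJUSTMENT_FACTORS.items():
        if conditions.get(condition, False):
            level *= factor
    return round(level)

print(staffing_forecast({"holiday": True, "promotion_running": True}))   # prints 22
```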

At the logistics level, using one of the industry’s most sophisticated satellite, earth sensing, and database systems, GMR can forecast and track fisheries (and other food sources) worldwide. It can predict long- and short-term seafood yields, species availability, and prices, and can plan its menus, promotions, and purchases accordingly. It knows its processing needs in such detail that it teaches suppliers exactly how to size, cut, and pack fish for maximum market value and minimum handling costs to GMR, while achieving minimum waste and shipping costs for the supplier. Its software systems have allowed GMR to innovate in ways that others could not.

Conclusion

Software is and will be at the core of most innovation during the next several decades. The World Wide Web has already stirred up imaginative possibilities for a plethora of new markets, products, services, arts, and information potentials, all software-based. These will grow exponentially as more and more minds interconnect to utilize them. But startling as these prospects are, they provide only glimpses of the many opportunities that software innovation presents. Given software’s capacity to learn on its own, create new solutions, deal with inordinate complexities, shorten cycle times, lower costs, diminish risks, and uniquely enhance customer value, effective software management has become the key to effective innovation for any company or institution. Innovators who recognize this fact will have a genuine competitive advantage. Managers who ignore it do so at their companies’ peril.


References

1. Software is a set of instructions designed to modify the behavior of another entity or system. Although one can code molecules to modify pharmaceutical or chemical systems in a predictable fashion, it is primarily information technology software that is changing innovation processes. We will direct our discussion to the latter.

2. For the best single source of the traditional analytics for doing this, see:

P. Smith and R. Reinertsen, Developing Products in Half the Time (New York: Van Nostrand, 1992).

3. D. Weingarten, “Quarks by Computer,” Scientific American, volume 274, February 1996, pp. 116–120.

4. For classic studies of the process, see:

J. Jewkes, D. Sawers, and R. Stillerman, The Sources of Invention (New York: St. Martin’s Press, 1958);

Battelle Memorial Laboratories, “Science, Technology, and Innovation” (Columbus, Ohio: Report to the National Science Foundation, 1973); and

J. Diebold, The Innovators: The Discoveries, Inventions, and Breakthroughs of Our Times (New York: Dutton, 1990).

5. For many thoroughly explained examples, see:

T. Steiner and D. Teixeira, Technology in Banking (Homewood, Illinois: Dow Jones-Irwin, 1990); and

National Research Council, Computer Science and Telecommunications Board, Information Technology in the Service Society (Washington, D.C.: National Academy Press, 1994).

6. For examples and details, see:

“IBM Attacks Backlog,” Computerworld, 11 October 1993, pp. 1, 7;

J. McHugh, “Trilogy Development Group,” Forbes, volume 157, 3 June 1996, pp. 122–128;

“Boeing Overhaul Taking Flight,” Information Week, 26 September 1994, p. 18; and

P. Anderson, “Conquest,” (Hanover, New Hampshire: Amos Tuck School, case, 1996).

7. J.B. Quinn and F.G. Hilmer, “Strategic Outsourcing,” Sloan Management Review, volume 35, Summer 1994, pp. 43–55.

8. Details on numerous examples appear in:

J.B. Quinn, Intelligent Enterprise (New York: Free Press, 1992).

9. Many innovative new organization forms depend heavily on software for their implementation. See:

J.B. Quinn, P. Anderson, and S. Finkelstein, “Managing Professional Intellect: Getting the Most Out of the Best,” Harvard Business Review, volume 74, March–April 1996, pp. 71–80.

10. J. Moore, “The Death of Competition,” Fortune, 15 April 1996, pp. 142–144.

11. P. Roussel, K. Saad, and T. Erickson, Third-Generation R&D (Boston: Arthur D. Little, Harvard Business School Press, 1991).

12. K. Sabbagh, The Twenty-First Century Jet (New York: Scribner, 1996).

13. J. Main, “Betting on the Twenty-First Century Jet,” Fortune, 20 April 1992, pp. 102–104, 108, 112, 116–117.

14. For a description of the electronic processes and interactions with other fields that molecular designs in biotechnology require, see:

B. Werth, The Billion-Dollar Molecule (New York: Touchstone Books, 1995).

15. “Looking for the Evidence in Medicine” (News and Comment), Science, 5 April 1996, pp. 22–24.

16. E. von Hippel, Sources of Innovation (New York: Oxford University Press, 1988).

17. P. Senge, The Fifth Discipline: The Art and Practice of the Learning Organization (New York: Doubleday, 1994).

18. R. D’Aveni, Hypercompetition (New York: Free Press, 1994).

19. J.B. Quinn and M. Baily, “Information Technology: Increasing Productivity in Services,” Academy of Management Executive, volume 8, August 1994, pp. 28–51.

20. The National Research Council found that attempts to build such mega-systems were among the most costly errors that large users had made in installing information technology. See:

National Research Council (1994).

21. J. Pine, Mass Customization: The New Frontier in Business (Boston: Harvard Business School Press, 1993).

22. For an excellent overview of the generally available network software in early 1996, see:

“The Software Revolution,” Business Week, 4 December 1995, p. 78.

23. F. Brooks, The Mythical Man-Month (Reading, Massachusetts: Addison Wesley, 1975).

24. R. Moss Kanter, The Change Masters (New York: Simon & Schuster, 1983); and

J. Utterback, Mastering the Dynamics of Innovation (Boston: Harvard Business School Press, 1994).

25. J. Kotter and J. Heskett, Corporate Culture and Performance (New York: Free Press, 1992).

26. For a more detailed view of this process, see:

H. Mintzberg and J.B. Quinn, “Microsoft (B),” in The Strategy Process (New York: Prentice Hall, 1996).

27. For further details on this process, see:

“Andersen Consulting (Europe),” in Mintzberg and Quinn (1996).

