Critical IT Issues: The Next Ten Years


It’s a Monday morning in the year 2000. Executive Joanne Smith gets in her car and voice activates her remote telecommunications access workstation. She requests all voice and mail messages, open and pending, as well as her schedule for the day. Her workstation consolidates the items from home and office databases, and her “message ordering knowbot,” a program she has instructed, delivers the accumulated messages in the order she prefers. By the time Joanne gets to the office she has sent the necessary messages, revised her day’s schedule, and completed a to-do list for the week, all of which have been filed in her “virtual database” by her “personal organizer knowbot.”

The “virtual database” has made Joanne’s use of information technology (IT) much easier. No longer does she have to be concerned about the physical location of data. She is working on a large proposal for the Acme Corporation today, and although segments of the Acme file physically exist on her home database, her office database, and her company’s marketing database, she can access the data from her portable workstation, wherever she is. To help her manage this information resource, Joanne uses an information visualizer that enables her to create and manage dynamic relationships among data collections. This information visualizer has extended the windows metaphor (graphical user interface) of the early 1990s to three-dimensional graphic constructs.

Papers that predict the form of IT in the year 2000 and how it will affect people, organizations, and markets are in plentiful supply. Scientific American has devoted a whole issue to this subject, describing how the computing and communications technologies of the year 2000 will profoundly change our institutions and the way we work.1 What is missing is a vision of what the IT function in a large organization must become in order to enable this progress. With some trepidation, we will attempt to fill this gap.

In the early 1980s, one of us published a paper that forecasted the IT environment in 1990.2 In this paper, we revisit those predictions and apply the same methodology to a view of the IT environment in the year 2000. We describe the fundamental technology and business assumptions that drive our predictions. Scenarios illustrate how the IT function will evolve in terms of applications, application architectures, application development, management of IT-based change, and economics. Finally, we highlight some key challenges in the next decade for the IT executive and other senior managers.

The 1980 Vision of Today

Table 1 shows to what degree the predictions made in 1980 were realized. The technology predictions tended to be too conservative, and the predictions that required organizational change tended to be too optimistic. They were as follows:

  1. The Rapid Spread of Workstations. Everyone who sits at a desk in a corporation will have a workstation by 1990. Workstations will be as common in the office as telephones. The cost of a supported workstation will be about 20 percent of a clerical worker’s salary and less than 10 percent of a professional’s salary.
  2. The User Interface. The distinction between office systems and end-user systems will disappear. The terminal will be ubiquitous and multifunctional, able to support end-user, data processing, and office systems tasks.
  3. The Distribution of Processing. Databases and processing power within the organization, which are relatively centralized today, will become much more distributed. This distribution will follow some basic rules. Data will be stored at a higher organizational level only if it needs to be integrated at that level. Applications at lower organizational levels will not rely on a staffed data center.
  4. IT Spending. IT spending will increase as a percent of revenue over the decade — by about 50 percent.
  5. Organization of the IT Function. IT management will be concerned with managing the demand for its services rather than rationing its supply. The end user will dominate the use of computing resources. The primary value of the centralized IT function will be to provide interconnectability.
  6. Key Application Drivers. The 1980s will be a decade of integrating applications across functions. Organizational frameworks will be developed to encourage application integration across business functions.
  7. Application Development. All aspects of software will continue to improve steadily. However, the demand for software is so great as to appear infinite, and the improvement will be perceived as having little effect on the backlog.

As the table shows, not all of these predictions were realized. The developments of the 1980s give us clues to how well we’ll progress in the next decade.

IT in the Year 2000 — Best-Case Scenario

Joanne’s computing environment represents a best-case scenario for the year 2000. The essential elements can be described as follows:

  • She has a hierarchy of powerful computing capabilities at her disposal: portable computer, home computer, office computer, and various organizational and information service computers.
  • All the stationary computers are physically interconnected to very high bandwidth public networks. This means they are linked through a medium — fiber optic cable — that allows large amounts of information to be communicated very quickly.
  • Advances in remote telecommunications access technologies allow her to access these resources without a physical connection.
  • She uses sophisticated interfaces that incorporate advanced ergonomic design concepts. That is, the computers are extremely user friendly; they have been designed to fit the way people actually work, even to fit the way individuals work.
  • Knowbots greatly simplify Joanne’s use of information technology. Knowbots are “programs designed by their users to travel through a network, inspecting and understanding similar kinds of information, regardless of the language or form in which they are expressed.”3 Among other functions, they provide the data she wants to look at in the order she wants to look at it.4

In short, the IT environment gives Joanne access to any information, anytime, anywhere, in any presentation mode she desires.

This scenario is technically feasible. All the elements exist either in commercial products or as prototypes. It is highly likely that a sizeable number of Joannes will exist in ten years, that is, that some key knowledge workers will have access to IT resources of the quality described. However, Joanne’s environment is not representative of the typical worker environment we expect for the year 2000. Many organizations will not have progressed so far. How far an organization progresses, and the benefits it obtains from doing so, will depend more upon its ability to identify appropriate strategic goals and to manage change than upon any technical factor. Table 2 summarizes our predictions for the year 2000.

The Driving Assumptions

The year 2000 scenario is based on several assumptions about technology. They are as follows:

Cost Performance Will Improve by Two Orders of Magnitude.

Since the 1960s, the core information technologies have shown cost-performance improvements of between 30 percent and 50 percent per year. If this trend continues through the 1990s, the cost-performance ratios of everything — memories, microprocessors, and so forth — will improve by two orders of magnitude, that is, by at least 100 times. Thus the workstation that can now process 25 million to 100 million instructions per second (MIPS) will be able to process from 500 MIPS to 2,000 MIPS. Instead of the 10 million bytes of primary storage in today’s workstation, it will provide hundreds of millions of bytes of primary storage. And it will have billions of bytes of secondary storage attached, such as disk or optical memory.5 But this workstation will cost the same $10,000 in real dollars that today’s high-performance workstation costs.

These improvements are often indexed to labor costs. If we assume a modest increase in labor costs of 4 percent per year, the total IT cost-performance improvement relative to labor costs will be 2.5 orders of magnitude per decade.6 Because the cost performance of IT continues to improve relative to labor and other forms of capital, companies will continue to invest heavily in it. The lesson of the 1970s and 1980s was that IT was a superior investment when it could replace or augment other forms of capital. In addition, as the power of the technology increased, so did the range of its application to new business situations. These trends can only become stronger in the next decade.
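As a rough check on the compounding arithmetic (the rates below are illustrative assumptions, not the authors’ figures), an annual cost-performance gain of r compounds to (1 + r)^10 over a decade, and measuring that gain against labor whose cost rises about 4 percent per year multiplies the two factors:

\[
(1 + r_{\mathrm{IT}})^{10} \times (1 + r_{\mathrm{labor}})^{10} \;\approx\; 100 \times (1.04)^{10} \;\approx\; 100 \times 1.48 \;\approx\; 150.
\]

Reaching the 2.5 orders of magnitude cited above implies a raw technology gain somewhat above 200 times, which is still consistent with “at least 100 times.”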

All Computers Will Be Interconnected Via Very High Bandwidth Networks.

During the mid- to late-1990s, national and international telecommunications backbone networks that operate at a billion bits per second will be implemented. The initial funding for the first of these networks was assured by passage of the Gore bill in December 1991. The prototype is the National Research and Education Network (NREN) in the United States, which today operates at 45 million bits per second.7 In 1991 NREN for the first time allowed commercial enterprises to access the network.

Within offices, major computing elements will be interconnected with very high-speed local area networks (LAN). Homes will be connected to all of these networks with fiber optic cable, enabling people to work at home and to access a full range of entertainment and educational services. At home and in the office, portable computers will access databases using high-speed remote telecommunications technologies. While traveling, individuals will be connected to backbone networks and the desired databases by remote telecommunications technologies.

To summarize, fixed devices will be connected by fiber, and moving or movable devices will be connected by remote access technologies. Fiber will provide capabilities of up to a billion bits per second, and remote access technologies will provide between 10 million and 100 million bits per second.

Client-Server Will Be the Standard Architectural Model.

By the year 2000, hardware configurations will almost universally follow the client-server model. In this model, the “client” or user operates a workstation that has a certain configuration of hardware and software, and a number of “servers” — mainframes, minicomputers, communications devices, very powerful workstations, novel printing devices, and servers that provide access to other networks — provide the client with supporting services. This model is already increasing in popularity, and it will dominate because of several key advantages:

  • it simplifies the separation of the user interface from the services provided to the user;
  • it eases functional separation of technology and of applications, thus simplifying growth and maintenance;
  • within its current range of application capability, installations are reporting savings of 25 percent to 50 percent over mainframe and minicomputer architectures; and
  • there is an ever-increasing quantity of software for client-server architectures.

Consequently, applications will be distributed across several platforms. That is, programs will be able to share data easily; they will be interconnected and interoperable. Such issues will dominate technology purchase decisions. (See the sidebar, “A General Manager’s Glossary.”)
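As a minimal, hypothetical illustration of this separation (the service name, port, and order record are invented for the example), the sketch below puts the order data and its service behind a defined network interface on the server, while the client program handles only presentation:

```python
# Minimal client-server sketch using Python's standard-library XML-RPC support.
# The service name, port, and order record are illustrative only.
import threading
import time
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# --- Server: owns the data and offers a defined service interface ---
ORDERS = {"A-100": {"customer": "Acme", "status": "scheduled", "due": "2000-03-01"}}

def order_status(order_id):
    """Service routine: return the stored record for an order."""
    return ORDERS.get(order_id, {})

def run_server():
    server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
    server.register_function(order_status, "order_status")
    server.serve_forever()

threading.Thread(target=run_server, daemon=True).start()
time.sleep(0.5)  # give the server a moment to start listening

# --- Client: a workstation program concerned only with presentation ---
proxy = ServerProxy("http://localhost:8000/")
record = proxy.order_status("A-100")   # remote call across the defined interface
print(f"Order A-100 for {record['customer']} is {record['status']}, due {record['due']}")
```

The same separation holds whether the server is a mainframe database, a departmental minicomputer, or another workstation; only the interface is fixed.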

Standards for Interconnection and Interoperability Will Be Developed.

The current confusion in standards for interconnection and interoperability of hardware and software will be significantly improved by the year 2000. Vendors are inevitably coming to the conclusion that their markets will be severely constrained unless they make it substantially easier for users to interconnect and interoperate. (The 1982 paper thought that this realization would have occurred long ago.)

Although not ideal, the level of interconnection will be far superior to where it is today. The vast amount of compute power available will make the required conversions and translations as seamless as possible. Consequently, the user “wizardry” required today to build networks will be much less necessary.

What the final result will be is unclear. Today “open systems” is almost synonymous with the operating system Unix and the government standard version, Posix.8 By 2000, open systems solutions will more likely involve the adoption of key standards that enable a mix of environments to cooperate effectively. Perhaps systems will coalesce around Unix, but we think this is unlikely because it is not in the interests of vendors to have one architecture dominant. Traditionally, organizations have sought stability in their technology investment by standardizing to a single family of operating systems. In the next decade, stability will depend as much or more on standardizing to a single user interface and set of personal support tools.

It is likely that as the necessary standards are developed and accepted, the open solutions will be developed more quickly than proprietary architectures have been. Open solutions change the nature and quality of investment for organizations. Open systems deprive vendors of monopoly profits and create more competition in areas where standards take hold. In order to stay profitable, vendors will have to develop niche products that fit the open systems architecture, and they will have to concentrate on price-performance improvements. The benefit for vendors will be that so many users will be working within the open systems architecture that a much larger potential market will be accessible.

What to Expect

IT executives can expect the following in 2000:

  • The size of the computer will not dictate the use of different application programs for the same task (i.e., there will be a high level of “scalability”). Each major vendor will sell a single computer architecture that spans from the desktop to the largest mainframes. An application programmer will only have to learn and use a single set of tools and standards.
  • There will be similar scalability for small- to moderately large-size applications across vendor architectures in the Unix open systems environment.
  • The highest-performance systems, such as online reservation systems, will continue to operate within the large vendor architecture. Partly this reflects expectations about technology, in particular software robustness and the practicality of distributing massive databases. It also reflects conservatism in systems design and the high risk of making substantial shifts in architectures for core applications.
  • The choice of user interface will be independent of the choice of hardware. NeXT and Sun, for instance, are creating versions of their interfaces to run on IBM PCs.
  • Some services will become standardized and available across architectures, such as file transfer, document mail services, and so forth.
  • A more sophisticated and extensive market in outsourcing and leasing of resources will develop. Companies will be able not only to buy resources outright from corporations but also to pay on a usage basis for basic processing and telecommunications capacity and for software. A significant advantage of adopting and implementing an open or industry standard architecture will be the flexibility this provides for planning capacity and responding to changing demand, that is, the ability to outsource in demand-driven chunks.

What Not to Expect

  • IT executives cannot expect to distribute certain mission critical applications: the very high-volume, real-time inventory management systems, such as airline reservation systems. The largest databases will still require proprietary (i.e., non-relational) solutions to meet performance needs.

Together, these assumptions describe a vast increase in compute power and telecommunications resources, which are made easier to use by wide adoption of client-server architectures and much improved standards for interconnection, display, and data sharing. James Emery, writing in MIS Quarterly, suggests that “these technical advances are rapidly moving us to the position of having a magic genie capable of satisfying almost any logically feasible information need.”9 The issue is to get the genie focused on critical business concerns.

Fundamental Business Drivers

Just as there are fundamental technology drivers, there are fundamental business drivers, and we need to understand how they will influence the IT function. Most articles on IT and business strategy provide a list of key business drivers. Although there are many, we have consolidated them into a few basic ones:

  • The restructuring of the industrial enterprise. This most important of drivers is referred to in many ways — business reengineering, the lean-production paradigm, the total quality company, and so on. What is consistent is that the traditional, mass-production stovepipe organization is adopting a leaner form of production, and the traditional managerial buffers of inventory, time, and staff are being ruthlessly eliminated.10
  • The globalization of business. By the year 2000, we will live in a global market society. This is already true of the financial and automotive markets.
  • The changing global labor market.
  • The increasing volatility of business environments.

Organizations will have to continually test and refine applications and business processes in response to these changes. Companies will need to become tightly interconnected not only internally but also with suppliers and customers. The short supply of labor and skills will force organizations to design better business processes and systems, both within and between organizations, and to make use of extensive expert systems, knowbots, collaborative support, and other capabilities. Malone and Rockart describe how the textile industry, already advanced in integration, could become an electronic marketplace in which databases and consultants are available on-line, and specialty teams form instantly to solve the problem of the moment.11

More organizations will embrace the idea of the “informated” organization; that is, they will use their internal information to learn how to do their processes better.12 The informated organization shifts the locus of control from the manager to the worker, who is empowered by accessible information to exercise this control. Thus, organizations will rely to a much greater extent on the accessibility of information at all levels of the hierarchy. This will conflict with more traditional management processes and structures. The design and use of information systems will not be free of organizational politics as each company decides where and how it will compromise on the issues of managerial control and information accessibility.

In summary, these business changes suggest a sustained growth in new applications (for example, to replace the transaction systems of the stovepipe business, to empower workers in information-rich activities, and to help the less skilled) and a continuing high interest by senior managers in where and how the IS budget is spent.

Applications in 2000

It is difficult to predict the specific new applications that will be most important for the IT function. However, we can identify certain classes of applications and how they will change in the coming decade.

Application Types

To understand the evolution of applications in the next decade, it is useful to consider applications as falling into three categories:

  1. Business Operations Systems. These are the traditional core of the IT function; they have also been described as transaction and control systems. These systems can manage business processes that run in real-time, such as process control, and those that operate on weekly or monthly schedules, such as accounting and settlement systems.
  2. Information Repository Systems. These evolved somewhat later in the history of the IT function, as applications were built that isolated data from processing. Unlike transaction systems, the value and function of these systems is largely determined by the content and organization of the database rather than by the structure of the predesigned interactions.
  3. Personal Support Systems. These have evolved from the end-user support, timesharing systems of the late 1970s to more advanced support systems today. Their evolution has followed a path from personal task systems (e.g., word processing, spreadsheets, and simple databases) to database access and specialized support systems such as design support and executive managerial support. These higher-level support systems have often incorporated personal task systems and electronic mail capabilities. There is a growing belief that these will become collaborative workgroup support systems.

Application Architectures

IT groups charged with developing the information infrastructure have to develop policies and supporting tactical plans to migrate these three categories of applications from the existing base to the level of functionality and integration that we believe will be required in the year 2000.

Business Operations Systems.

IT executives need to understand that business operations systems will get larger and more complex in the coming years in order to respond to pressures to integrate internally and externally, to eliminate wasteful intermediaries, and to speed up business processes. Because of the enormous past investments in these systems, there will be an emphasis in design on building on current capabilities and creating more flexibility.

Organizations may find it advantageous for business operations systems to decompose into two subsets — back office operation and decision support. Consider the order entry process. In the new architecture, the back office component will automatically set up the order in the file, schedule it into production, and assign a delivery date for the customer. The decision support component will give a person the tools to negotiate with the customer regarding the order, terms and conditions, and delivery date. The back office processes will change much less frequently than the decision support processes, thus providing functional isolation and easier maintenance.

This organization cannot be seen as fixed. What is structured and transactional and what is unstructured and conversational change as we uncover the inherent structure in the process. This has implications for application design and for bringing knowledge-based systems into the application mainstream.

Further, the decision support component can “surround” the current data structures and transactions and can be built with little or no modification to them. A few companies have successfully implemented surround applications today. The technology to surround existing applications has been developed by small companies operating at the periphery of the major vendors. Some of these technologies are now being acquired by major vendors. They have considerable potential to manage the legacy problem (that is, figuring out how to deal with the legacy of old technology systems that do not fit current business requirements but that seem too expensive to redo) but they cannot be relied upon to make it go away.
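A minimal sketch of the surround idea follows; the legacy routine, field names, and negotiation rule are hypothetical. The decision-support layer wraps an unmodified back-office transaction, so the negotiable, frequently changing logic lives outside the stable transaction code:

```python
# Sketch of a "surround" application: the legacy back-office routine is called
# as-is, and a decision-support wrapper adds negotiation logic around it.
# The routine, field names, and business rule are illustrative assumptions.
from datetime import date, timedelta

def legacy_enter_order(customer, quantity):
    """Unmodified back-office transaction: books the order and fixes a date."""
    return {"customer": customer, "quantity": quantity,
            "delivery": date.today() + timedelta(days=30)}

class OrderNegotiationSupport:
    """Decision-support surround: proposes terms, then commits via legacy code."""
    def __init__(self, expedite_surcharge=0.05):
        self.expedite_surcharge = expedite_surcharge  # policy that changes often

    def quote(self, requested_date):
        standard = date.today() + timedelta(days=30)
        if requested_date >= standard:
            return {"delivery": standard, "surcharge": 0.0}
        # Earlier delivery is offered at a surcharge: a negotiable term.
        return {"delivery": requested_date, "surcharge": self.expedite_surcharge}

    def commit(self, customer, quantity):
        # Only at commit time does the surround touch the back office.
        return legacy_enter_order(customer, quantity)

support = OrderNegotiationSupport()
print(support.quote(date.today() + timedelta(days=10)))
print(support.commit("Acme", 500))
```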

We observe three trends in business operations systems:

  • IT executives will invest large sums of money in multimedia business operations systems. USAA’s image transaction system is an early example.13 These systems will be able to handle all forms of information used in business processes, from illustrations to voice acknowledgement. Programs that model the human working process, such as voice annotation, will become very important.
  • Systems will be designed to adapt more effectively to unforeseen changes in the business, the operating environment, and the organization. Traditionally, backup procedures have dealt only with failures, and we have thought of backup as a completely different mode of operation. This is no longer adequate. Given the level of change predicted for the future, backup will evolve into a proactive process that will ensure that systems are available at all times.
  • The nature of the legacy systems will have changed, but managing the retirement and replacement of mission critical systems will continue to be near the top of the IT management agenda.

Despite all of these changes, many operational systems developed in the next ten years will be organizational time bombs. Our dependence on information systems is continuing to grow faster than our ability to manage them, and in many organizations responding to immediate needs will divert resources and attention from identifying and implementing quality solutions consistently.

Information Repository Systems.

These will grow rapidly as the concept of the learning organization becomes operationalized. They will (1) be multimedia, (2) provide expert agent assistance (knowbots), (3) come in many levels of aggregation, including very fine line-item detail, and (4) be distributed to where the need for data access is highest. People will be able to define their own virtual repository in terms of other repositories and look to knowbots to find the data that interests them. As Zuboff suggested, much effort will be expended in deciding who has access rights to what data, a critical design and implementation issue.
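A minimal sketch of the “virtual repository” idea follows; the repositories, records, and matching rule are hypothetical. A knowbot-like agent fans one query out to several physical stores and returns a single merged view, so the user never deals with physical location:

```python
# Sketch of a virtual repository: one query fans out to several physical
# stores and comes back as a single collection. Stores, records, and the
# matching rule are illustrative assumptions.
HOME_DB = [{"source": "home", "topic": "Acme proposal", "item": "draft outline"}]
OFFICE_DB = [{"source": "office", "topic": "Acme proposal", "item": "pricing sheet"},
             {"source": "office", "topic": "budget", "item": "Q2 forecast"}]
MARKETING_DB = [{"source": "marketing", "topic": "Acme proposal", "item": "customer history"}]

def virtual_query(topic, repositories):
    """Knowbot-like agent: gather matching records regardless of location."""
    return [record for repo in repositories for record in repo
            if record["topic"] == topic]

for record in virtual_query("Acme proposal", [HOME_DB, OFFICE_DB, MARKETING_DB]):
    print(f"{record['item']} (held in the {record['source']} database)")
```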

Personal and Collaborative Support Systems.

We currently see several contradictory trends that must mutually coexist:

  • Support systems will become more segmented by specialty. That is, software will include more of the intellectual content of tasks, in some cases as components are added to basic packages and in others as specialized applications are developed. For example, a will-writing program is more than just another word processor, but it can be created using one. Similar niche products will appear that are targeted at specific occupations and tasks.
  • Basic support capabilities will become standardized as standard user interfaces and modules become accepted. For example, electronic mail that integrates graphics, voice, and text will become ubiquitous.
  • The distinction between some types of support systems will blur over time. For example, managerial support systems are currently encompassing elements of executive support and decision support systems.
  • Desktop tools, which have lost their “virginal” simplicity and, in the race for product differentiation, have become an almost unmanageable menagerie, will have to be rethought to provide users with the truly flexible capabilities needed to do their work. Mark Weiser describes research at Xerox PARC that is trying to develop what he calls “ubiquitous computing … where you are unaware of it.”14
  • Collaborative work tools will become more important. The first generation of PCs led to development of significant new applications such as spreadsheets. Now that the norm is networked terminals, we should expect new applications that support teamwork to evolve. Already electronic mail, bulletin boards, and conferencing software provide a basic infrastructure for communication. Increasingly we will see software that allows people to work together collaboratively and interactively. Tools that allow two users to display and amend the same document simultaneously will be commonly available. More sophisticated applications may use technology related to “artificial reality” to enable groups to create, share, and manage very large amounts of information.

Integrating Systems with the Business

Currently we are in the third stage of a four-stage evolution of conceptual thinking in the IT function. Each stage is defined by what the IT function has to deliver in order to support the organization effectively. The stages are as follows:

  1. Automation. Initially, application design was directed at automating existing manual systems. Progress could easily be measured by monitoring the systems portfolio. Masses of information were made available, but access was, and largely still is, exceedingly difficult. Much of the data was locked up in files accessed only by particular programs. Information could not be shared across applications or platforms. The dominant method of giving out information to its users was as line printer reports that found their most productive use as children’s drawing paper.
  2. Access to Information. Before automation could be completed, it became clear that we were better at collecting information than disseminating it. Since about 1970, the dominant concern of IT groups has been to reverse this trend. On-line systems replaced batch systems. PCs and workstations are no longer stand-alone devices. Data modeling and database management systems are enabling the integration of information at appropriate levels in order to support the organization’s information needs. The problem of providing secure information access is a dominant driver of IT investment today and will continue to be a major consideration.
  3. Filtering Information. Today, instead of being starved of information, managers and workers are in danger of dying from a surfeit of communication. The average information content of “information” is rapidly falling. When the number of electronic mail messages per day exceeds two hundred, they cease to attract attention. To stay productive, organizations are going to have to invest in the development of knowbots and other forms of active information filtering. If information access is a key driver for current investment, providing the right information filtering capabilities emerges as a major challenge; a minimal sketch of such a filter follows this list.
  4. Modeling Information. When information is accessible and filtered, then the question must be asked, “What do I do with it?” Expert systems, modeling systems, and executive and managerial support systems all have a role to play in modeling information to make it more useful. The application of information models will require a proactive effort far beyond that required to order and filter data, but it will be necessary to ensure a good fit with business processes.
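Returning to the filtering stage: the sketch below shows one simple form of active filtering, in which incoming messages are scored against a user’s stated interests and only those carrying some signal are surfaced, highest score first. The messages, keywords, and weights are illustrative assumptions, not any product’s behavior.

```python
# Sketch of an active information filter: score each incoming message against
# the user's stated interests and present the highest-scoring ones first.
# The messages, keywords, and weights are illustrative assumptions.
INTERESTS = {"acme": 5, "proposal": 3, "schedule": 2}

MESSAGES = [
    {"sender": "marketing", "subject": "Acme proposal pricing update"},
    {"sender": "facilities", "subject": "Parking lot repaving schedule"},
    {"sender": "newsletter", "subject": "Weekly industry digest"},
]

def score(message):
    words = message["subject"].lower().split()
    return sum(weight for keyword, weight in INTERESTS.items() if keyword in words)

for msg in sorted(MESSAGES, key=score, reverse=True):
    if score(msg) > 0:               # suppress messages that carry no signal
        print(f"{msg['sender']}: {msg['subject']} (score {score(msg)})")
```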

Information modeling cannot be managed without bringing together all three application segments — business operations, information repositories, and personal support systems. In addition, information modeling will require development of new models that integrate business process and systems design. Examples of these are Jay Forrester’s system dynamics and Stafford Beer’s work on cybernetic models. These have been around for several decades, but they have not been brought into the IT mainstream. IT has developed mostly static models focused on describing system function and content, even though information systems are only one element in the total system of the organization. The implementation of large-scale applications and new technologies is not only technically complex; it can also change the social and political structure of organizations. Yet often companies commit large sums to fund applications without understanding the full implications of their decision. We do not expect managers to continue to tolerate this level of risk, and what is accepted today as best practice will in the next few years come to be viewed as naive and unprofessional.

Managers need models that include a description of both structure and policy, thus enabling them to explore the implications of change. These models should be integrated with the organization’s operational systems. It will be a challenge for many IT organizations to service this need as it requires mastery of disciplines outside the compass of most IT professionals.

Client-Server Application Model.

This model, shown in Figure 1, integrates the three classes of applications described above. The support systems are in the workstations, and the business operations systems and information repositories are in the servers. The various clients and servers can communicate with each other through standard EDI-like transactions or object references.

This model represents a dramatic break from design concepts that were developed when the mainframe was the dominant information processing technology. IT has long advocated systems and information integration. Consequently it has searched for ever larger, more complex applications that will integrate everything into a single solution. The alternative — designing many smaller applications that can communicate and cooperate — has been considered too difficult, unstable, and probably inferior to a single solution. Client-server architecture is going to force a reevaluation of that trade-off. To give an example: a company and supplier may integrate their complementary processes by communicating via EDI and electronic mail. Today, if the same two functions were within a single company, there would be a tendency to develop a single application with one integrated database. In the future we will reverse this process. Instead of trying to build as much logic as possible into large applications, we are going to break up applications into smaller distributable modules. Defined interfaces will be seen as opportunities, not as barriers. Distributed systems, when well designed, will be reliable and will allow system components to operate asynchronously. Methods of application design that are common in factory and process automation will become common for providing support to traditional business and management processes. Increasingly we will conceptualize information systems as communicating cells, each dependent upon the whole but capable of providing independent support for local tasks and operations.
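A minimal sketch of that decomposition follows; the message format and module names are hypothetical. Two small modules cooperate by exchanging a structured, EDI-like order message through a queue instead of sharing one database, and each side can run on its own schedule:

```python
# Sketch of cooperating application modules: an ordering module and a
# fulfillment module exchange structured messages through a queue rather than
# sharing a single database. The message fields are illustrative assumptions.
import json
import queue

channel = queue.Queue()  # stands in for EDI, electronic mail, or a network link

def ordering_module(customer, quantity):
    """Producer: emit a structured order message and carry on."""
    message = {"type": "ORDER", "customer": customer, "quantity": quantity}
    channel.put(json.dumps(message))   # the message format is the defined interface

def fulfillment_module():
    """Consumer: process whatever well-formed orders have arrived."""
    while not channel.empty():
        order = json.loads(channel.get())
        print(f"Scheduling {order['quantity']} units for {order['customer']}")

ordering_module("Acme", 500)
ordering_module("Globex", 120)
fulfillment_module()   # runs independently of the ordering side
```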

The more advanced companies will achieve integration of the three types of applications through a client-server application model. However, at best this implementation will be only partially achieved by large organizations in the next ten years.

Distribution of Processing.

Computer processing will increasingly be distributed to where the work is done, subject to several constraints:

  • Systems will need to operate at relatively high-performance levels even when segments are down, including the center of the system. For example, when an airline computer reservation system is down at a major airport, the rest of the system must be protected.
  • Recovery will need to be nearly automatic after a breakdown. This is the concept of the self-healing process.
  • Recovery will need to restore high-value information first. Take a system monitoring a multinational portfolio of bank loans. Following a breakdown, the system will recapture information in order of importance, using criteria such as loan size, the risk of breaching the credit limit, and the borrower’s creditworthiness. Transactions will be posted in a sequence that updates the data most critical for decision making first. This implies that the recovery system “understands” the business’s needs; a minimal sketch of such value-ordered recovery follows this list.
  • The operational aspects of managing a data center, such as backup procedures, mounting of tapes, and so on, must be automated. Distributed computing on the scale we are discussing is viable only if operational costs are reduced through automation.
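The sketch below illustrates value-ordered recovery; the loan records and the ranking rule are illustrative assumptions. After a failure, pending updates are replayed in order of business importance rather than in arrival order:

```python
# Sketch of recovery that restores high-value information first: pending
# updates are replayed in order of business importance, not arrival order.
# The loan records and the ranking rule are illustrative assumptions.
import heapq

PENDING_UPDATES = [
    {"borrower": "Smith Ltd.",  "amount": 2_000_000, "near_limit": False},
    {"borrower": "Acme Corp.",  "amount": 9_000_000, "near_limit": True},
    {"borrower": "Jones & Co.", "amount":   500_000, "near_limit": False},
]

def priority(update):
    # Larger loans, and loans close to their credit limit, come back first.
    return -(update["amount"] * (2 if update["near_limit"] else 1))

heap = [(priority(u), i, u) for i, u in enumerate(PENDING_UPDATES)]
heapq.heapify(heap)
while heap:
    _, _, update = heapq.heappop(heap)
    print(f"Reposting {update['borrower']}: ${update['amount']:,}")
```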

To summarize, the distribution of processing will be driven by the economics of computation and telecommunications and by the need to fit processing into organizational work patterns — none of which favor centralized processing. Integrity in large multilocation applications will still demand that master data and updating processes be stored at one or at most a few locations.

Application Development

Our systems today are like London’s phone system. When British Telecom (BT) considered replacing the existing analog exchanges in London with modern digital switches, it found the task almost impossible. During the bombing in World War II, engineers had gone out each day to repair the damage done the previous night. Working against time, they did whatever was needed to get service back. After the war, BT had no plans of the real network, only a map of how it was originally designed. Our current systems have the same relationship to their documentation. Usually, on the day systems are computerized, the documentation is already in error. From then on, through upgrades, maintenance, and emergency patches, we increasingly lose track of how the system operates. The result: critical systems that are expensive to maintain and impossible to replace.

Many companies are working intensively on modular design, reusable code, and client-server architectures, and these contributions are helping us to move away from this trap. Yet these depend upon a revolution in systems development practices.

Application models will contribute to this revolution. Unlike applications, which have the services and data structures embedded in them, application models can be used to generate the services and data infrastructure specific to an organization’s needs. Each “application” will have to conform to the same standards used by the applications with which it shares data. It will not be acceptable to make applications conform by modifying the source code.

PC applications running under common user interfaces such as Windows are already using this concept. They have a special installation routine that identifies other packages and builds appropriate links with them. With the development of multimedia applications and program-to-program protocols such as Dynamic Data Exchange (DDE) and Object Linking and Embedding (OLE) in Windows, and Publish and Subscribe in the Macintosh, these tools are going to become even more complex.

Generally these install routines work bottom up; each application tries to understand its environment and build what it needs. For enterprisewide, distributed applications, top-down models will be needed that understand how the applications are to work together. The key building blocks for these models are the information repositories and object request brokers that the major vendors are just beginning to deliver.15 This is a technology in its infancy, but these models will become key organizational resources for managing a diverse distributed resource.

Another important influence on systems development is computer-aided software engineering (CASE). CASE tools automate many of the necessary tasks that used to involve tinkering with the program code, such as linking data to process requirements. In the future, designers will develop systems using high-level modeling tools, from which code will be generated in one clean step. Systems will have self-documenting tools that automatically keep track of all changes. More important, all maintenance will be done by changing the design model, not the code. Even a failure of a critical production system will have to be rectified with fully auditable tools, without any loss of service. To reach these goals, we will need to adopt such technologies as tight modularization and dynamic linking so that changes can be made incrementally and almost immediately to the system as it operates. The ultimate test of success will be the retirement of all systems programmers’ tools that give direct access to physical memory or disk sectors and that act outside the normal security system.
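The sketch below gives the flavor of maintaining a design model rather than code; the model format and the generated output are illustrative assumptions, not any particular CASE tool. Changing the model and regenerating replaces patching the generated artifacts by hand:

```python
# Sketch of model-driven generation in the spirit of CASE: a small design model
# is the artifact that is maintained, and data-definition and access code are
# regenerated from it. The model format and output are illustrative assumptions.
MODEL = {
    "entity": "Customer",
    "fields": [("customer_id", "INTEGER"), ("name", "TEXT"), ("credit_limit", "REAL")],
}

def generate_table_ddl(model):
    """Generate the SQL table definition from the model."""
    columns = ",\n  ".join(f"{name} {sqltype}" for name, sqltype in model["fields"])
    return f"CREATE TABLE {model['entity']} (\n  {columns}\n);"

def generate_accessor(model):
    """Generate a stub insert routine from the model."""
    params = ", ".join(name for name, _ in model["fields"])
    return (f"def insert_{model['entity'].lower()}({params}):\n"
            "    # Generated accessor: do not edit; change the model and regenerate.\n"
            "    ...")

print(generate_table_ddl(MODEL))
print(generate_accessor(MODEL))
```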

CASE tools have been developed and marketed largely as productivity tools and mostly for systems professionals. Organizations often have not reaped the promised benefits because they have not understood the need to develop new processes for developing systems. Computer-aided engineering (CAE) has had a similar history: The real payoff from CAE became available only when organizations began to see its possibility for changing the relationship between design and manufacturing engineering. Implementing CASE, like implementing CAE, is a severe organizational change problem, and it can be successful only when senior executives are willing to pay the price that complex culture change entails. In a recent meeting of senior IT executives, nearly all conceded that there was little understanding among themselves, their management, their users, and their senior staff that CASE was as essential to their organizations’ success as was CAE.

Given where we are, the best we can expect is that the 1990s will be the decade of CASE the way the 1980s was the decade of CAE. If this is so, we can expect only moderate success in implementing CASE within very large organizations. However, there are examples of small to medium companies using the available tools to develop fairly complete portfolios of systems.16 These companies are the forerunners. They succeed because (1) they make a serious organizational commitment; (2) they either start with a clean slate, or they clearly identify ways to work around legacy systems; and (3) they understand that this is an iterative learning process that will challenge basic assumptions.

Despite the trends toward modular design, reusable code, and client-server architectures, larger systems will continue to be built, and they will be built faster and more accurately. As this happens, it will become abundantly clear that accelerating the rate of technical implementation will only be possible if priority is given to managing the consequent changes in work and organization.

Managing Technology-Based Change

Research from the Massachusetts Institute of Technology Management in the 1990s Program suggests that the major reason that the benefits of IT implementation have been so slow in coming is that organizational change is not adequately managed.17 This thesis seems unarguable to most observers, but there is considerable skepticism that anything can be done about the problem. The reality is that progress must be made on this front if the IT executive is to succeed in this decade.

Successful implementation of systems has never been easy. Laudon comments, “Building an information system, … an on-line, distributed, integrated customer service system, … is generally not an exercise in ‘rationality.’ It is a statement of war or at the very least a threat to all interests that are currently involved in any way with customer service.”18 However, the problem will become more severe in this decade: the technology is allowing us to build ever larger and more complex systems, and supporting interdependent business processes will require those larger and more complex systems. Thus IT will continue to be involved in a change process that, at the same time, it makes more complex. IT complicates the change process in a number of ways: it moves the locus of knowledge and hence power in the organization, it changes the time dimension of processes and decisions, and it enables new organizational constructs to be implemented.

Consequently, organizational issues, resistance, and other change management problems become more pronounced. IT executives need to be aware that there is a body of literature and practice in organizational change that has been and can be applied to their problems and that they need to be the champion for technology-enabled change.19

Even with a commitment to change management, companies are likely to find that people’s inability to change, not technology, is the limiting factor in transforming organizations.

Economic Considerations

The IT executive will need to be aware of some important long-term economic considerations in devising strategies.

1. Technology will be increasingly cheap and equally available to all companies. More software will be available through retail and mail-order suppliers. Even Unix software will be available ready to install on most platforms using vendor-supplied install routines. The same economies in software development and merchandising currently enjoyed for the IBM PC and Apple Macintosh platforms will spread to the other major platforms. Thus, advantage will accrue to those companies that develop improved business processes and decision processes more effectively (cheaper, faster, and of higher quality) than their competitors.

In the year 2000, the cost differential in the acquisition of computer technology will be smaller than today. There will be fewer economies of scale available to larger companies. Also, in the race to increase the power of systems, the cost difference between competing platforms will tend to decrease. All workstations are getting cheaper at roughly the same pace. The absolute cost differential for any size machine will decrease. In addition, chips — the raw material — are likely to become more standardized and shared across product lines. In a commodity business there will be less advantage to selecting one vendor or another.

There will continue to be significant differences in how well companies implement technology and therefore in the benefits they achieve. How well a technology is used is a function of organizational learning. In this sense the choice of vendor will continue to remain crucial — not the hardware vendor, but the systems integrator consultant.

2. Companies will have to make major investments to complete their IT infrastructures and to keep them current. For example, the workstation population can be expected to turn over at least twice in the coming ten-year period owing to technology cost-performance improvements and the availability of new software. Consider a company that is roughly at maximum penetration of one workstation per employee, with a total of 10,000 workstations. Two turnovers imply a minimum capital cost of 20,000 workstations times at least $5,000 per workstation, or $100 million over the decade, irrespective of other infrastructure items. Facing up to the implications of infrastructure completion and reinvestment will not be easy.

IT Function in the Year 2000

Predictions about the future of the IT function are in ample supply.20 The IT function in the year 2000 will most probably continue its evolution as a hybrid — manager of infrastructure and staff advisor to senior executives and user organizations. As Dixon and John note, “IT manages the technology; and line executives manage the application of the technology through partnerships with IT.”21 Learning how to work effectively with all the stakeholders, including vendors, to accomplish the necessary changes will be a major task of the decade.

IT will retain a strategic role because it is the gatekeeper for introducing and integrating new technology and processes. The IT function’s critical knowledge, which is knowing how to navigate a course to technical integration, will evolve to become a mix of technical, business, organizational, and consulting skills.

Key Challenges for the Decade

The initial vision of the future in this paper was deliberately high tech. Most organizations will not be operating at that level, and the major challenge for IT executives will be helping their organizations exploit the technology opportunities.

Although the list could be quite long, we highlight a few key challenges for the IT executive in these interesting years to come:

  • Managing the evolving infrastructure — overseeing the movement to scalable client-server architectures, introducing exciting new enabling technologies, preserving current investments, generating capital to complete the infrastructure and revitalize it as it becomes obsolete, and learning how to operate a worldwide utility that ranks in complexity with moderate-sized telephone companies of today.
  • Managing infrastructure financing — deciding when to take advantage of outsourcing, resource leasing, and other techniques that give the organization access to scalable power on demand without compromising the organization’s development of competitive advantage technology.
  • Moving toward the new application architectures necessary to transform organizational business and decision processes — continuing to distribute function to where work is done, segmenting application logic along client and server lines, and so forth. Some solutions will come from vendors as they upgrade their systems planning and integration methodologies. The most important of these technologies will require the organization to develop its own models that describe its business processes and to link them to its technology systems. This information architecture will be the road map for the systems development process and the anchor for justifying IT investment. Without an understandable information architecture, IT will be unable to bridge the gulf between the new technologies and the business’s strategic directions.
  • Addressing the implications of managed organizational change both for CASE and for reengineered business processes.

CASE is moving rapidly from a future goal to a current critical success factor. The technology will continue to change rapidly. This is going to put the IT organization under considerable stress. Current skills will become obsolete, and the cost structure of IT will be transformed. The senior IT executive has to manage a complete transformation of the function while ensuring quality support for customers. This will not happen through benign neglect. Active strategies for managing the institutionalization of CASE, prototyping the developing technology, and moving up the learning curve until the technology is absorbed by the IT and user organizations will require energy and new skills. Reengineered business processes are technologies that must be transferred into the organization, with similar implications for changes in skills and learning.

Largely missing in organizations today is a person to take responsibility for managing technology-driven organizational change, for learning what can be done and how to apply it, and for acting as a change champion. It may be that the success or viability of the IT organization will depend on how well it fills this vacuum.

  • Managing the new buy-versus-make paradigm. Each company has a history and a culture that make it more or less successful at using packages and at building applications from scratch. The quantity of technology now available and the increasing level of integration mean that most major applications will be hybrids. Successful companies will be those that manage integration most effectively and apply in-house resources to the tasks with the highest payoff.

Overarching all these issues is the fact that no company is an island. As a web of networks develops and people begin to focus on linkages across and outside organizations, key standards will be developed that will come to define “open systems.” Successful IT managers will understand the standards development process and position their organization to benefit from others’ investments.

Surprises

This paper started with the hypothesis that it was possible to make reasonable predictions of the future of IT based upon a few long-term trends. But prospective futurologists are advised to consider the track record of their profession. In many ways the future is bound to surprise us. Yet we can guess where some of the surprises will come from: the areas where there is no useful track record or analog from the past.

Mobile MIPS.

With powerful portable workstations becoming commonplace, how will they be transformed? Will we see special purpose systems targeted to the needs of particular professions or modular designs with plug-in hardware for particular tasks? The ergonomics and economics of personal tools are still maturing.

We do not even pretend to guess the full consequences of the next generation of cellular laptops. Currently, the extra power is being devoted to better interfaces, pen and voice. Yet the cost-performance trend of the technology is such that there will be resources to do significant work. What will that be? Does this enable a new class of independent or franchised professionals who take industry-specific solutions to clients? How will schools integrate the use of portable knowledge bases in classes?

Data — Available, Accessible, and Interconnected.

The amount of accessible data — text, numbers, pictures, voice, and video — in databases is going to explode. The universal data highways will bring a vast array of information to anyone who wishes to tap it. Yet access to information has always been a source of power and influence, and access to megadatabases will change relationships among individuals, organizations, and the state. As a society we are only beginning to understand the practices and ethics of data collection and management. The outcry over Lotus’s MarketPlace system and current concerns about credit reporting systems are examples of the issues to be addressed and the stakes involved. At another level there are likely to be new classes of services and products. In a glut of data there will be a market for editors to sift, choose, compare, validate, and present information, whether those editors are knowbots or people.

Integration.

In combination, the mobility of computing and the availability of vast amounts of data will produce combinations and applications that are truly unpredictable today.

New Systems Development.

What effect will the new systems development tools have on the design of business processes? If we can build systems using flexible, adaptable, and innovative technology, what does that say for the way we change business processes? Are we going to see the end of the big application? Will it be replaced by iterative, evolutionary development of improved processes? Indeed, will the technology of systems development at last put business managers back in control of creating and managing the systems they use?

References

1. Scientific American, September 1991. This issue is devoted to a series of articles on how computers and telecommunications are changing the way we live and work.

2. R.I. Benjamin, “Information Technology in the 1990s: A Long-Range Planning Scenario,” MIS Quarterly, June 1982, pp. 11–31.

3. M.L. Dertouzos, “Communications, Computers, and Networks,” Scientific American, September 1991, pp. 30–37.

4. T.W. Malone, J. Yates, and R. Benjamin, “The Logic of Electronic Markets,” Harvard Business Review, May–June 1989, pp. 166–172.

5. “A Talk with INTEL,” Byte, April 1990, pp. 131–140.

6. J. Yates and R.I. Benjamin, “The Past and Present as a Window on the Future” in The Corporation of the 1990s, M.S. Scott Morton, ed. (New York: Oxford University Press, 1991) pp. 61–92.

7. V.G. Cerf, “Networks,” Scientific American, September 1991, pp. 42–51.

8. Unix, developed by Bell Labs in the early 1970s, is an “operating system, a religion, a political movement, and a mass of committees,” according to Peter Keen. “It has been a favorite operating system of technical experts . . . owing to its ‘portability’ across different operating environments and hardware, its support of ‘multitasking’ (running a number of different programs at the same time), and its building-block philosophy of systems development (building libraries of small ‘blocks’ from which complex systems can be built).” See

P.G.W. Keen, Every Manager’s Guide to Information Technology (Boston: Harvard Business School Press, 1991), pp. 156–157.

9. J.C. Emery, “Editor’s Comments,” MIS Quarterly, December 1991, pp. xxi–xxiii.

10. M.J. Piore and C.F. Sabel, The Second Industrial Divide: Possibilities for Prosperity (New York: Basic Books, 1984); and

J.P. Womack, D.T. Jones, and D. Roos, The Machine That Changed the World (New York: Rawson Associates, 1990).

11. T.W. Malone and J.F. Rockart, “Computers, Networks, and the Corporation,” Scientific American, September 1991, pp. 92–99.

12. S. Zuboff, In the Age of the Smart Machine: The Future of Work and Power (New York: Basic Books, 1988).

13. “Billing Systems Improve Accuracy, Billing Cycle,” Modern Office Technology, February 1990; and

C.A. Plesums and R.W. Bartels, “Large-Scale Image Systems: USAA Case Study,” IBM Systems Journal 23 (1990): 343–355.

14. M. Weiser, “The Computer for the Twenty-First Century,” Scientific American, September 1991, pp. 66–75.

15. Object request brokers are technologies that allow the user to access programs developed by other companies or groups much as the telephone directory allows a user to speak with someone. These tools give more people access to pre-existing solutions. See:

H.M. Osher, “Object Request Brokers,” Byte, January 1991, p. 172.

16. K. Swanson, D. McComb, J. Smith, and D. McCubbrey, “The Application Software Factory: Applying Total Quality Techniques to Systems Development,” MIS Quarterly, December 1991, pp. 567–579.

17. M.S. Scott Morton, ed., The Corporation of the 1990s (New York: Oxford University Press, 1991), pp. 13–23.

18. K. Laudon, A General Model for Understanding the Relationship between Information Technology and Organizations (New York: New York University, Center for Research on Information Systems, January 1989).

19. See E.H. Schein, Innovative Cultures and Organizations (Cambridge, Massachusetts: MIT Sloan School of Management, Working Paper No. 88-064, November 1988); and

E.H. Schein, Planning and Managing Change (Cambridge, Massachusetts: MIT Sloan School of Management, Working Paper No. 88-056, October 1988).

20. J.F. Rockart and R. Benjamin, The Information Technology Function of the 1990s: A Unique Hybrid (Cambridge, Massachusetts: MIT Sloan School of Management, Center for Information Systems Research, Working Paper No. 225, June 1991); and

E.M. Von Simson, “The ‘Centrally Decentralized’ IS Organization,” Harvard Business Review, July–August 1990, pp. 158–162.

21. P.J. Dixon and D.A. John, “Technology Issues Facing Corporate Management in the 1990s,” MIS Quarterly, September 1989, pp. 247–255.

