Information Failures and Organizational Disasters

INTELLIGENCE: RESEARCH BRIEF: Vigilance is the key to avoiding potential organizational nightmares.

In September 2004, Merck & Co. Inc. initiated the largest prescription drug withdrawal in history. After more than 80 million patients had taken Vioxx for arthritis pain since 1999, the company withdrew the drug because of an increased risk of heart attack and stroke. As early as 2000, however, the New England Journal of Medicine had published the results of a Merck trial showing that patients taking Vioxx were four times as likely to have a heart attack or stroke as patients taking naproxen, a competing drug. Yet Merck kept the product on the market for four more years, despite mounting evidence from a variety of sources that use of Vioxx was problematic. Not until 2004, when Merck was testing whether Vioxx could be used to treat an unrelated disease, did the company decide to withdraw the drug, after an external panel overseeing the clinical trial recommended stopping it because patients on Vioxx were twice as likely to have a heart attack or stroke as those on a placebo.

Merck’s voluntary withdrawal of Vioxx is emblematic of how most organizational disasters incubate over long gestation periods, during which errors and warning signs build up. While these signals become painfully clear in hindsight, the challenge for organizations is to develop the capability to recognize and treat these precursor conditions before they spiral into failure. Research on the topic provides a theoretical basis for explaining why such disasters happen and highlights information practices that can reduce the risk of catastrophic failure.

Why Catastrophic Accidents Happen

While human error often precipitates an accident or crisis in an organization, focusing on human error alone misses the systemic contexts in which accidents occur and can happen again (Reason, 1997). Perrow’s Normal Accident Theory (1999) maintains that accidents and disasters are inevitable in complex, tightly coupled technological systems, such as nuclear power plants. In such systems, unexpected interactions between independent failures, combined with tight coupling between subsystems, propagate and escalate initial failures into a general breakdown, a combination that makes accidents seem inevitable, or “normal.” Rasmussen (1997) argues that accidents can happen when work practices drift or migrate under the influence of two sets of forces: the desire to complete work with a minimum expenditure of mental and physical energy (which moves work practices toward least effort) and management pressure (which moves work practices toward a minimum expenditure of resources). The combined effect is that work practices drift toward, and perhaps beyond, the boundary of safety.

Can Organizational Disasters Be Foreseen?

The surprising answer is yes. For example, Turner and Pidgeon (1997) analyzed 84 official accident reports published by the British government over an 11-year period, discovering that disasters develop over long incubation periods, during which important warning signals fail to be noticed “because of erroneous assumptions on the part of those who might have noticed them; because there were information handling difficulties; because of a cultural lag in precautions; or because those concerned were reluctant to take notice of events which signaled a disastrous outcome.” Three types of information problems are crucial to understanding why these signals are often ignored or disregarded.

Signals are not seen as warnings because they are consistent with organizational beliefs and aspirations. During the 1990s, Enron Corp. created an online trading business that bought and sold contracts for energy products, believing that success required it to have access to significant lines of credit in order to settle its contracts and to reduce large fluctuations in its earnings that affected its credit ratings. To address these financial needs, Enron developed a number of practices, including “prepays,” an “asset light” strategy and the “monetizing” of its assets. Because finding parties that were willing to invest in Enron assets and bear the significant risks involved was difficult, Enron began to sell or syndicate its assets — not to independent third parties but to “unconsolidated affiliates” (U.S. Senate, 2002). These affiliates were not on Enron’s financial statements but were so closely associated with the company that their assets were considered part of Enron’s own holdings. When warning signals about these questionable methods began to appear, board members were not worried because they saw these practices as part of the way of doing business at Enron. In the end, the board knowingly allowed Enron to move at least $27 billion (or almost half its assets) off the balance sheet, thus precipitating the decline and the eventual collapse of the energy giant.

Warning signals are noticed but those concerned do not act on them. In February 1995, one of England’s oldest merchant banks was bankrupted by $1 billion of unauthorized trading losses. The Bank of England report on the collapse of Barings Bank concluded that “a number of warning signs were present” but that “individuals in a number of different departments failed to face up to, or follow up on, identified problems” (Great Britain Board of Banking Supervision and George, 1995). In mid-1994, an internal audit of Baring Futures (Singapore) Pte. Ltd. sent to company executives flagged as unsatisfactory the fact that Nick Leeson was in charge of both the front office and the back office at BFS and recommended separating the two roles. Yet by February 1995, nothing had been done to segregate duties at BFS. In January 1995, the Singapore International Monetary Exchange alerted BFS to a possible violation of SIMEX rules, but still there was no follow-up investigation into these concerns. During all this time, Barings in London continued to fund BFS’s trading, and senior management continued to approve the funding requests without question, even as the level of funding increased and the lack of information persisted. Trading losses ballooned quickly, and the insolvent bank was sold for £1 in March 1995.

Groups have partial information and interpretations, and no one has a view of the situation as a whole. In August 2000, Bridgestone/Firestone Inc. announced a recall of more than 6.5 million tires, mostly mounted on Ford Explorers, because of accidents caused by tire treads separating from the tire cores. In mid-1998, however, an insurance firm had already informed the National Highway Traffic Safety Administration of a pattern of tread separation in Firestone ATX tires. Later that year, Ford Motor Co. noted problems of tread separation in Firestone tires on Explorers in Venezuela, and in 1999, Ford replaced tires affected by tread-separation problems on Explorers sold in Saudi Arabia. Ford and Firestone began to blame each other as outside safety concerns about Explorer tires intensified. In May 2000, the NHTSA launched a formal investigation into alleged tread separation on Firestone tires. Three months later Firestone announced the recall.

Guarding Against Catastrophic Failure

There are strategies that an organization can adopt to raise its information vigilance (Choo, in press; MacIntosh-Murray and Choo, in press). At the individual level, people in organizations should be aware of biases in how information is used to make judgments. For example, Kahneman and Tversky (2000) found that how a situation is framed can affect the perception of risk. An executive choosing between options framed as possible gains will tend to select the alternative that offers the more certain gain over a riskier one. When the same options are framed in terms of possible losses, however, the executive will tend to select the riskier option in the hope of reducing losses. Research has also identified other information biases that are common among business executives: They prefer information that confirms their actions and abilities; they are apt to feel overconfident about their judgment; and they tend to be unrealistically optimistic (Lovallo and Kahneman, 2003).

When a course of action has gone very wrong, and objective information indicates that withdrawal is necessary to avoid further losses, many executives nevertheless persist, often pouring in additional resources (Staw and Ross, 1987). Although past decisions are sunk costs that are irrecoverable (Arkes and Blumer, 1985), they still weigh heavily on the consciousness of executives, often because of a reluctance to admit errors to themselves or to others. If facts challenge the viability of a project, executives often find reasons to discredit the information. If the information is ambiguous, they may select favorable facts that support the project. Culturally, persistence is associated with strong leaders who stay the course, and withdrawal is often viewed as a sign of weakness. How can executives know if they have crossed the line between determination and over-commitment? Staw and Ross (1987) suggest they ask themselves these questions: Do I have trouble defining what would constitute failure? Would failure in this project radically change the way I think of myself as a manager? If I took over this job for the first time today and found this project going on, would I want to get rid of it?

At the group level, a group’s ability to make decisions involving risk can be compromised by groupthink and group polarization. Groupthink occurs when members hide or discount information in order to preserve group cohesiveness (Janis, 1982). The group overestimates its ability and morality, closes its mind to contradictory information and applies pressure to maintain conformity. Group polarization happens when a group collectively makes a decision that is riskier than the one each member would have made individually (Stoner, 1968).

Groupthink and group polarization can be controlled. To overcome conformist tendencies, the leader should be impartial, avoid stating preferences at the outset and create a group environment that encourages the frank exchange of dissimilar views. To counter closed-mindedness, the group should actively seek information from outside experts, including those who can challenge the group’s core views. The group could divide into multiple subgroups that work on the same problem under different assumptions. One member could play the role of devil’s advocate, looking out for missing information, doubtful assumptions and flawed reasoning.

At the organizational level, companies need to cultivate an information culture that not only recognizes and responds to unexpected warning signals but also enables the organization to contain or recover from incipient errors. Research on high-reliability organizations (such as nuclear aircraft carriers and hospital emergency departments, which do risky work but remain relatively accident-free) reveals that these organizations depend on a culture of collective mindfulness (Weick and Sutcliffe, 2001). Such organizations observe five information priorities: They are preoccupied with the possibility of failure, and so they encourage error reporting, analyze experiences of near misses and resist complacency. They seek a complete and nuanced picture of any difficult situation. They are attentive to operations at the front line, so that they can notice anomalies early, while they are still tractable and can be isolated. They develop capabilities to detect, contain and bounce back from errors, creating a commitment to resilience. Finally, they push decision-making authority to the people with the most expertise, regardless of rank.

Ultimately, a vigilant information culture is a continuing set of conversations and reflections about safety and risk, backed up by the requisite imagination and political will to act. Perhaps the most important condition of an alert information culture is for senior management to make vigilance and safety an organizational priority. Where there is a fundamental understanding that serious errors are a realistic threat, there can be the resolve to search for, and then treat, the precursor conditions. Reason (1997) has pointed out that constant unease is the price of vigilance, or as Andrew Grove of Intel Corp. so famously observed, “Only the paranoid survive.”

Reprint #: 46303
