Next time you read that some industrial accident or disaster was caused by a worker error, be a little skeptical. That’s because, in such situations, “if there are human operators in the system, they are most likely to be blamed for an accident,” observes Nancy Leveson, MIT Professor of Aeronautics and Astronautics and Professor of Systems Engineering, in a new book she has been writing and posting online as a “living book” that she updates. For example, in one seven-year period in the late nineteenth century, about 16,000 railroad workers were killed and 170,000 were disabled while coupling trains — a toll that railroad managers simply attributed to worker error. Eventually, the practice of having workers do this task by hand was outlawed by the government — and the fatality rate dropped accordingly.
What’s more, Leveson, an expert on accident prevention, thinks traditional thinking about the causes of industrial accidents is limiting, because its model is a chain of events leading back to the cause of the accident. (Think of a row of dominoes, cascading.) In designing today’s complex, automated systems, she believes a better model involves thinking about why accidents occur rather than about specific causes. Companies should think in terms of developing systems that impose appropriate safety constraints — constraints that both prevent unsafe behaviors and ensure that, as the system evolves, it evolves only in ways that are safe.