West J Med. Jun 2000; 172(6): 393–396.
PMCID: PMC1070929

Human error: models and management

The problem of human error can be viewed in 2 ways: the person approach and the system approach. Each has its model of error causation, and each model gives rise to different philosophies of error management. Understanding these differences has important practical implications for coping with the ever-present risk of mishaps in clinical practice.


Person approach

The long-standing and widespread tradition of the person approach focuses on the unsafe acts—errors and procedural violations—of people on the front line: nurses, physicians, surgeons, anesthetists, pharmacists, and the like. It views these unsafe acts as arising primarily from aberrant mental processes such as forgetfulness, inattention, poor motivation, carelessness, negligence, and recklessness. The associated countermeasures are directed mainly at reducing unwanted variability in human behavior.

These methods include poster campaigns that appeal to people's fear, writing another procedure (or adding to existing ones), disciplinary measures, threat of litigation, retraining, naming, blaming, and shaming. Followers of these approaches tend to treat errors as moral issues, assuming that bad things happen to bad people—what psychologists have called the “just-world hypothesis.”1


System approach

The basic premise in the system approach is that humans are fallible and errors are to be expected, even in the best organizations. Errors are seen as consequences rather than causes, having their origins not so much in the perversity of human nature as in “upstream” systemic factors. These include recurrent error traps in the workplace and the organizational processes that give rise to them.

Countermeasures are based on the assumption that although we cannot change the human condition, we can change the conditions under which humans work. A central idea is that of system defenses. All hazardous technologies possess barriers and safeguards. When an adverse event occurs, the important issue is not who blundered, but how and why the defenses failed.


Evaluating the person approach

The person approach remains the dominant tradition in medicine, as elsewhere. From some perspectives, it has much to commend it. Blaming individuals is emotionally more satisfying than targeting institutions. People are viewed as free agents capable of choosing between safe and unsafe modes of behavior. If something goes wrong, a person (or group) must have been responsible. Seeking as much as possible to uncouple a person's unsafe acts from any institutional responsibility is clearly in the interests of managers. It is also legally more convenient, at least in Britain.

Nevertheless, the person approach has serious shortcomings and is ill-suited to the medical domain. Indeed, continued adherence to this approach is likely to thwart the development of safer health care institutions. Although some unsafe acts in any sphere are egregious, most are not. In aviation maintenance—a hands-on activity similar in many respects to medical practice—about 90% of quality lapses were judged blameless.2

Effective risk management depends crucially on establishing a reporting culture.3 Without a detailed analysis of mishaps, incidents, near misses, and “free lessons,” we have no way of uncovering recurrent error traps or of knowing where the edge is until we fall over it. The complete absence of such a reporting culture within the Soviet Union contributed crucially to the Chernobyl disaster.4 Trust is a key element of a reporting culture, and this, in turn, requires the existence of a just culture—one possessing a collective understanding of where the line should be drawn between blameless and blameworthy actions.5 Engineering a just culture is an essential early step in creating a safe culture.
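One practical payoff of a reporting culture is that pooled near-miss reports can reveal recurrent error traps that no single report makes visible. The fragment below is a minimal sketch of that kind of aggregation, assuming a hypothetical list of reports tagged with contributing factors; the field names and example data are invented for illustration and are not drawn from the article.

```python
from collections import Counter

# Hypothetical near-miss reports; each lists the conditions the reporters
# felt contributed to the event. Fields and values are invented examples.
reports = [
    {"unit": "ICU", "factors": ["look-alike vials", "night shift", "understaffing"]},
    {"unit": "ER",  "factors": ["look-alike vials", "interruption"]},
    {"unit": "ICU", "factors": ["unworkable checklist", "understaffing"]},
    {"unit": "OR",  "factors": ["look-alike vials", "time pressure"]},
]

def recurrent_error_traps(reports, min_count=2):
    """Count contributing factors across all reports and keep those that recur."""
    counts = Counter(factor for report in reports for factor in report["factors"])
    return [(factor, n) for factor, n in counts.most_common() if n >= min_count]

for factor, n in recurrent_error_traps(reports):
    print(f"{factor}: appears in {n} of {len(reports)} reports")
# "look-alike vials" (3 reports) and "understaffing" (2 reports) surface as
# recurrent traps, although no individual report looks remarkable on its own.
```

Even this toy example shows why blame-free reporting matters: the pattern emerges only if staff are willing to file the individually unremarkable reports in the first place.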

Another serious weakness of the person approach is that by focusing on the individual origins of error, it isolates unsafe acts from their system context. As a result, 2 important features of human error tend to be overlooked. First, it is often the best people who make the worst mistakes—error is not the monopoly of an unfortunate few. Second, far from being random, mishaps tend to fall into recurrent patterns. The same set of circumstances can provoke similar errors, regardless of the people involved. The pursuit of greater safety is seriously impeded by an approach that does not seek out and remove the error-provoking properties within the system at large.


The Swiss cheese model of system accidents

Defenses, barriers, and safeguards occupy a key position in the system approach. High-technology systems have many defensive layers: some are engineered (alarms, physical barriers, automatic shutdowns), others rely on people (surgeons, anesthetists, pilots, control room operators), and yet others depend on procedures and administrative controls. Their function is to protect potential victims and assets from local hazards. They are mostly effective at this, but there are always weaknesses.

In an ideal world, each defensive layer would be intact. In reality, they are more like slices of Swiss cheese, having many holes—although, unlike in the cheese, these holes are continually opening, shutting, and shifting their location. The presence of holes in any one “slice” does not normally cause a bad outcome. Usually this can happen only when the holes in many layers momentarily line up to permit a trajectory of accident opportunity—bringing hazards into damaging contact with victims (figure). The holes in the defenses arise for 2 reasons: active failures and latent conditions.
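The lining-up effect can be made concrete with a few lines of simulation. The sketch below assumes a handful of independent defensive layers, each with a small, purely illustrative probability that a hole is open at any given moment; the layer count and figures are assumptions for the example, not data from the article. An accident opportunity arises only on the rare occasions when every layer happens to have a hole at the same time.

```python
import math
import random

# Illustrative, assumed probabilities that each defensive layer (alarms,
# people, procedures, and so on) has an open "hole" at a given moment.
HOLE_PROBABILITIES = [0.05, 0.10, 0.02, 0.08]

def trajectory_penetrates(probabilities):
    """True only when holes in every layer momentarily line up."""
    return all(random.random() < p for p in probabilities)

def estimate_accident_rate(probabilities, trials=2_000_000):
    """Monte Carlo estimate of how often an accident trajectory gets through."""
    hits = sum(trajectory_penetrates(probabilities) for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    analytic = math.prod(HOLE_PROBABILITIES)   # about 8 chances per million moments
    simulated = estimate_accident_rate(HOLE_PROBABILITIES)
    print(f"Analytic rate of lined-up holes:  {analytic:.1e}")
    print(f"Simulated rate of lined-up holes: {simulated:.1e}")
```

With these assumed numbers, individual layers are imperfect between 2% and 10% of the time, yet the whole defense is penetrated only about 8 times per million moments, which is why a hole in any one slice rarely causes a bad outcome on its own.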

Nearly all adverse events involve a combination of these 2 sets of factors. Active failures are the unsafe acts committed by people who are in direct contact with the patient or system. They take a variety of forms: slips, lapses, fumbles, mistakes, and procedural violations.6 Active failures have a direct and usually short-lived effect on the integrity of the defenses. At Chernobyl, for example, the operators violated plant procedures and switched off successive safety systems, thus creating the immediate trigger for the catastrophic explosion in the core. Followers of the person approach often look no further for the causes of an adverse event once they have identified these proximal unsafe acts. But, as discussed later, virtually all such acts have a causal history.

Latent conditions are the inevitable “resident pathogens” within a system. They arise from decisions made by designers, builders, procedure writers, and top-level management. Such decisions may be mistaken, but they need not be. All such strategic decisions have the potential for introducing pathogens into the system. Latent conditions have 2 kinds of adverse effect: they can translate into error-provoking conditions within the workplace (for example, time pressure, understaffing, inadequate equipment, fatigue, and inexperience), and they can create long-lasting holes or weaknesses in the defenses (untrustworthy alarms and indicators, unworkable procedures, design and construction deficiencies). Latent conditions—as the term suggests—may lie dormant within the system for many years before they combine with active failures and local triggers to create an accident opportunity.

Unlike active failures, whose specific forms are often hard to foresee, latent conditions can be identified and remedied before an adverse event occurs. Understanding this leads to proactive rather than reactive risk management. To use another analogy: active failures are like mosquitoes. They can be swatted one by one, but they still keep coming. The best remedies are to create more effective defenses and to drain the swamps in which they breed. The swamps, in this case, are the ever-present latent conditions.
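To make the distinction concrete, the sketch below extends the earlier layer model with two assumed ingredients: a persistent baseline hole probability in each layer (the latent conditions) and a transient extra probability that appears only a small fraction of the time (an active failure at the sharp end). All figures are illustrative assumptions; the point is simply that lowering the latent baselines, draining the swamp, reduces accident opportunities at every moment, whereas the active-failure spikes are brief and hard to anticipate.

```python
import math

# Assumed, illustrative figures: per-layer baseline hole probabilities created
# by latent conditions, plus an extra transient probability contributed by an
# active failure that is present only a small fraction of the time.
LATENT_BASELINE = [0.05, 0.10, 0.02, 0.08]
ACTIVE_SPIKE = 0.20      # added to the front-line layer while an active failure is present
SPIKE_FRACTION = 0.01    # active failures are short-lived: present 1% of the time

def accident_rate(baselines, spike=0.0):
    """Probability that holes line up in every layer at a given moment."""
    probs = list(baselines)
    probs[0] = min(1.0, probs[0] + spike)  # the spike widens the front-line hole
    return math.prod(probs)

def long_run_rate(baselines):
    """Time-averaged rate, mixing quiet moments with active-failure moments."""
    quiet = accident_rate(baselines)
    spiked = accident_rate(baselines, ACTIVE_SPIKE)
    return (1 - SPIKE_FRACTION) * quiet + SPIKE_FRACTION * spiked

drained = [p / 2 for p in LATENT_BASELINE]  # remedy latent conditions in every layer

print(f"Long-run rate with current latent conditions: {long_run_rate(LATENT_BASELINE):.1e}")
print(f"Long-run rate after draining the swamp:       {long_run_rate(drained):.1e}")
```

Under these assumptions, halving the latent baselines cuts the long-run accident-opportunity rate by more than an order of magnitude, a larger and more durable gain than anything achieved by chasing individual spikes. The comparison is schematic, but it captures why latent conditions are the natural target of proactive risk management.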


Error management

In the past decade, researchers into human factors have been increasingly concerned with developing the tools for managing unsafe acts. Error management has 2 components: limiting the incidence of dangerous errors and—since this will never be wholly effective—creating systems that are better able to tolerate the occurrence of errors and contain their damaging effects. Whereas followers of the person approach direct most of their management resources to trying to make individuals less fallible or wayward, adherents of the system approach strive for a comprehensive management program aimed at several targets: the person, the team, the task, the workplace, and the institution.3

High-reliability organizations—systems operating in hazardous conditions that have fewer than their fair share of adverse events—offer important models for what constitutes a resilient system. Such a system has intrinsic “safety health”; it is able to withstand its operational dangers and still achieve its objectives.


Just as medicine understands more about disease than health, so the safety sciences know more about what causes adverse events than about how they can best be avoided. In the past 15 years or so, a group of social scientists based mainly in Berkeley, California, and at the University of Michigan in Ann Arbor has sought to redress this imbalance by studying safety successes in organizations rather than their infrequent but more conspicuous failures.7,8 These success stories involved nuclear aircraft carriers, air traffic control systems, and nuclear power plants (see box). Although such high-reliability organizations may seem remote from clinical practice, some of their defining cultural characteristics could be imported into the medical domain.

Most managers of traditional systems attribute human unreliability to unwanted variability and strive as far as possible to eliminate it. In high-reliability organizations, it is recognized that human variability in the shape of compensations and adaptations to changing events represents one of the system's most important safeguards. Reliability is “a dynamic nonevent.”7 It is dynamic because safety is preserved by timely human adjustments; it is a nonevent because successful outcomes rarely call attention to themselves.

High-reliability organizations can reconfigure themselves to suit local circumstances. In their routine mode, they are controlled in the conventional hierarchic manner. But in high-tempo or emergency situations, control shifts to the experts on the spot—as it often does in the medical domain. The organization reverts seamlessly to the routine control mode once the crisis has passed. Paradoxically, this flexibility arises in part from a military tradition—even civilian high-reliability organizations have a large proportion of ex-military staff. Military organizations tend to define their goals in an unambiguous way and, for these bursts of semiautonomous activity to be successful, it is essential that all the participants clearly understand and share these aspirations. Although high-reliability organizations expect and encourage variability of human action, they also work hard to maintain a consistent mindset of intelligent wariness.8

Perhaps the most important distinguishing feature of high-reliability organizations is their collective preoccupation with the possibility of failure. They expect to make errors and train their workforce to recognize and recover them. They continually rehearse familiar scenarios of failure and strive hard to imagine novel ones. Instead of isolating failures, they generalize them. Instead of making local repairs, they look for system reforms.


High-reliability organizations are the prime examples of the system approach. They anticipate the worst and equip themselves to deal with it at all levels of the organization. It is hard, even unnatural, for individuals to remain uneasy over the long term, so their organizational culture takes on a profound importance. Individuals may forget to be afraid, but the culture of a high-reliability organization provides them with both the reminders and the tools to help them remember. For these organizations, the pursuit of safety is not so much about preventing isolated failures, either human or technical, as about making the system as robust as is practicable in the face of its human and operational hazards. High-reliability organizations are not immune to adverse events, but they have learned the knack of converting these occasional setbacks into enhanced resilience of the system.

Summary points

  • There are 2 approaches to the problem of human fallibility: the person approach and the system approach
  • The person approach focuses on the errors of individuals: forgetfulness, inattention, or moral weakness
  • The system approach concentrates on the conditions under which people work and tries to build defenses to avert errors or mitigate their effects
  • High-reliability organizations, which have fewer accidents, recognize that human variability is a force to harness in averting errors, but they work hard to focus that variability and are preoccupied with the possibility of failure

Figure 1
The Swiss cheese model of how defenses, barriers, and safeguards may be penetrated by an accident trajectory


Competing interests: None declared.

This article was originally published in BMJ 2000;320:768-770


1. Lerner MJ. The desire for justice and reactions to victims. In: McCauley J, Berkowitz L, eds. Altruism and Helping Behavior. New York: Academic Press; 1970.
2. Marx D. Discipline: the role of rule violations. Ground Effects 1997;2: 1-4.
3. Reason J. Managing the Risks of Organizational Accidents. Aldershot, UK: Ashgate; 1997.
4. Medvedev G. The Truth About Chernobyl. New York: Basic Books; 1991.
5. Marx D. Maintenance Error Causation. Washington, DC: Federal Aviation Administration, Office of Aviation Medicine; 1999.
6. Reason J. Human Error. New York: Cambridge University Press; 1990.
7. Weick KE. Organizational culture as a source of high reliability. Calif Management Rev 1987;29: 112-127.
8. Weick KE, Sutcliffe KM, Obstfeld D. Organizing for high reliability: processes of collective mindfulness. Res Organizational Behav 1999;21: 23-81.
