
Henriksen K, Battles JB, Keyes MA, et al., editors. Advances in Patient Safety: New Directions and Alternative Approaches (Vol. 3: Performance and Tools). Rockville (MD): Agency for Healthcare Research and Quality (US); 2008 Aug.

Minding the Gaps: Creating Resilience in Health Care


Resilience is the intrinsic ability of a system to adjust its functioning prior to, during, or following changes and disturbances so that it can sustain required operations, even after a major mishap or in the presence of continuous stress. As an emergent property of systems that is not tied to tallies of adverse events or estimates of their probability, resilience provides the means for organizations to target resource investments by integrating safety and productivity concerns. Resilience engineering (RE) can enable an organization to cope with and recover from unexpected developments, such as maintaining the ability to adapt when demands go beyond an organization’s customary operating boundary. Understanding resilience makes the difference between organizations that inadvertently create complexity and miss signals that risks are increasing, and those that can manage high-hazard processes well. We discuss two examples of resilience: the response of an emergency department staff to surges in patient volume and design improvements to the infusion device control/display interface.

Introduction

Economic pressures to make a system leaner can increase the complexity of interactions among its elements, tighten their coupling,1 and lead to a system that the slightest disruption can render dysfunctional. The current U.S. air transportation system is a case in point, as a snowstorm in the Northeast can disrupt air travel in the Southwest.

Current Notions of Health Care Safety

Standardization and automation are two of the currently popular notions about how to improve safety and performance in health care. However, resources that appear superfluous in normal operations may have latent value that is realized during crises. Combined with economic pressures, initiatives that seek to simplify and lean down organizations whittle away reserves, buffers, and other undervalued resources, making it difficult for an organization to tap them when new demands arrive.

Resilience engineering is a new approach to this problem that strives to identify and correctly value behaviors and resources that contribute to a system’s ability to respond to the unexpected. Put another way, efforts to lean down organizations risk suffering from what an economist would term “cost externalization.” For example, a coal-fired power plant does not have to pay for the environmental effect of the acid rain its emissions cause. In a similar manner, the resources that are needed for resilient adaptation may appear to be redundant. Eliminating those resources can be seen as savings, when in reality there may be unforeseen future costs. Resilience engineering attempts to identify and combat this sort of externalization.

The way we think about systems, system performance, and their outcomes evolves as new insights become available. Hollnagel2 has suggested that our understanding of adverse events and their causes evolves through time as we develop and use new ways of thinking about how accidents happen. In the 1960s, technology and equipment were often cited as the attributable cause of adverse events. Attributions to human performance peaked over the past 40 years, while attributions to the organization have recently been on the increase.

Notions of what to do to improve health care follow such perceptions about systems. The vogue for process re-engineering, for example, reflects a bias toward attributing the cause for adverse events to the organization. Few of these notions, though, are based in scientific study. Lack of system knowledge in health care leaves it without the necessary tools to understand the deeper forces that mold daily operations.

Efforts to improve health care without a basis in science do more damage than good by making systems unable to change in response to circumstances—what Sarter, et al.,3 term “brittle.” For example, Ash, et al.,4 found that health care information technology systems that are intended to reduce errors can also foster them. In another instance, Perry, et al.,5 found that the introduction of tighter procedures that were intended to improve glycemic (blood sugar) monitoring ironically had the opposite result. In a further example, efforts to standardize between-shift handoffs6 clashed with the initiatives that clinicians had developed to cope with the complexity, variety, and uncertainty in their work domain.7 Such interventions are not benign; instead, they induce unforeseen outcomes. They waste time, attention, and resources that could be spent more productively. They also delay progress toward genuine improvement.

System-Level Safety

No system has infinite adaptive capacity. Patterns in the way that a system responds to disrupting events provide information about its limits and about how the system behaves when events push it near to or over those boundaries.8 As a service sector, health care can be understood according to how it responds to changes in demand for output over time.9 Demand for care varies widely in volume and type. Resources that are available to respond to demand (e.g., clinicians, beds in acute care facilities, and time) are in limited supply and constrained in various ways. If demand exceeds a system's capacity to respond, three kinds of response may be observed, as Table 1 shows and as the sketch following the list illustrates.

Table 1. Health care response to demand.


  1. Limited response, with rapid recovery, in which the system is designed to continue on at normal output levels. An emergency department (ED) that experiences a large influx of patients might increase throughput by recruiting additional resources, such as borrowing clinicians from other duties. While this kind of adaptation could not continue indefinitely, operations are able to continue because of it.
  2. Matched response with protracted recovery, in which the system meets increased demand, is degraded for a short time afterward, and then returns to normal output levels. The same ED faced with an extended flow of patients beyond its capacity might extend shifts, work double shifts, or call in ED clinicians who are post-call. After such a surge, it would take days until the staff could return to normal.
  3. Different demand from usual, calling for a different set or scale of resources that requires a sustained change to the system. The ED faced with a continual excess of patients might look for a different way to buffer demand. Noting that many of their patients’ symptoms resolve after an extended stay in the waiting room, management might agree to make an additional room available to place patients for observation. The change would require recruitment of further staff and facility resources, making it a sustained change.
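
A minimal way to make the three classes concrete is to treat a unit such as an ED as a queue whose hourly capacity can be stretched temporarily by recruiting extra resources. The following Python sketch uses invented numbers (baseline capacity, surge capacity, arrival patterns) purely for illustration; it is not drawn from the chapter's data.

    # Illustrative sketch with invented numbers: an ED as a queue whose capacity
    # can be stretched from a baseline to a surge level by recruiting resources.
    def backlog_over_time(arrivals_per_hour, base=10, surge=14):
        """Track untreated patients when hourly capacity can stretch to `surge`."""
        backlog, history = 0, []
        for arrivals in arrivals_per_hour:
            load = backlog + arrivals
            capacity = base if load <= base else surge   # stretch only when needed
            backlog = load - min(capacity, load)
            history.append(backlog)
        return history

    # Class 1: a brief spike is absorbed and the backlog clears within the hour.
    print(backlog_over_time([8, 8, 16, 8, 8]))           # -> [0, 0, 2, 0, 0]
    # Class 2: a sustained surge leaves a backlog that takes hours to work off.
    print(backlog_over_time([8, 16, 16, 16, 16, 8, 8]))  # -> [0, 2, 4, 6, 8, 2, 0]
    # Class 3: demand persistently above surge capacity cannot be worked off
    # without a structural change (for example, an added observation room).
    print(backlog_over_time([16] * 6))                   # -> [2, 4, 6, 8, 10, 12]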

Authentic improvements to clinical performance and to patient safety must rely on understanding the underlying forces that shape the work environment. For example, in Figure 1, Cook and Rasmussen10 demonstrate how health care organizations exist at an operating point (the circled dot symbol) within an envelope that is bounded by economic failure, unacceptable workload, and acceptable performance. Management exerts pressure to increase efficiency in order to avoid economic failure. Workers try to find a sustainable level of effort that is sufficient to accomplish tasks and avoid an unacceptable workload.

Figure 1. Influences on a system’s operating point. Source: Cook and Rasmussen, 2005. Copyright © 2008 Richard Cook. Reproduced with permission.

Pressures to increase productivity and avoid excessive workload push the system operating point away from the boundaries of economic failure and work overload and towards unacceptable performance. Crossing the boundary of unacceptable performance results in an adverse outcome, or accident.

Organizations seek to create a boundary of operations that allows for variable performance without causing loss. However, gradients to reduce workload and improve efficiency continually push an organization’s operating point ever closer to the boundary. Understanding where one’s operating point is, relative to the margin, requires an organization to cultivate a keen awareness of its operations and variability in performance.
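
A rough numerical rendering of this drift is sketched below in Python, using made-up coordinates and gradient strengths rather than anything from Cook and Rasmussen's model: the operating point moves a little each period under the two pressures, and the remaining distance to the acceptable-performance boundary is the margin the organization must monitor.

    # Hypothetical sketch of Figure 1's dynamics: both gradients (toward greater
    # efficiency and toward less effort) push the operating point the same way,
    # so the margin to the acceptable-performance boundary erodes unless it is
    # actively monitored and defended. All numbers are invented.
    efficiency_pressure = 0.04     # drift per month away from economic failure
    least_effort_pressure = 0.03   # drift per month away from work overload
    performance_boundary = 1.0     # crossing this means unacceptable performance

    operating_point = 0.55         # arbitrary starting position in the envelope
    for month in range(1, 9):
        operating_point += efficiency_pressure + least_effort_pressure
        margin = performance_boundary - operating_point
        warning = "  <- margin nearly exhausted" if margin < 0.10 else ""
        # A negative margin corresponds to crossing the boundary (an accident).
        print(f"month {month}: margin = {margin:5.2f}{warning}")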

Effective organizations constantly look for signs of how the organization actually operates and use this information to stay well calibrated.11, 12 Studies of high- and low-reliability organizations have documented the problems created when organizations are poorly calibrated with respect to their operating point. Management that correctly understands the operations of a system is also likely to correctly estimate how well its strategies will work when unforeseen challenges occur.

The remainder of this chapter defines resilience and resilience engineering. Two examples of resilience in health care are provided. The first shows how ED staff members create resilience through the strategies they employ in response to changes in demand for care. The second describes a concept for an infusion device interface, demonstrating how equipment design can improve resilience.

Health Care and Resilience

There is broad agreement that, in fundamental ways, the health care system is not working. This may be the result of large-scale changes in health care needs, combined with efforts to meet those changes that have met with only varying success.

The U.S. health care systems serve a population that has experienced a decline in acute conditions and a rise in chronic conditions, such as heart disease, human immunodeficiency virus (HIV), methicillin-resistant Staphylococcus aureus (MRSA), and drug-resistant tuberculosis. Chronic conditions tend to require more complex medical interventions. Those interventions, supported by technology, increase the risk of misadventures.13 Studies of safety in high-hazard sectors,14, 15 such as the military and aviation, typically address system-level issues that mold the nature of daily operations and account for success and failure. While some attention has been focused on health care at the systems level, most recent efforts engage safety at a lower level: process redesign or safety engineering.15, 16 This is due in large part to the lack of systems safety skills and knowledge in the field, as well as to conventions in health care about what constitutes acceptable scientific activity. It is also due to the practical convenience (if not expediency) of dealing with concrete issues one by one, instead of trying to understand the larger situation.

The cottage industry structure of the national health care delivery system results in what Reid, et al.,17 term “disconnected silos of function and specialization.” An estimated 60 million patients in the United States suffer from two or more chronic conditions and are particularly affected by this disconnection among clinical care specialties. Connectivity, integrated care, and coordination are inadequate nationwide at all stages of illness treatment.18 As evidence of this breakdown, Asch, et al.,19 polled 6,712 randomly selected patients who visited a medical office within a 2-year period in 12 metropolitan areas including Boston, Miami, and Seattle. Those patients received only 55 percent of recommended steps for top-quality care among 439 measures, ranging from common chronic and acute conditions to disease prevention.

Health experts blame the overall poor care in the United States on an overburdened, fragmented system that fails to keep close track of patients with an increasing number of multiple conditions.20 Such outcomes beg for an approach that speaks to these problems in a substantive, systematic manner.

Resilience. Resilience is the ability of systems to mount a robust response to unforeseen, unpredicted, and unexpected demands and to resume or even continue normal operations. As an emergent property of systems, resilience is not tied to tallies of adverse events or estimates of their probability. The notion of resilience frees safety research from hindsight bias21 by making it possible to understand how workers anticipate possible adverse outcomes and act in advance to avert them. This is what the U.S. Navy terms “being forehanded.”22

Health care seeks to provide a seamless continuum as the patient transitions among care providers from presentation to diagnosis to treatment and to followup. Gaps in the continuity of care threaten a patient’s well-being and introduce the potential for adverse events.23 Gaps in care continuity are evidence that the health care system is unable to respond with sufficient output to meet demand. Whether, or how, a system responds to fill such gaps indicates its resilience. Signs of gap-filling adaptations (e.g., clinician initiatives and improvements to equipment design) indicate the classes of disruptions or demands that occur and the sources of resilience that are present to help accommodate demands for care.24, 25

Understanding resilience. Resilience provides the means for organizations to target resource investments by integrating safety and productivity concerns. Woods and Wreathall26 have proposed an approach to modeling resilience based on an analogy from materials engineering: the stress-strain curve. In a manner similar to traditional materials performance models, the approach uses the relationship between stress (the varying loads placed on a mechanical structure) and the resulting strain (how the structure stretches in response) to understand how an organization responds to the demands placed on it.

Examining the way a joint cognitive system of people and machines responds to different demands on work makes it possible to describe it: in other words, to plot how the system stretches in response to changes in demand. One use of the stress-strain approach is to guide how organizations search for information and to provide a means to integrate the results into an overall picture of changes in adaptive capacity. Wreathall27, 28 and Wreathall and Merritt29 have tried to select sets of indicators that map onto aspects of resilience. Such measures point to the onset of gaps in normal work practices as pressures grow and reveal where workers develop gap-filling adaptations to compensate. These indicators are chosen to reveal circumstances in which management may be unaware of such challenges, either in terms of changing demands or in terms of the need for workplace adaptations. They can also reveal situations in which management may be overconfident and current plans may not suffice for the changing demand profile.
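
The stress-strain analogy can be pictured as a load-versus-response curve with two regions: a uniform region, where planned resources absorb added demand, and an extra region beyond a yield point, where the organization stretches only through gap-filling adaptations. The Python sketch below is a schematic rendering of that picture with invented parameters; it is not Woods and Wreathall's model.

    # Schematic stress-strain analogy with invented parameters. Below the yield
    # point, planned ("first-order") resources absorb demand roughly linearly;
    # beyond it, response depends on extra, gap-filling adaptations and the
    # organization stretches much more for each added unit of demand.
    def strain(demand, yield_point=50.0, planned_capacity=1.0, extra_capacity=0.4):
        if demand <= yield_point:
            return demand / planned_capacity, "uniform region"
        stretch_beyond_yield = (demand - yield_point) / extra_capacity
        return yield_point / planned_capacity + stretch_beyond_yield, "extra region"

    for demand in (20, 45, 60, 80):
        amount, region = strain(demand)
        print(f"demand {demand:3d} -> organizational stretch {amount:6.1f} ({region})")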

Resilience engineering (RE) is a recent development in risk assessment and system safety.30 RE accounts for the manner in which people at all levels of an organization can try to anticipate paths that might lead to failure, create and sustain strategies that are resistant to failure, and adjust tasks and activities to maintain margins in the face of pressure to do more and to do it faster.31 A resilient organization can anticipate, cope with, recover from, and learn from unexpected developments by maintaining its ability to adapt when demands go beyond the organization’s customary operating boundary. RE provides tools to manage safety by assessing changes in the adaptive capacity of an organization as it confronts disruptions, change, and pressures.


Examples of Resilience in Health Care

Two examples, drawn from actual work in the clinical setting, demonstrate principles of resilience in action: the response of an ED staff to surges in patient volume, and improvements to the design of equipment so that it performs as a “team player” among clinicians.

Emergency department response. New patient flows and hospital management responses to financial and other pressures have left EDs brittle; they are less able to respond resiliently when accumulating or cascading demands push their operation into the second, “extra” region of the stress-strain analogy described above.32 The system has to stretch in response to increasing demands to avoid an accumulation of gaps that would lead to a system failure. Individuals and groups make it possible for the ED to stretch by adjusting their strategies and recruiting resources to provide additional adaptive capacity. This stretching requires extra work, extra resources, and new strategies.

EDs are well-defined physical units in hospitals. Functionally, though, they are ill-defined, open systems. The ED workers’ physical span of control is limited to reasonably small distances (i.e., less than 100 feet). Very large EDs, such as the one discussed in this case, are typically subdivided by function into smaller units. For example, this ED is divided into five contiguous units, including trauma care, pediatric care, severe illness, and mild illness. The fifth unit is reserved simply to hold admitted patients (“boarders”) for whom no bed is available in the hospital. The event described here took place in the 5-bed trauma unit and the 21-bed acute care unit of the ED. The two units are physically adjacent; each is staffed by a separate group of nurses, while a shared set of physicians flows back and forth between them.

As Table 2 shows, staff members in this ED use four adaptive strategies to cope with the different levels of challenge they face in their daily work:

Table 2. ED staff member strategies.

  1. A routine day, in which the system operates under usual conditions that practitioners describe as “run of the mill.” The system anticipates changes outside the routine and adapts in a way that is apparently seamless.
  2. As load and demands increase, a key individual recognizes system degradation and initiates adaptive responses. For example, practitioners identify and reorganize additional resources, such as buffering capacity, in order to manage the challenges and maintain performance at near normal levels. Adaptations in these two settings include readily available solutions to the expected, normal, and natural troubles that workers have learned by experience and word-of-mouth.33 For the most part, these adaptations are performed skillfully and unconsciously (almost invisibly).34 They are the usual solutions (e.g., putting admitted patients in the hallway to make room for new patients) that make it possible to contain the usual problems within a horizon of tractability.33
  3. Demands increase to the point that the required adaptations occur at the level of the whole department. In this extreme situation, the demands on the organization may cross the horizon of tractability. This challenges its ability to sustain operations and risks escalation to a breaking point. Practitioners have to recognize and anticipate the trend and reorganize activities and resources at the same time as they struggle to handle patient load. Making the deliberate decision to forego care for all but life-threatening illness is an example of what some practitioners have described as a “free fall.”35

The final class is qualitatively different from the previous three ordered classes:

  • 4. Health care organizations plan for but rarely experience catastrophic events such as mass casualties or natural disasters. These rare but significant occurrences require a complete reorganization of work in their wake. In the absence of an unambiguous external trigger, health care organizations are reluctant to shift to this fourth strategy.

The following real-life example shows how ED staff members employed multiple strategies that increased the resilience of their operations. Recently, at the start of the evening shift (15:00), the ED was boarding 43 patients; 28 of these patients filled the unit reserved for boarders; the remaining 15 were split among the acute care areas and the hallway. The use of the hallway as additional treatment space is an example of resilient adaptation at the departmental, as opposed to the individual, level. This procedure was first used several years earlier. By now, it had become part of normal operations, representing an organizational reconfiguration to establish a new equilibrium.

All four of the acute care unit’s critical care bays were filled with admitted patients on ventilators. The unit was approaching limits to seamless adaptation. As the shift change rounds began, the ED received a critically ill ambulance patient. Over the course of the next 4 hours, five more critically ill patients arrived and required ventilator support and other intensive measures. This was in addition to multiple additional patients who were seriously, but not critically, ill (e.g., chest pain suggestive of heart attack).

All treatment spaces and all temporary spaces to hold stretchers were filled. The staff identified and employed additional resources. The unit ran out of stretchers and began to place incoming patients in chairs near the nursing station. Congestion was severe, making it physically difficult to move around in the treatment area. This was a particular problem when new critical patients arrived, as they needed to go to spaces outfitted with particular equipment for treatment. Patients who occupied those spaces had to be moved to other locations on very short notice. By this point, the staff could only deal with patients who had life-threatening illnesses. The staff later described this situation as a feeling of “free fall”—i.e., a disorganized situation in which they did not know the numbers, types, or problems of the patients in their unit.

The crisis continued until approximately 22:00. By that time the staff felt they had finally gained control of the situation. They had regained a clear picture of which patients were present, where they were located, and at least a vague idea of the nature of their problems. The system had stabilized, and the staff could return to “run of the mill” operations. As far as is known, no adverse events were associated with this episode.

Here, conditions beyond the range of previous operating experience exceeded the horizon of tractability.34 The resources and coping strategies that would normally provide resilience against variation and the unexpected became exhausted. Workers were compelled to invent new strategies on the fly. They were also driven to make sacrifice decisions, abandoning lower-level goals in order to preserve higher ones and regain control of the situation.

Equipment and information system design. Complex equipment and information systems can also contribute to brittleness or resilience. Misperceptions about user-device interaction have substantial consequences for clinical work. Acute care settings, particularly critical treatment areas such as the intensive care unit (ICU) and the ED, house large collections of complex electronic information devices. Information technology (IT) systems are often installed in an attempt to fix problems that are actually embedded in the social organization.36 Recent reports of failures37 due to automation surprises indicate that IT systems demonstrate brittle properties that result from poor understanding of the work settings they are intended to support. Opaque systems that offer poor feedback and low observability undermine resilience and increase brittleness. There is a need to create new visualizations that provide improved feedback and high observability to help people recognize when events challenge plans currently in progress.

How can IT, including information systems38 and infusion devices,39 be created so that it can adapt to the fluid, variable clinical health care work setting? In the context of research, design, and development, design bears the responsibility for connecting the adaptive power of people as goal-directed agents to technologic capability.40 People actively manage the dynamic characteristics of their workplace, drawing on a deep knowledge of their work domain to create and use artifacts.41 Cognitive artifacts42 take the form of physical items, such as order forms, checklists, and schedules, and of digital equivalents, e.g., the control and display interfaces for information systems and equipment. Artifacts embody only the essential elements of a work domain.43 This makes them useful both for understanding44 work domains and for deriving design guidance for the IT systems intended to support cognitive work. It is a design approach that works from the user to the system, not the other way around.

The creation of better equipment and information systems makes it easier for workers to anticipate future opportunities and problems ahead of time. How can IT systems be configured in order to support such an approach? Klein, et al.,45 have proposed 10 traits that IT systems need in order to participate in any highly adaptive human work domain. These are 10 challenges for automation to participate in joint activity—extended actions carried out by an ensemble of people who are coordinating with each other—that set a longer term agenda for IT system development. Six of those traits inform the following example of how IT can follow these principles in order to develop a more resilient infusion pump interface.

  1. Have the ability to adequately model other participants’ actions vis-à-vis the joint activity’s state and evolution. Be able to coherently manage mutual responsibilities and commitments to facilitate recovery from unanticipated problems.
  2. Be mutually predictable: the mental act of seeing ahead, with the frequent practical implication of preparing for what will happen.
  3. Be directable. Be able to deliberately recognize and modify one’s own actions as conditions and priorities change.
  4. Be able to make pertinent aspects of their status and intentions obvious to their teammates. Make targets, states, capacities, intentions, changes, and upcoming actions obvious.
  5. Have the ability to observe and interpret signals of status and intentions. Be able to signal and form models of teammates.
  6. Enable a collaborative approach.

Medical devices, such as infusion pumps, increasingly feature complex control and display interfaces. Even highly experienced clinicians who have used infusion devices for years get “lost in menuspace” when they perform simple tasks.46 Most infusions in U.S. hospitals are now delivered by infusion pumps,47 making this device the most widely used IT-controlled equipment in the acute care environment.

Microprocessor-based infusion devices are associated with significant clinical accidents, resulting in patient morbidity and mortality. Problems with current commercially available infusion devices arise from the complexity of clinical care and the need to handle complex infusion programming through a simple interface. This simplicity creates gaps in necessary knowledge about the state of a patient’s infusion. The disorientation and confusion it causes make it difficult to adapt to changes in patient care. This is brittleness in action. Following a 5-year study of commercially available infusion devices, we have developed a concept that provides necessary features that current pumps lack.

The display concept in Figure 2 illustrates how an interface can provide information about device display and control through time, showing operating history, current state, and implications for the future. Including context information makes it possible to interpret device behavior in terms of its clinical use.

Figure 2. Infusion device interface supporting resilience. Copyright © 2008 Cognitive Technologies Laboratory. Reproduced with permission.

In this example, a pediatric patient is receiving an infusion of dextrose that was started at 08:07 and is programmed to be completed at 10:07. At this point (09:10) the infusion is about halfway completed. The display shows volume/time (rate) parameters, current and recent system status, and the expected course of the infusion if current program settings are maintained.

The device controls remain fixed in the display center, while the data scroll from right to left as time passes. The graphic representation makes it possible for clinicians to use pattern recognition to determine how infusions are programmed and progressing. Alphanumeric characters provide values for discrete variables that are necessary for accuracy.

Because the display is predictive, a clinician can recognize the dose-limit errors that plague current infusion displays programmed using only numbers. Additional information (indicated by “i” symbols) can be displayed that coincides with the treatment timeline. For example, the “i” at the lower left indicates blood glucose test results that were reported at 08:06. This overlay of therapeutic activity with results makes it possible for the clinician to make more informed decisions about patient care.
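
The projection behind such a display is simple arithmetic on the programmed rate and volume, which is also where a dose-limit check naturally lives. The Python sketch below reuses the times from the example above (a two-hour infusion started at 08:07, viewed at 09:10) and adds hypothetical volume, rate, and dose-limit values chosen only for illustration.

    # Sketch of the arithmetic behind a predictive infusion display. The clock
    # times match the example above; volume, rate, and the dose limit are
    # hypothetical values.
    from datetime import datetime, timedelta

    start = datetime(2008, 8, 1, 8, 7)      # infusion started at 08:07
    now = datetime(2008, 8, 1, 9, 10)       # display viewed at 09:10
    volume_ml = 100.0                       # hypothetical volume to be infused
    rate_ml_per_hr = 50.0                   # hypothetical programmed rate
    dose_limit_ml_per_hr = 75.0             # hypothetical limit for this patient

    duration = timedelta(hours=volume_ml / rate_ml_per_hr)
    projected_end = start + duration        # 10:07 for these values
    fraction_done = (now - start) / duration

    print(f"projected completion: {projected_end:%H:%M}")   # 10:07
    print(f"fraction completed:   {fraction_done:.0%}")     # about halfway
    if rate_ml_per_hr > dose_limit_ml_per_hr:                # dose-limit check
        print("WARNING: programmed rate exceeds the dose limit")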

The interface concept reflects many of the 10 traits identified by Klein, et al.,45 making it better suited to work jointly with clinicians. Making clinical and programming information explicit makes team coordination easier and prevents coordination breakdowns. Providing past, current, and anticipated states and making connections with related data, such as lab results, makes it easier to recover from unanticipated problems. Showing projected values helps clinicians see ahead and prepare for what will happen. Controls make it possible to explore contingencies before committing to a final decision. This enables the clinician to evaluate multiple options and make trade-off decisions. Integrating controls with displayed information makes it possible to deliberately assess and modify programmed infusion actions as conditions and priorities change. The combination of graphic and alphanumeric information makes pertinent aspects of the device target, status, capacities, programming intentions, and upcoming actions obvious to members of the clinical team. These are the kinds of observable and controllable traits that would improve IT support for health care.
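
One way software could support the “explore contingencies before committing” and observability traits described above is to keep a proposed program separate from the committed one, so its projected consequences can be previewed and the change history stays visible to the whole team. The Python sketch below is a hypothetical data model written to illustrate that idea; it is not the interface concept shown in Figure 2.

    # Hypothetical sketch: a proposed infusion program is staged and previewed
    # before it is committed, and committed changes are logged where teammates
    # can see them. Names and fields are illustrative only.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Program:
        drug: str
        rate_ml_per_hr: float
        volume_ml: float

        def hours_remaining(self, infused_ml: float) -> float:
            return max(0.0, (self.volume_ml - infused_ml) / self.rate_ml_per_hr)

    @dataclass
    class InfusionChannel:
        committed: Program
        infused_ml: float = 0.0
        proposed: Optional[Program] = None
        history: List[str] = field(default_factory=list)  # visible to the team

        def propose(self, program: Program) -> float:
            """Stage a change and return its projected hours remaining (preview only)."""
            self.proposed = program
            return program.hours_remaining(self.infused_ml)

        def commit(self) -> None:
            """Apply the staged change and record it for teammates to review."""
            if self.proposed is not None:
                self.history.append(f"{self.committed} -> {self.proposed}")
                self.committed, self.proposed = self.proposed, None

    channel = InfusionChannel(Program("dextrose 10%", 50.0, 100.0), infused_ml=52.0)
    print(channel.propose(Program("dextrose 10%", 25.0, 100.0)))  # preview: ~1.9 h left
    channel.commit()                                              # change takes effect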

Improving the compatibility between infusion pumps and work requirements is not a matter of fixing a particular aspect of a particular design, such as making type larger. Instead, it is a matter of developing a new approach to representations that aid the work of clinicians who perform infusions. A new design needs to follow the principles of Klein, et al.,45 to make the pump’s operation evident, demonstrate implications of current programming for the future, and make it possible for others (in addition to the clinician who programmed the pump) to make informed decisions in light of this information.

Conclusion

Current research on resilience seeks to clarify how resilience works, where it comes from, and what factors facilitate or impede it. These and other active steps can improve the ability of health care systems to respond adequately to increasing demands and to avoid an accumulation of discrete, well-intentioned adjustments that can detract from organizational efficiency and reliability. This makes the difference between organizations that inadvertently create complexity and miss signals that risks are increasing and those that can successfully manage high-hazard processes.

Acknowledgments

Dr. Nemeth’s research is supported by the Agency for Healthcare Research and Quality (AHRQ), Rockville, MD.

References

1.
Perrow C. Normal accidents. Princeton, NJ: Princeton University Press; 1999.
2.
Hollnagel E. Barrier analysis and accident prevention. Aldershot, UK: Ashgate Publishing; 2004. p. 46.
3.
Sarter N, Woods D, Billings C. Automation surprises. In: Salvendy G, editor. Handbook of human factors and ergonomics. New York: John Wiley and Son; 1997. pp. 1926–43.
4.
Ash J, Berg M, Coiera E. Some unintended consequences of information technology in health care: The nature of patient care information system-related errors. J Am Med Inform Assoc. 2004;11:104–112. [PMC free article: PMC353015] [PubMed: 14633936]
5.
Perry S, McDonald S, Anderson B, et al. Ironies of improvement: Organizational factors undermining resilient performance in healthcare. In: Nemeth C, editor. Symposium on resilience in human systems. Proceedings of the IEEE International Conference on Systems, Man and Cybernetics; Montreal. 2007 Oct 7–10; pp. 3413–3417.
6.
Critical access hospital and hospital national patient safety goals: 2E. Oakbrook Terrace, IL: The Joint Commission; 2006. [Accessed April 27, 2008]. Available at: www.jointcommission.org/GeneralPublic/NPSG/06_npsg_cah.htm.
7.
Nemeth C, Nunnally M, O’Connor M, et al. Regularly irregular: How groups reconcile cross-cutting agendas in healthcare. In: Nemeth C, editor. Second special issue on large scale coordination, cognition, technology and work. Vol. 9. 2007. pp. 139–148.
8.
Rasmussen J, Pejtersen AM, Goodstein L. Cognitive systems engineering. New York: Wiley; 1994.
9.
Cook R. A very simple resilience definition? (No!) Model? (No!) Example!. Presentation to 2nd International Symposium on Resilience Engineering; 2006 Nov 8–10; Juan Les Pins, FR. 2006.
10.
Cook R, Rasmussen J. Going solid: A model of system dynamics and consequences for patient safety. Qual Saf Health Care. 2005;14:130–134. [PMC free article: PMC1743994] [PubMed: 15805459]
11.
Weick KE, Sutcliffe KM, Obstfeld D. Organizing for high reliability: Processes of collective mindfulness. In: Sutton RI, Staw BM, editors. Research in organizational behavior. Vol. 21. Stamford, CT: JAI Press; 1999. pp. 81–123.
12.
Reason J. Managing the risks of organizational accidents. Brookfield, VT: Ashgate; 1997.
13.
Sutcliffe K, Vogus T. Organizing for resilience. In: Cameron KS, Dutton IE, Quinn RE, editors. Positive organizational scholarship. San Francisco: Berrett-Koehler; 2003. pp. 94–110.
14.
Fagerhaugh SY. Hazards in hospital care. San Francisco: Jossey-Bass; 1987. pp. 3–4.
15.
Pew R, Mavor AS, editors. Human system integration in the system development process: A new look. Washington, DC: National Academies Press; 2007.
16.
2008 National patient safety goals. Oakbrook Terrace, IL: The Joint Commission; [Accessed April 27, 2008]. Available at: www.jointcommission.org/PatientSafety/NationalPatientSafetyGoals/
17.
Reid P, Compton WD, Grossman JH, et al., editors. Building a better delivery system: A new engineering healthcare partnership. Washington, DC: National Academies Press; 2005. [PubMed: 20669457]
18.
Auerbach AD, Landefeld CS, Shojania KG. The tension between needing to improve care and knowing how to do it. N Engl J Med. 2007;357:608–613. [PubMed: 17687138]
19.
Asch S, Kerr E, Keesey J, et al. Who is at greatest risk for receiving poor-quality health care? N Engl J Med. 2006;354:1147–1156. [PubMed: 16540615]
20.
Donn J. Study: Most get mediocre health care. Associated Press; 15 Mar, 2006.
21.
Weick KE. Sensemaking in organizations. Thousand Oaks, CA: Sage Publications; 1995.
22.
Nemeth C. Being forehanded. Third International Conference on the Nature and Source of Human Error; 2002 Oct; Chicago. Washington, DC: U.S. Food and Drug Administration; 2002.
23.
Cook R, Render M, Woods D. Gaps in the continuity of care and progress on patient safety. Br Med J. 2000 Mar 18;320(7237):791–794. [PMC free article: PMC1117777] [PubMed: 10720370]
24.
Cook R, Nemeth C. Taking things in one’s stride: Cognitive features of two resilient performances. In: Hollnagel E, Woods DD, Leveson N, editors. Resilience engineering: Concepts and precepts. Aldershot, UK: Ashgate Publishing; 2006. pp. 205–220.
25.
Woods D, Cook R. Incidents – Markers of resilience or brittleness? In: Hollnagel E, Woods D, Leveson N, editors. Resilience engineering: concepts and precepts. Aldershot, UK: Ashgate Publishing; 2006. pp. 69–76.
26.
Woods DD, Wreathall J. Stress-strain plots as a basis for modeling organizational resilience. In: Hollnagel E, Nemeth C, Dekker S, editors. Resilience engineering: Remaining open to the possibility of failure. Ashgate studies in resilience engineering. Aldershot, UK: Ashgate Publishing; 2008. pp. 145–161.
27.
Wreathall J. Systemic safety assessment of production installations. World Congress: Safety of Modern Technical Systems; 2001 Sept; Saarbrucken, Germany. Cologne, Germany: TUV-Verlag Gmbh; 2001.
28.
Wreathall J. Properties of resilient organizations: An initial view. In: Hollnagel E, Woods DD, Leveson N, editors. Resilience engineering: concepts and precepts. Aldershot, UK: Ashgate; 2006. pp. 275–285.
29.
Wreathall J, Merritt AC. Managing human performance in the modern world: Developments in the US nuclear industry. In: Edkins G, Pfister P, editors. Innovation and consolidation in aviation. Aldershot, UK: Ashgate; 2003.
30.
Hollnagel E, Woods DD, Leveson N, editors. Resilience engineering: Concepts and precepts. Aldershot, UK: Ashgate Publishing; 2006.
31.
Woods D, Cook RI. Nine steps to move forward from error. Cogn Technol Work. 2002;4:137–144.
32.
Committee on the Future of Emergency Care in the United States. Hospital-based emergency care at the breaking point. Washington, DC: National Academies Press; 2006.
33.
Voss A, Procter R, Slack R, et al. Understanding and supporting dependability as ordinary action. In: Clarke K, Hardstone G, Rouncefield M, et al., editors. Trust in technology: A socio-technical perspective. Dordrecht, NL: Springer; 2006. pp. 195–216.
34.
Woods DD, Hollnagel E. Joint cognitive systems: Patterns in cognitive systems engineering. Boca Raton FL: Taylor & Francis; 2006.
35.
Wears RL, Perry SJ, Anders S, et al. Resilience in the emergency department. In: Hollnagel E, Nemeth C, Dekker S, editors. Resilience engineering: Remaining open to the possibility of failure. Ashgate studies in resilience engineering. Aldershot, UK: Ashgate Publishing; 2008. pp. 197–214.
36.
Wears R. Computers and clinical work-reply. Letters. JAMA. 2005;294:182–b. [PubMed: 16014591]
37.
Xia W, Lee G. Grasping the complexity of IS development projects. Commun ACM. 2004 May;47:68–74.
38.
Nemeth C, Cook R. Reliability or resilience: What does healthcare really need?. In: Dominguez C, editor. Symposium on high reliability in healthcare. Proc Human Factors and Ergonomics Society Annual Meeting; 2007 Oct 1–5; Baltimore MD. 2007. pp. 621–625.
39.
Nemeth C, Cook R. Healthcare IT as a source of resilience. In: Nemeth C, editor. Symposium on resilience in human systems; Proc of the Intl Conf on Systems, Man and Cybernetics; Montreal. 2007 Nov; pp. 3408–3412.
40.
Alexander C. A pattern language: Buildings, towns, and cities. New York: Oxford University Press; 1997.
41.
Blumer H. Symbolic interactionism: Perspective and method. Berkeley, CA: University of California Press; 1986. pp. 11–12.pp. 27–79.
42.
Hutchins E. Cognitive artifacts. MIT COGNET; 2002. Available at: cognet.mit.edu/MITECS/Entry/hutchins. [Subscription required]
43.
Nemeth C, O’Connor M, Klock PA, et al. Discovering healthcare cognition: The use of cognitive artifacts to reveal cognitive work. In: Lipshitz R, editor. Special issue on naturalistic decision making. organization studies. Vol. 27. 2006. pp. 1011–1035.
44.
Xiao Y, Lasome C, Moss J, et al. Cognitive properties of a whiteboard: A case study in a trauma centre. In: Prinz W, Jarke M, Rogers Y, et al., editors. Proceedings of the Seventh European Conference on Computer-Supported Cooperative Work; 2001 Sept 16–20; Bonn, Germany. Dordrecht, The Netherlands: Kluwer Academic Publishers; 2001. pp. 259–278.
45.
Klein G, Woods DD, Bradshaw JM, et al. Ten challenges for making automation a team player in joint human-agent activity. IEEE Intell Syst. 2004;19:91–95.
46.
Nunnally M, Nemeth C, Brunetti V, et al. Lost in menu space: User interactions with complex medical devices. In: Nemeth C, Cook R, Woods DD, editors. Special issue on studies in healthcare technical work. IEEE Trans Syst Man Cybern-Part A. Vol. 34. 2004. pp. 736–742.
47.
Hunt-Smith J, Donaghy A, Leslie K, et al. Safety and efficacy of target controlled infusion (diprifusor) vs. manually controlled infusion of propofol for anesthesia. Anaesth Intensive Care. 1999;27:260–264. [PubMed: 10389558]
