
National Academy of Engineering (US) and Institute of Medicine (US) Committee on Engineering and the Health Care System; Reid PP, Compton WD, Grossman JH, et al., editors. Building a Better Delivery System: A New Engineering/Health Care Partnership. Washington (DC): National Academies Press (US); 2005.


3 The Tools of Systems Engineering

An understanding of the performance of large-scale systems must be based on an understanding of the performance of each element in the system and interactions among these elements. Thus, understanding a large, disaggregated system such as the health care delivery system with its multitude of individual parts, including patients with various medical conditions, physicians, clinics, hospitals, pharmacies, rehabilitation services, home nurses, and many more, can be daunting. To add to the complexity of improving this system, different stakeholders have different performance measures. Patients expect safe, effective treatment to be available as needed at an affordable cost. Health care provider organizations want the most efficient use of personnel and physical resources at the lowest cost. Health care providers want to serve patients effectively and minimize, or at least reduce, the time devoted to other tasks and obligations. Advancing all six of the IOM quality aims for the twenty-first century health care system—safety, effectiveness, timeliness, patient-centeredness, efficiency, and equity—will require understanding the needs and performance measures of all stakeholders and making necessary trade-offs among them (Hollnagel et al., 2005).

Understanding interactions and making trade-offs in such a complex system is difficult, sometimes even impossible, without mathematical tools, many of them based on operations research, a discipline that evolved during World War II when mathematicians, physicists, and statisticians were asked to solve complex operational problems. Since then, these tools have been used to create highly reliable, safe, efficient, customer-focused systems in transportation, manufacturing, telecommunications, and finance. Based on these and other experiences, the committee believes that they can also be used to improve the health care sector (McDonough et al., 2004). Indeed, improvements in health care quality and productivity have already been demonstrated on a limited scale in isolated elements at all four levels of the health care system (patient, care team, organization, and environment). These limited, but encouraging, first steps led the committee to conclude that the effective, widespread use of these tools could lead to significant improvements in the quality of care and increases in productivity throughout the health care system.

This chapter provides detailed descriptions of several families of systems-engineering tools and related research that have demonstrated significant potential for addressing systemic quality and cost challenges in U.S. health care. Although the descriptions do not include all of the tools or all of the challenges to the health care system, they illustrate potential contributions at all four levels of the health care system in all six characteristics identified by IOM.

The first part of this chapter is focused on three major functional areas of application for mathematical tools, namely the design, analysis, and control of large, complex systems; discussions include examples of current or potential uses in health care delivery. In the second part of the chapter, mathematical tools are considered from the perspective of the four levels of the health care system; the tools most relevant to the challenges and opportunities at each level are highlighted. Many of the tools described in this chapter are applicable to more than one level but generally address different questions or issues at each level. It will become obvious to the reader that each level of the system has different data requirements and a different reliance on information/communications technology systems.

The systems tools discussed below have been shown to provide valuable assistance in understanding the operation and management of complex systems. Some of these have been used sparingly, but successfully, in various circumstances in health care. Others will require further development and adaptation for use in the health care environment. To assist the reader in classifying these tools, they are divided into three sections: (1) tools for systems design; (2) tools for systems analysis; and (3) tools for systems control. Design tools are primarily used for creating new health care delivery systems or processes rather than improving existing systems or processes. Analysis tools can facilitate an understanding of how complex systems operate, how well they meet their overall goals (e.g., safety, efficiency, reliability, customer satisfaction), and how their performance can be improved with respect to these sometimes complementary, sometimes competing, goals. Controlling a complex system requires a clear understanding of performance expectations and the operating parameters for meeting those expectations; systems control tools, therefore, measure parameters and adjust them to achieve desired performance levels.

The reader will recognize that these categories are somewhat arbitrary—analysis is important to design, systems control is necessary for the effective operation of a system, and so on. Thus, the division is not prescriptive but is helpful for organizing the discussion.

Importance of Data
Creating a mathematical representation that describes a feature of a system or a subsystem, although necessary, is seldom sufficient. A mathematical representation can only provide quantitative predictions of performance if it is based on good data. Therefore, sound data about the performance of the system or subsystem are also necessary.

The nature of these data depends on the problem being addressed, of course, but one important generalization can be made. In systems as complex as the health care system, processes are stochastic, that is, individual differences create significant variability over time. For example, the amount of time a physician spends with an individual patient varies greatly depending on the patient's medical condition. To analyze the system, therefore, it is necessary to know both the mean and variance for relevant process times, such as the time involved in the delivery of each process, the fraction of patients who require each process, the number and required capabilities of individual providers, and the incidence of patients who do not keep appointments. Statistical distributions of times and usage for processes and providers also vary, not only among processes, but also among facilities. No norms have been established, however, so they must be determined. These issues are addressed in the discussion on queuing theory.
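The role of variance, and not just the mean, can be illustrated with Kingman's approximation for a single-server queue (queuing theory is taken up later in this chapter). The sketch below is illustrative only: the consultation times and arrival rate are hypothetical, and the formula assumes a single server with independent interarrival and service times.

```python
import statistics

# Hypothetical consultation times (minutes) observed for one physician.
service_times = [12, 8, 25, 10, 15, 30, 9, 14, 11, 22]

mean_s = statistics.mean(service_times)     # E[S]
var_s = statistics.variance(service_times)  # sample variance of S
cs2 = var_s / mean_s**2                     # squared coefficient of variation

arrival_rate = 1 / 18.0      # assumed: one arrival every 18 minutes on average
rho = arrival_rate * mean_s  # utilization; must stay below 1 for stability
ca2 = 1.0                    # assumed Poisson arrivals (exponential interarrivals)

# Kingman's G/G/1 approximation: Wq ~ rho/(1-rho) * (ca^2 + cs^2)/2 * E[S].
# The predicted wait grows with the variability term cs2, not just the mean.
wq = rho / (1 - rho) * (ca2 + cs2) / 2 * mean_s

print(f"mean service time: {mean_s:.1f} min")
print(f"utilization:       {rho:.2f}")
print(f"approx. mean wait: {wq:.1f} min")
```

Two clinics with the same mean consultation time but different variances will therefore see very different waiting times, which is why both statistics must be measured.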

The variables to be measured depend on the particular analysis and, because data collection is often time consuming, determining which variables to measure is critical to the timely analysis of a system. However, understanding a complex system always entails time and effort to make measurements and observations.

The reader will note that the need for data is cited in many discussions of the applicability and uses of systems-engineering tools. Some of these needs can be met with a single sequence of measurements; others require massive databases. Good data are necessary to any systems analysis, but, because systems-engineering tools have not been routinely used in the health care delivery system, data for these analyses are often inadequate or missing altogether.

Tools for Systems Design
Systems-design tools are primarily used to create systems that meet the needs/desires of stakeholders (Table 3-1). In the health care system, stakeholders include patients seeking care, health care providers, organizations that must operate efficiently and provide a satisfying environment for caregivers and patients, and participants in the regulatory/financial environment that must provide broad access to good care. The system must meet the needs of all of these stakeholders.

TABLE 3-1. Systems-Design Tools.



Concurrent Engineering

In the last 20 years, manufacturers in a variety of industries have used a procedure called concurrent engineering to design, engineer, and manufacture products that meet the needs and aspirations of customers, are defect free, and can be produced cost effectively. Concurrent engineering can be thought of as a disciplined approach to overcoming silos of function and responsibility, enabling different functional units to understand how their individual capabilities and efforts can be optimized as a system. Using concurrent engineering, a team of specialists from all affected areas (departments) in an organization is established; this team is then collectively responsible for the design of a product or process. The team considers “from the outset…all elements of the product life-cycle, from conception through disposal, including quality, cost, schedule, and user requirement” (Winner et al., 1988). The process begins with the initial concept and continues until a successful product or process is delivered to the customer.

Organizations that use the concurrent-engineering process have realized substantial benefits: fewer design changes are required once the product or process has been introduced; the time from design to full production is significantly shortened; the number of defects in the product is greatly reduced; and the process (or production) costs less. In addition to these direct, readily measurable benefits, the concurrent-engineering process can also yield indirect, or “spill-over,” benefits to an organization. These include improved cross-disciplinary/cross-unit learning, improved teamwork, improved quantitative and qualitative characterizations of processes and systems, and improved understanding and appreciation of the overall system (i.e., how the decisions and actions of individual units affect the performance of the organization as a whole). Concurrent engineering has been used mostly in the manufacturing arena, but the idea can be applied to the health care delivery system to develop a process for delivering care rather than manufacturing a product.

Concurrent engineering teams have different compositions for different organizations (or “processes”). A concurrent engineering team for an operating room (OR), for example, would include surgeons, nurses, laboratory technicians, and others, depending on the goal. For other units of a hospital (e.g., an intensive care unit [ICU], a neonatal care unit, the business office, etc.), teams would include the individuals and members of groups relevant to that unit. For the hospital as a whole, teams would be established at many levels. Each unit team would provide input to a more comprehensive team with members from all parts of the hospital, including the admissions staff, laboratory technicians, nurses, pharmacists, physicians, physical therapists, representatives of the OR, ICU, and so on. Each unit team would receive feedback from the comprehensive team, which would provide a basis for modifying the original conclusions and moving closer to optimizing overall performance. For the extended enterprise, the team would include members of other caregiver groups (e.g., pharmacists, rehabilitation technicians, home nurses, etc.).

Simply defined, concurrent engineering is an attempt to break down silos in an enterprise through effective teamwork. Many tools have been developed to assist in this process for manufacturing operations, but for our purposes we will highlight only one—quality function deployment (QFD).

Quality Function Deployment

QFD can be very useful for designing processes and procedures that meet the level of service a customer/patient wants. Although QFD is not a mathematical construct, it provides a structure to help the concurrent engineering team identify (1) factors that determine the quality of performance and (2) actions that ensure the desired performance is achieved. The QFD procedure might be applicable to a team in an emergency room, the operation of an ambulatory clinic, or the operation of an entire hospital.

QFD is a procedure by which a stakeholder's wants/needs are spread throughout the elements of an organization to ensure that the final product/service satisfies those wants/needs. The concept of QFD, which was introduced in Japan by Katsukichi Ishihari in 1969, was later developed for U.S. manufacturers by L.P. Sullivan (1986) and Hauser and Clausing (1988). Sullivan describes QFD as “a system to assure that customer needs drive the product design and production process” by translating them into the technical requirements of the product and then into a process for delivering a product/service that meets those requirements.

QFD has been used to design a wide range of products and processes, including a new automobile (Sullivan, 1988) and wave-solder processes used in manufacturing integrated circuits (Shina, 1991). The QFD procedure is also applicable to the development of a service function, such as the design of a library system, the provision of fast food, the creation of a traffic-control system, or the delivery of health care (Chaplin et al., 1999).

The QFD process begins with the identification of team members who represent all activities involved in the creation of the final product/process/service. Team members are chosen for their expertise and not just to represent their organizational units, and the team strives to make the best decisions for the organization as a whole.

The QFD team begins by listing stakeholders' wants. The number of stakeholders can vary greatly, depending on the unit being studied. Stakeholders in the health care system could include inpatients, outpatients, ambulatory patients, physicians, nurses, payers, health care system managers, even communities, or they could include only a few of these. Once the stakeholders have been identified, the team compiles a list of their needs. Depending on who the stakeholders are, these might include ready access to physicians, low costs, absence of paperwork, prompt payment of claims, high-quality treatment, rewarding careers, keeping of appointments, financial system stability, and so forth. Obviously, some of these needs may conflict with each other. For example, physicians and nurses may not have compatible career objectives, and community expectations may differ from payers' expectations. In the initial identification step, no attempt is made to resolve these conflicts.

In step one, the team prepares a list of “what” is wanted. In step two, they prepare a list of “how” these wants can be satisfied. The second step involves translating needs (or wants) into requirements that must be met to satisfy them. An example of “whats” and “hows” for a component of an ambulatory clinic is provided in Table 3-2.

TABLE 3-2. “Whats” and “Hows” for Stakeholders in an Element of an Ambulatory Care Clinic.



Of course, many more steps are involved in implementing QFD for a manufactured product, and similar steps are required for a QFD for the health care system. In complex systems in which several “hows” may be important to several “whats,” the material is presented in matrices. In this simplified example, the material is presented in tabular form.

Once the “hows” have been identified, they must be translated into detailed instructions. In the QFD procedure, the right-hand column in Table 3-2 becomes the left-hand column in Table 3-3. The right-hand column in Table 3-3 then becomes the “hows” for satisfying the stakeholder needs that were identified initially. Note that even in this simple example, many of the “hows” in Table 3-3 will require a third step, and some may require more.

TABLE 3-3. The “Whats” and “Hows” for Meeting System Objectives.



At this stage, some of the “whats” appear to conflict (e.g., the need for both more and less staff and facilities). In addition, the “hows” in both tables sometimes conflict. It is best to allow conflicts to arise naturally and not to suppress them when they first occur but to resolve them in subsequent steps. Teams have a tendency to jump to conclusions in the second step instead of pursuing a careful examination of trade-offs and conflicts. Redesigning processes with input from physicians and nurses, providing training in teamwork, and documenting improvements in quality of care and safety will have immediate benefits, even though further efforts will be needed before the design of major organizational changes (the next major step) can be undertaken.
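The matrix presentation described above, in which several “hows” bear on several “whats,” can be sketched numerically with a weighted-scoring scheme. The 9/3/1/0 strength scale is a common QFD convention rather than anything prescribed by this chapter, and all of the needs, requirements, weights, and strengths below are hypothetical.

```python
# Hypothetical "whats" (stakeholder needs), each with an importance weight 1-5.
whats = {"short waiting time": 5, "accurate records": 4, "low cost": 3}

# Hypothetical "hows" (design requirements meant to satisfy the needs).
hows = ["online scheduling", "electronic health record", "cross-trained staff"]

# Relationship matrix: how strongly each "how" contributes to each "what"
# (9 = strong, 3 = moderate, 1 = weak, 0 = none -- a common QFD convention).
relationship = {
    "short waiting time": {"online scheduling": 9, "electronic health record": 3, "cross-trained staff": 9},
    "accurate records":   {"online scheduling": 1, "electronic health record": 9, "cross-trained staff": 0},
    "low cost":           {"online scheduling": 3, "electronic health record": 1, "cross-trained staff": 3},
}

# Priority of each "how" = sum over "whats" of (importance * strength).
priority = {
    how: sum(weight * relationship[what][how] for what, weight in whats.items())
    for how in hows
}

for how, score in sorted(priority.items(), key=lambda kv: -kv[1]):
    print(f"{how:26s} {score}")
```

The column sums give the team a first-pass ranking of which requirements deserve the most design attention; conflicts between “hows” still have to be argued out by the team, as the text describes, because the scores do not resolve them.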

Throughout the QFD process, the team must work within certain constraints established by the organization, such as cost objectives for the final service and the time available to implement the QFD procedure. For example, the team might conclude that achieving zero errors in the writing of prescriptions by all physicians, including those associated with independent practitioners' associations, is not possible in the time frame for the project. If this is the case, the QFD steps must be repeated with modifications, which may result in changing some previously agreed upon decisions. It is essential that all members of the QFD team continue to participate in this sometimes painful process. In the unusual event that the objectives cannot be accomplished within the constraints, the team must meet with senior management and determine if the constraints can be relaxed or if the processes must be changed. These decisions must be made in conjunction with management.

The QFD process can be both time consuming and difficult, and success requires the availability of the resources of the organization. Accomplishing a QFD analysis for a complicated project requires considering a vast array of details, and QFD team members may find it necessary to consult with many people in their organizational areas and ask for detailed studies and analyses at various stages. Thus, team members will need the support of many people to accomplish their tasks, especially the support and encouragement of upper management.

Nevertheless, experience in other industries has shown that if QFD is done properly, that is, if all relevant stakeholders are involved and objectives and constraints have been well defined, the direct and indirect benefits generally far outweigh the costs and risks of the QFD process. The committee is confident that QFD applications to the design of health care delivery processes, particularly at the care-team and organization levels, will yield significant, measurable performance gains in quality and efficiency. In addition, QFD will have significant indirect or spill-over benefits in health care delivery, where disciplinary and functional silos of responsibility are deeply entrenched. Indirect benefits include improvements in the quantitative and qualitative characterization of processes and systems, improvements in cross-disciplinary/cross-unit learning, improvements in teamwork, and a better understanding and appreciation of how the actions/decisions of individual units affect the performance of the system as a whole.

Human-Factors Research

In general, complexity is the enemy of very high levels of human-systems performance. In nuclear power and aviation, this lesson was learned at great cost. Simplifying the operation of a system can greatly increase productivity and reliability by making it easier for the humans in the system to operate effectively. Adding complexity to an already complex system rarely helps and often makes things worse. In health care, however, simplicity of operation may be severely limited because health care delivery, by its very nature, includes, creates, or exacerbates many forms of complexity. Therefore, in the health care arena, success will depend on monitoring, managing, taming, and coping with changing complexities (Woods, 2000).

Human-factors engineering and related areas, such as cognitive-systems engineering, computer-supported cooperative work, and resilience engineering, focus on integrating the human element into systems analysis, modeling, and design. In health care, for example, the human-technology system of interest may be organizing an intensive care area to support cognitive and cooperative demands in various anticipated situations, such as weaning a patient off a respirator. Human-factors engineering could also provide a workload analysis to determine if a new computer interface would create bottlenecks for users, especially in situations that differ from the “textbook” scenario.

At the patient level, the focus might be on the provider-patient relationship, such as making sure instructions are meaningful to the patient or encouraging the patient's active participation in care processes (Klein and Isaacson, 2003; Klein and Meininger, 2004). At the team level, human-systems analysis might be used to assess the effectiveness of cross-checks among care groups (e.g., Patterson et al., 2004a). At the organizational level, the human-systems issue might be ensuring that new software-intensive systems promote continuity of care (e.g., avoid fragmentation and complexity). At the broadest level, human-systems engineering may focus on how accident investigations can promote learning and system improvements (Cook et al., 1989).

Patterns of human-systems interactions that have been analyzed in studies in aviation, industrial-process control, and space operations also appear in many health care settings. A single health care issue (e.g., mistakes in administering medications) is likely to involve many human-performance issues, depending on the context (e.g., Internet pharmacies, patient self-managed treatment, administration of medication through computerized infusion devices, computer-based communication in a computerized physician order entry system). For example, a human-factors analysis of the effects of nurses being interrupted while attempting to administer medication could lead to changes in work procedures. Once the processes in human performance that play out in the health care setting are understood, the human-factors knowledge base can be used to guide the development and testing of ways to improve human performance on all four levels of the health care system (Box 3-1).

BOX 3-1

Improving Medical Instructions. Prescription medicines are generally accompanied by information sheets (e.g., take with food; do not use when certain other medications are being used; avoid alcohol; or store in an appropriate location). A study was undertaken (more...)

Modeling, supporting, and predicting human performance in health care, as in any complex setting, requires language appropriate to different aspects of human performance. Patterns in human judgment, for example, are described in concepts such as bounded rationality, knowledge calibration, heuristics, and oversimplification fallacies (Feltovich et al., 1997). Patterns in communication and cooperative work include the concepts of supervisory control, common ground, and open versus closed work spaces (Clark and Brennan, 1991; Patterson et al., 2004b). Concepts relevant to patterns in human/computer cooperation include mental models, data overload, and mode error (Norman, 1988, 1993).

Generic patterns in human-systems performance are apparent in many health care settings, and identifying them can greatly accelerate the development of changes to improve health care. This will require integrating a medical or health care frame of reference and a human-systems frame of reference based on cognitive sciences and research on cooperative work and organizational safety. Numerous partnerships between human-factors engineers and the medical profession have already led to improvements in patient safety (Bogner, 1994; Cook et al., 1989; Hendee, 1999; Howard et al., 1992, 1997; Johnson, 2002; Nemeth et al., 2004; Nyssen and De Keyser, 1998; Xiao and Mackenzie, 2004).

Thus, results already in the human-factors research base can provide a basis for rapid improvements in health care. A recent example is the improvement in handoffs and shift changes in health care based on a number of promising results in other industries that were directly applicable to this health care setting (Patterson et al., 2004b). Another example is in the cognitive processes involved in diagnosis. Faced with a difficult diagnosis, a provider may focus on a single point of view and exclude other possibilities (e.g., Gaba et al., 1987). Human-performance techniques (critical-incident studies and crisis simulation) have been used in other settings to study these kinds of situations and recommend ways that computer prompts and displays can be used to avoid this problem (Cook et al., 1989; Howard et al., 1992).

Another success story is the application of a human-systems perspective to improve medication-administration systems based on bar codes. The analysis of the problem involved identifying complexities and other side effects, such as workload bottlenecks and new error modes that arose when new computerized systems were introduced (e.g., Ash et al., 2004; Patterson et al., 2002). As advances in technology lead to improvements in telemedicine and the continuity of care, similar applications will no doubt be useful in the future. Trade-offs will involve economic constraints and the development of new medical capabilities (e.g., Xiao et al., 2000).

As these and other examples show, human-factors research can contribute to the development of highly reliable processes, systems, and organizations in health care that would advance the goals of safety, effectiveness, efficiency, and patient-centeredness. Simplification and standardization can increase reliability in many complex systems, including complex health care systems. However, simplification and standardization alone will not be enough to manage many areas of changing complexity in health care delivery. Human-factors research and applications will also be useful for monitoring, managing, taming, and coping with these dynamic complexities.

Tools for Failure Analysis

The purpose of failure-mode effects analysis (FMEA) is to identify the ways a given procedure can fail to provide desired performance. The analysis may include disparate elements, such as the late arrival of information and laboratory errors because of a lack of information about the interactions of certain drugs. In FMEA, a mathematical model is usually created and used in the analysis.

Prior to releasing a new product design, manufacturers analyze how the product might fail under a variety of conditions. FMEA is a methodical approach to analyzing potential problems, errors, and failures and evaluating the robustness of a product design (McDonough et al., 2004). FMEA can be used to evaluate systems, product designs, processes, and services and is essential to finalizing the design of a product or identifying how a part, subsystem, or system might fail, as well as the impact of failure on safety and effectiveness. Thus, FMEA provides an opportunity to design a potential failure mode out of a product or process.

In the health care delivery system, FMEA can be helpful for designing systems (e.g., the seamless transfer of information, the implementation of electronic health records [EHRs], potential failures in the regional response to a public health emergency, etc.) on the level of health care provider teams and on the organizational level.

In addition to identifying potential design flaws, FMEA has several other benefits:

  • identification of areas that require more testing or inspection to ensure high quality
  • identification of areas where redundancies are justified
  • prioritization of areas that require further design, testing, and analysis
  • identification of areas where education could minimize the misuse or inappropriate use of a product
  • foundation for reliability assessment and risk analysis
  • effective communication and decision making

FMEA can be done using a bottom-up or a top-down approach, or both. A bottom-up analysis (called a failure mode, effects, and criticality analysis, or FMECA) starts at the component level, is carried through the subsystem level, and finally is used at the system level. Failure of an individual component is important, but it is equally important to understand possible failure modes when components are assembled into subsystems or systems. Wherever possible, the probability of failures and their criticality are quantified. A FMECA is redone every time a design is changed or new information from testing or preliminary field use becomes available. FMECA is used at each step until the final design meets design criteria and satisfies quality and reliability goals.
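One common way to quantify and prioritize failure modes is the risk priority number (RPN), a standard FMEA convention rather than anything specific to this chapter: each mode is rated 1 to 10 for severity, occurrence, and detectability, and modes are ranked by the product of the three. The failure modes and ratings below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (almost certainly detected) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        # Risk priority number: higher values get redesign attention first.
        return self.severity * self.occurrence * self.detection

# Hypothetical failure modes for a medication-administration process.
modes = [
    FailureMode("wrong dose transcribed", severity=9, occurrence=4, detection=5),
    FailureMode("drug interaction missed", severity=8, occurrence=3, detection=7),
    FailureMode("late lab result", severity=5, occurrence=6, detection=2),
]

for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.name:26s} RPN = {m.rpn}")
```

As the text notes, such a ranking would be recomputed every time the process design changes or new field data become available.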

A top-down approach, called fault-tree analysis (FTA), is used to identify consequences or potential root causes of a failure event. With FTA, an undesirable event is identified and then linked to more basic events by identifying possible causes and using logic gates. FTA is an essential tool in reliability engineering for problem prevention and problem solving.
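Under the simplifying assumption that basic events are independent, FTA gate probabilities combine in a standard way: an AND gate multiplies the input probabilities, and an OR gate takes the complement of all inputs not occurring. The event tree and probabilities below are hypothetical, chosen only to show the mechanics.

```python
from math import prod

def and_gate(*ps: float) -> float:
    # All inputs must occur (independence assumed).
    return prod(ps)

def or_gate(*ps: float) -> float:
    # At least one input occurs (independence assumed).
    return 1 - prod(1 - p for p in ps)

# Hypothetical basic-event probabilities per medication order.
p_order_illegible = 0.002       # handwritten order misread
p_pharmacy_check_fails = 0.05   # pharmacy review misses the problem
p_nurse_check_fails = 0.10      # bedside check misses the problem
p_dispensing_error = 0.0005     # wrong drug dispensed outright

# Top event: wrong medication reaches the patient. Branch 1 requires an
# illegible order AND both downstream checks failing; branch 2 is a
# dispensing error on its own. The top event is the OR of the branches.
p_checks_fail = and_gate(p_pharmacy_check_fails, p_nurse_check_fails)
p_branch1 = and_gate(p_order_illegible, p_checks_fail)
p_top = or_gate(p_branch1, p_dispensing_error)

print(f"P(branch 1)  = {p_branch1:.2e}")
print(f"P(top event) = {p_top:.2e}")
```

Working down from the top event this way highlights which basic events dominate the risk; here the dispensing branch dwarfs the illegible-order branch, so redundant checks on transcription buy little until dispensing is addressed.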

Root-cause analysis (RCA) is a qualitative, retrospective approach that is widely used to analyze major industrial accidents. An RCA can reveal latent or systems failures that underlie adverse events or near misses. In 1997, the Joint Commission on the Accreditation of Healthcare Organizations (JCAHO) mandated that RCAs be used to investigate sentinel events in its accredited hospitals. Key steps in an RCA include: (1) the creation of an interdisciplinary team; (2) data collection; (3) data analysis to determine how and why an event occurred; and (4) the identification of administrative and systems problems that should be redesigned. Although RCAs are retrospective, they identify corrections of systems problems that can prevent future errors or near misses. One caveat about RCAs is that they may be tainted by “hindsight bias,” that is, after an accident, individuals tend to believe that the accident should have been considered highly likely, if not inevitable, by those who had observed the system prior to the accident (McDonough et al., 2004).

In the past five years, the Veterans Health Administration (VHA) and JCAHO have taken several steps toward promoting the adaptation and application of FMEA, FMECA, FTA, and related tools of proactive hazard analysis and design to health care (McDonough, 2002) (see Box 3-2). In 2000, the VHA published a patient safety handbook that included instructions on FMEA and developed a health care failure-mode and effects analysis (HFMEA), “a systemic approach to identify and prevent product and process problems before they occur” (McDonough, 2002; Weeks and Bagian, 2000). In 2000, JCAHO encouraged the use of FMEA/HFMEA and related tools in its new standards that require all accredited hospitals to conduct at least one proactive risk assessment of a high-risk process every year. In 2002, JCAHO published a book specifically about FMEA for health care, which includes a step-by-step guide through the process and examples of FMEAs conducted by health care organizations (JCAHO, 2002).

BOX 3-2

Proactive Hazard Analysis. To address hazard and safety concerns in health care delivery, some have looked to other industries (e.g., aviation, manufacturing, food service, nuclear power plants, aircraft carriers) for models that can be applied to medical (more...)


Engineers use system analysis to help themselves and others understand how complex systems operate, how well systems meet overall goals and objectives, and how they can be improved. On one level, a systems analysis may focus on the performance of a single unit in a large system (e.g., the flow of patients through a facility or the allocation of resources in an emergency room). The results of these studies can be used to evaluate how changes in procedures might improve performance (e.g., reduce patient delays, improve safety, eliminate nonessential steps). At a higher level, a systems analysis may consider interactions among elements in a large system, such as a hospital, a regional medical enterprise, or even the national health care delivery system. Obviously, the larger the system, the more complex and the more difficult the analysis. But a careful analysis of systems at all levels can reveal interactions and opportunities for improvement that might otherwise be missed. Table 3-4 shows the levels for which various systems-analysis tools are most useful.

TABLE 3-4. Systems-Analysis Tools.



Systems-analysis tools are generally used to analyze existing systems for improvement. Mathematical analyses of system operations include queuing theory, which could be used, for example, to understand the flow of patients through a system, the average time patients spend in the system, or bottlenecks in the system. Discrete-event simulation could be used for a more detailed examination of performance, such as an analysis of surges of patients on particular days or during emergencies or the scheduling of ambulances.

With enterprise-management tools, a system can be managed as a whole across the entire spectrum of elements, rather than at the level of individual patients. In spite of the fragmented nature of the health care system, interactions among all elements in the total chain can be clarified and managed. Supply-chain management tools, for example, are useful for determining the physical and informational resources necessary to the delivery of a product to a customer (e.g., reducing inventory, eliminating delays, reducing cost, etc.).

Economic and econometric models, based on historical data, are useful for bringing to light causal relationships among system variables. These tools include game theory, systems-dynamics modeling, data-envelopment analysis, and productivity modeling. Financial engineering, risk management, and market models, which are used to evaluate and manage risks, can be useful for examining financial risks to an organization, as well as for understanding the risks of actions taken for, or by, patients.

Knowledge discovery in databases is a method that can be used to examine large databases (e.g., a database of patient reactions to groups of drugs). It might be used, for example, to examine the history of particular drugs or treatments or to examine procedures for patients with particular life styles or health histories. With knowledge-discovery tools, one might search historical records for an effective procedure or identify outlier events, such as a small number of patients who share a condition and experience unexpected side effects from a medication.

Because system analyses must describe an existing system (or one that reasonably approximates an existing system), it is essential that data be available (or obtainable) for that system. The nature of the data depends on the problem being addressed. Analyzing a system to improve the efficiency of a surgical operation requires very different data from an analysis to assess the effectiveness of a disease-management program.

Modeling and Simulation

Models and simulations are important tools for analyzing systems. Models are mathematical constructs that describe the performance of subsystems. Interactions among subsystems in a larger system, combined with the constraints within which the system operates, influence the performance of the total system and represent the overall system model. Using these models and simulations, it becomes possible to analyze the expected performance of a system if systemic changes are made. For example, would a change in inventory location and levels improve or reduce the effectiveness of the nursing staff? Would a change in scheduling of the emergency room increase or decrease the number of patients that must be diverted and at what cost?

Models have been developed for a variety of health care applications that do not directly involve physical facilities. For instance, multiple models have been developed to examine the effectiveness of screening and treatment protocols for many diseases, including colorectal cancer, lung cancer, tuberculosis, and HIV (Brandeau, 2004; Brewer et al., 2001; Eddy et al., 1987; Fone et al. 2003; Mahadevia et al., 2003; Neilson and Whynes, 1995; Ness et al., 2000; Phillips et al., 2001; Schaefer et al., 2004; Walensky et al., 2002). In addition, many models have implications for health care policy; for example, models might suggest that efforts to reduce tobacco use in adults would be most beneficial in the short term, whereas blocking the introduction of tobacco to young people is more likely to have long-term benefits (Levy et al., 2000; Teng et al., 2001). Hospitals and clinics have used simulations to improve staffing and scheduling (Dittus et al., 1996; Hashimoto and Bell, 1996), and models have been used to help clinicians distinguish injuries caused by falls down stairs from those resulting from child abuse (Bertocci et al., 2001). Virtual-reality patients have been used for training in psychiatry, the social sciences, surgery, and obstetrics (Letterie, 2002).

Queuing Theory

Queuing theory deals with problems that involve waiting (queuing), lines that form because resources are limited. The purpose of queuing theory is to balance customer service (i.e., shorter waiting times) and resource limitations (i.e., the number of servers). Queuing models have long been used in many industries, including banking, computers, and public transportation. In health care, they can be used, for example, to manage the flow of unscheduled patient arrivals in emergency departments, ORs, ICUs, blood laboratories, or x-ray departments. Queuing models can be used to address the following questions:

  • How long will the average patient have to wait?
  • How long will it take, on average, to complete a visit?
  • What is the likelihood that a patient will have to wait for more than 20 minutes?
  • How long are providers occupied with an average patient?
  • How many personnel would be necessary for all patients to be seen within 10 minutes?
  • Would flow be improved if certain patients were triaged differently?
  • What resources would be necessary to improve performance to a given level or standard?
  • What is the likelihood that a hospital will have to divert patients to another hospital?

Queuing is a descriptive modeling tool that “describes” steady-state functioning of the flow through systems. Although health care is rarely in a steady state, from a mathematical point of view, queuing models provide useful approximations that are surprisingly accurate.

Queuing models are generally based on three variables that define the system: arrival rate; service time; and the number of servers. The arrival rate, λ, describes the frequency of the arrival of patients. The most common type of unscheduled arrival pattern can be described with the Poisson distribution (Huang, 1995). Service time, T, is the average time spent serving a particular type of patient at a given station. In health care, the service time is most often random and is most commonly described by an exponential probability distribution. Number of servers, n, is the number of stations doing similar tasks for all patients who approach those stations.
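These arrival processes are straightforward to generate: summing independent exponential interarrival gaps yields a Poisson arrival stream. The Python sketch below illustrates this, assuming a purely hypothetical arrival rate; the function name and numbers are invented for illustration, not drawn from any cited model.

```python
import random

def simulate_arrival_times(rate_per_hour, horizon_hours, seed=1):
    """Generate a Poisson arrival stream by summing exponential interarrival gaps."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate_per_hour)  # exponential gaps -> Poisson process
        if t > horizon_hours:
            return arrivals
        arrivals.append(t)

# Hypothetical walk-in clinic: lambda = 6 patients/hour over an 8-hour day,
# so the number of arrivals is random but averages lambda x horizon = 48.
arrivals = simulate_arrival_times(rate_per_hour=6, horizon_hours=8)
print(len(arrivals))
```

Such synthetic arrival streams are the usual inputs to the queuing and simulation analyses described below.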

For a station with a single server, average arrival rate of patients (λ) multiplied by the average time patients spend with a given server (T) must be less than or equal to unity (i.e., λT ≤ 1). Otherwise the queue would continue to build up without relief. If n servers are present, the condition is λT ≤ n. In the absence of variability, no queues would build up and the flow through the station would be regular. In the presence of variability, which always exists, queues will build up. The closer λT is to 1, the longer the queues for that station. The bottleneck station in the network can be identified by locating the station with the largest λT.
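This bottleneck test can be sketched in a few lines, using the per-server load λT/n so that multi-server stations are comparable. The four-station clinic below is entirely hypothetical; all names, rates, and staffing levels are invented for illustration.

```python
# Hypothetical stations: (name, arrival rate per hour, mean service hours, servers)
stations = [
    ("registration", 10.0, 0.05, 1),
    ("triage",       10.0, 0.15, 2),
    ("physician",    10.0, 0.50, 6),
    ("lab",           6.0, 0.30, 2),
]

def load(arrival_rate, service_time, servers):
    """Per-server load lambda*T/n; it must stay below 1 for the queue to be stable."""
    return arrival_rate * service_time / servers

loads = {name: load(lam, t, n) for name, lam, t, n in stations}
bottleneck = max(loads, key=loads.get)
print(bottleneck, round(loads[bottleneck], 2))  # the station nearest saturation
```

Here the lab, at 90 percent of capacity, is the station where queues will build up first.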

For a single station with the probability distributions described above, the response time for the station (the average time for a patient to pass through the station) is given by

Response Time = T / (1 – λT).

As λT approaches unity, the response time becomes very long.
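The formula can be evaluated directly to show how sharply response time grows near saturation. The x-ray station and its 10-minute mean service time below are hypothetical numbers chosen for illustration.

```python
def response_time(service_time, arrival_rate):
    """Average time through a single-server station: T / (1 - lambda*T),
    valid only when lambda*T < 1."""
    load = arrival_rate * service_time
    if load >= 1:
        raise ValueError("unstable: arrivals exceed service capacity")
    return service_time / (1 - load)

# Hypothetical x-ray station with a 10-minute mean service time (T = 1/6 hour):
for lam in (2, 4, 5, 5.5):                      # arrival rates, patients per hour
    minutes = round(response_time(1 / 6, lam) * 60)
    print(lam, minutes)                          # 15, 30, 60, 120 minutes
```

Note that raising the arrival rate from 5 to 5.5 patients per hour, a 10 percent increase in demand, doubles the average time in the station.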

To manage flow well, service areas must measure critical indices derived from the model; these may include, but are not limited to, utilization (percentage of time servers are busy), waiting time, length of waiting lines, probability of diversion (rejection), abandonment rates, bottlenecks, and door-to-door time (time of actual arrival to time of actual departure).

It is critical that the full variability of the metrics be measured and displayed. Often only the mean or median of the data is calculated and graphed, but this does not give a true picture of variability. If the measures were constant and could be predicted by the mean, the problem of managing flow would not exist!

Queuing theory can provide analytical expressions for a single station, but analytical expressions for a network of stations require computer programs that can approximate the performance of a network. Once the network description has been entered, the performance of the network can usually be analyzed quickly.

The law that applies to systems with queues, Little's law, enables one to determine either the number of patients being served in a facility, for example a clinic or a hospital, or the average time a patient spends in the facility. If L is the average number of entities (patients) in a system that contains a variety of locations at which procedures are performed, that is, servers, Little's law states that

L = λ W

where λ is the average arrival rate into the system and W is the average time each patient spends in the system (the sum of the average time patients spend waiting plus the average time they spend with caregivers). If either L or W is known, the other can be calculated easily.
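Little's law is simple enough to apply in one line. The helper below, with invented clinic numbers, solves for whichever of the three quantities is missing.

```python
def littles_law(L=None, lam=None, W=None):
    """Solve Little's law L = lam * W for whichever of the three is None."""
    if L is None:
        return lam * W
    if W is None:
        return L / lam
    return L / W                      # solve for the arrival rate

# Hypothetical clinic: 12 patients/hour arrive and each spends 1.5 hours inside,
# so on average 18 patients are in the clinic at any moment.
print(littles_law(lam=12, W=1.5))     # -> 18.0
print(littles_law(L=18, lam=12))      # -> 1.5
```

In practice the census L is often the easiest quantity to count directly, and the law then yields the average time in system W without tracking individual patients.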

One problem in health care today is that the number of facilities that have unscheduled patient flows is increasing, while the number of people available to treat them is decreasing. This situation requires new management approaches to reduce waiting times and keep emergency departments from turning away patients, such as building in segmentation, matching capacity to demand using queuing theory, and creating surge capacity and backup plans for exigencies. Because of variabilities in patient demand, fixed bed and staffing levels are almost always either too high or too low, which has ramifications for both the quality and cost of care. Queuing models allow for natural variabilities, which leads to greater predictability and control and, ultimately, more timely and safer patient care. Queuing theory has been used (although infrequently) to analyze a variety of clinical settings, including emergency departments, primary care practices, operating rooms, nursing homes, and radiology departments (Gorunescu et al., 2002; Huang, 1995; Lucas et al., 2001; Murray and Berwick, 2003; Reinus et al., 2000; Siddharthan et al., 1996).

Discrete-Event Simulation

In discrete-event simulation, the dependent variables are “actors” in, or are developed by, the system. In a health care system, these can include patients, caregivers, administrators, inventory, capital equipment, and others. The independent variable is time. In this type of simulation, it is expected that events take place at discrete points in time (e.g., the arrival of two patients at Station C, one at time t1, the second at a later time, t2).

A key aspect of a discrete-event simulation is the system-state description, which includes values for all of the variables in the system. If any variable changes, it changes the system state. In a simulation, the dynamic behavior of the system can be observed as entities (e.g., patients, staff, inventory) move through the nodes and activities (e.g., registration desk, nurse's preliminary examination, physician's examination, laboratory tests, etc.) identified in the model. The rules governing the motion of entities and the paths they follow are peculiar to the specific model and are specified by the modeler. Describing systems that involve human interactions requires the use of mathematics based on probability theory and statistics, which can describe the variabilities and discreteness of events. Computers are necessary to analyze the many states in complex systems.

In most cases, the initial system state must first be specified, that is, values must be supplied for the variables and their variances based on observations of an existing system or a system sufficiently similar. The model can then be tested to see if it describes the performance of the existing system. If it does not, it must be adjusted, perhaps by including different variables or by treating interactions among the variables in different ways. Once the model has been validated, it can be used to explore the consequences of different actions.

If each variable had only one possible value (e.g., the number of nurses available in the prenatal clinic at 10:05 a.m.), a single calculation would be sufficient to describe a system. But most system variables have a distribution of values, such as the differences in the number of nurses needed throughout the day in Surgical Ward 2 of the hospital. Thus, many computer runs must be made to explore combinations of values of the variables. Tools are readily available for determining how various computer outputs should be grouped and interpreted.
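The event-driven loop described above can be sketched compactly for a single-server station with Poisson arrivals and exponential service times; replications with different seeds stand in for the "many computer runs." All rates below are hypothetical, and a real model would of course track many more entities and states.

```python
import heapq
import random
import statistics

def simulate_clinic(arrival_rate, service_time, horizon, seed):
    """Minimal discrete-event simulation of a single-server FIFO station:
    events are (time, kind) pairs processed in time order from a heap."""
    rng = random.Random(seed)
    events = [(rng.expovariate(arrival_rate), "arrival")]
    busy_until, waits = 0.0, []
    while events:
        now, kind = heapq.heappop(events)
        if now > horizon:
            break
        if kind == "arrival":
            start = max(now, busy_until)         # wait if the server is still busy
            waits.append(start - now)
            busy_until = start + rng.expovariate(1 / service_time)
            heapq.heappush(events, (now + rng.expovariate(arrival_rate), "arrival"))
    return statistics.mean(waits) if waits else 0.0

# One run of a stochastic model proves little, so replicate with many seeds.
mean_waits = [simulate_clinic(5, 1 / 6, horizon=100, seed=s) for s in range(20)]
print(round(statistics.mean(mean_waits), 2))     # average wait, in hours
```

Comparing the replicated average against the queuing-theory prediction for the same rates is one simple way to validate such a model before using it to explore changes.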

Discrete-event simulation has been used to analyze a number of health care settings, such as operating rooms, emergency rooms, and prenatal-care wards (Klein et al., 1993), and a variety of workforce planning problems. The overall objective has been to improve or optimize the safety, efficiency, and/or effectiveness of processes and systems. Kutzler and Sevcovic (1980) developed a simulation model of a nurse-midwifery practice. Duraiswamy et al. (1981) simulated a 20-bed medical ICU that included patient census, patient acuity, and required staffing on a daily basis for one year. A simulation of obstetric anesthesia developed by Reisman et al. (1977) was used to determine the optimal configuration of an anesthesia team. Magazine (1977) describes a patient transportation service problem in a hospital; queuing analysis and simulation were used to determine the number of transporters necessary to ensure availability 95 percent of the time. Bonder (see paper in this volume) describes a simulation for a very large-scale, level-four analysis of a regional health care system in the Puget Sound area of Washington. Pritsker (1998) describes the development and use of a large-scale simulation model to improve the allocation policy for liver transplants (see Box 3-3).

BOX 3-3

Allocation Policy for Organ Transplantations. The scarcity of livers for transplantation makes allocation extremely difficult. Approximately 4,000 donated livers were available in the United States in 1996 and 1997. In mid-1998, about 10,000 individuals (more...)

Dittus et al. (1996) developed a simulation model of an academic county hospital to determine if alternative call schedules would address the problem of provider fatigue among the house staff. As a result, a new call schedule was implemented, and the model's predictions of work and sleep were validated against provider behavior under the new schedule. This prospective, empirical, hypothesis-driven validation demonstrated that a well constructed model can accurately characterize system behavior and predict future performance, even in a complex environment, such as the life of a medical resident in a busy county hospital.

These analyses have demonstrated that performance of complex units can be improved in terms of responsiveness and the allocation of resources. Discrete-event simulation can be used to simulate dynamic systems—systems in transition, new systems being developed, systems that have time irregularities, and others.

Enterprise-Management Tools

Enterprise-management tools are helpful for management on a system level and across component boundaries. For example, enterprise management has been used successfully for mass customization—a process by which every product is tailored to meet the specific needs and wants of an individual customer. In the portfolio of products offered by a manufacturer, many products have common components and common functions. For example, new cars may have a wide range of options, but the frame and many components of all new cars are the same. A mass customization production system offers customers a great deal of flexibility in specifying the final product.

An effective, efficient health care delivery system demands the same flexibility as “mass customization” of a manufactured product. The key to meeting individual customer or patient needs without sacrificing operating efficiency is maintaining a high level of flexibility (Champion et al., 1997). The mathematical tools described below can help health care managers maintain a system that balances the need for resources against the demand for those resources. In the health care setting, enterprise-management tools can be useful on the level of care teams, organizations, and the environment.

Early in the twentieth century, industrial pioneers could not have imagined that complex systems that include networks of suppliers, manufacturers, distributors, retailers, and service providers would be widespread in the manufacturing industry. These complex supply chains, which bring products made from raw materials to consumers around the globe, are some of the most efficient and complex socioeconomic systems in the world. Companies such as Dell Computer, Westin Hotels, Toyota, American Express, Procter & Gamble, and others have all benefited enormously from mass customization (Chandler, 1990; Gertz and Baptista, 1995; Reichheld, 1996).

Health care delivery, like other business enterprises, is a complex socioeconomic system in which multiple agents, often with very different agendas, interact. As in complex business enterprises, decisions taken by one party can significantly affect the costs incurred and the quality of service provided by other parties in the system. In addition, different entities in the system, so-called agents, often have different, sometimes conflicting, objectives. The history of enterprise-management systems has shown that a thorough understanding of how different agents in the system interact can yield significant benefits for the entire system.

Supply-Chain Management

Analyzing and optimizing systems with a great many participants and components is particularly difficult because no one can understand the entire system in detail. Supply-chain management is an engineering tool that recognizes and characterizes interactions among subsystems (see Ryan, in this volume). Supply-chain management tools can also be used to explore the consequences (expected and unexpected, and likelihood thereof) of reimbursement decisions, which may not become evident for years.

In an environment in which demands vary unpredictably, supply-chain management can help match resources with demands. In the health care delivery system, resources include human capital (e.g., nurses, therapists), physical capital (e.g., intensive care beds, ambulances, sponges), and intellectual capital (e.g., a patient medical record or an evidence-based medicine protocol). The stochastic nature of the demand for services and the inconsistent availability and effectiveness of resources always generate a great deal of variability in the health care system. Policy decisions in one part of the system, such as a decision by an insurer not to fund a preventive procedure, can have unexpected consequences for other parts of the system, which may only become apparent after a period of years.

Capacity and variability are at the heart of how components of supply chains operate (see Uzsoy, in this volume). Whether we are considering two neighboring elements in a system or blocks of elements that interact with other elements, the input-output relationships are often nonlinear and must be treated that way in any mathematical representation of how variables interact in the presence of constraints on the system.

The coordination of geographically distributed operations owned by a single firm has been addressed for several decades by increasingly sophisticated optimization models, such as linear integer programs that optimize performance within a large number of constraints. Nonlinear programming has progressed to the point that models of significant scale and complexity can be developed. The primary disadvantage of these techniques is that, although they are relatively straightforward (at least on a conceptual level) when the entire system is controlled by a single entity with a single well defined objective, they present great difficulties when independent agents with different objectives and constraints interact, as can occur, for example, when a supplier has more than one customer for a particular product. Advanced modeling techniques are just now being applied to these problems, but a great deal more research in this area will be necessary.
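Real crew-scheduling and supply-chain models are solved with specialized integer-programming solvers, but the objective-plus-constraints structure can be conveyed with a deliberately tiny brute-force toy. Everything below, the teams, shifts, costs, and capacity limit, is invented for illustration.

```python
from itertools import product

# Hypothetical toy problem: assign each of three shifts to one of two nurse teams.
# cost[team][shift] is a made-up staffing cost; each team can cover at most 2 shifts.
cost = {"A": {"day": 3, "evening": 5, "night": 8},
        "B": {"day": 4, "evening": 4, "night": 6}}
shifts = ["day", "evening", "night"]

best, best_cost = None, float("inf")
for assignment in product("AB", repeat=len(shifts)):   # enumerate all 2^3 plans
    if max(assignment.count(t) for t in "AB") > 2:     # capacity constraint
        continue
    c = sum(cost[team][shift] for team, shift in zip(assignment, shifts))
    if c < best_cost:
        best, best_cost = assignment, c
print(best, best_cost)
```

Enumeration works here only because the problem has eight candidate plans; the point of integer-programming methods is precisely that realistic instances, with thousands of flights or shifts, cannot be enumerated.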

Examples of how supply-chain management models work follow. In the late 1980s, American Airlines used an integer linear programming model to assign crews for more than 2,300 flights per day to more than 150 different cities using 500 jet aircraft. The mathematical model was sufficiently detailed that one could examine the effects on the system of allocating resources in different ways. As a result of the modeling effort, the airline made decisions regarding fleet planning, crew-base planning, and schedule development that resulted in a 0.5 percent reduction in operating cost and a $75 million increase in revenue in 1988 (Abara, 1989). Vanderbilt University Medical Center used a supply-chain management process to redesign its perioperative services. This project reduced costs by $2.3 million and improved the quality of care by ensuring that appropriate clinical supplies were delivered during the perioperative period (Feistritzer and Keck, 2000). The Deaconess Hospital of Evansville, Indiana, used a supply-chain management tool to improve its drug distribution in the operating room; savings totaled $115,000 in the first year (Thomas et al., 2000). It has been estimated that the health care industry could save more than $11 billion a year with supply-chain management techniques (McKesson, 2002).

Tools that can be used to examine the system at a higher level of abstraction are just evolving (Pierskalla, 2004; Uzsoy, in this volume). These tools will support the modeling of large, complex systems involving interactions among many, possibly thousands, of agents with specific objectives and constraints. However, developers of modeling techniques at this level have encountered a number of difficulties. First, because of the sheer size and complexity of the systems, the efforts involved in developing and documenting models are very time-consuming. In addition, data that provide realistic estimates of critical parameters to populate these models are often hard to obtain, if they are available at all. For example, considering the number of health care providers a patient deals with over a lifetime, data will have to be collected systematically over many years. Most existing tools for such modeling have significant drawbacks that have only recently begun to be understood and addressed.

Economic and Econometric Models

The economic and econometric models described below primarily use statistical techniques to elucidate causal relationships among system variables; these models are generally based on historical data and can have different levels of predictive power. Models based exclusively on time series, in which the only independent variable considered is time, essentially assume that past history is representative of the future. Models such as data-envelopment analysis that try to develop causal relationships between system-performance measures and independent variables other than time are often more enlightening but require much more detailed data.

In the context of health care delivery, these models might be used to determine the needs of certain segments of a population based on their economic situation, for example, or the relationship between different types of preventive treatments for a disease and the progression of the disease over patients' lifetimes. Extensive studies of this kind are already widely used in various aspects of health care, such as the approval of new drugs and diagnostic tests by the Food and Drug Administration (Ness et al., 2003; O'Neill and Dexter, 2004; Ozcan et al., 2004). More than a decade ago, the Commonwealth of Australia passed into law guidelines requiring an economic assessment of new drug applications for its national formulary (Freund et al., 1992).

Game Theory and Contracts. Game theory examines how agents with different agendas behave when they interact. The game-theory framework for addressing these interactions has recently been used in a number of simple models of supply-chain management. Extensive research has also been done on different types of contracts between parties that can provide incentives for actors to behave in ways that benefit the overall system (Tsay and Nahmias, 1998).
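The flavor of such analysis can be conveyed with a toy two-player game: a best-response check finds the equilibrium, which here differs from the outcome that is best for the system as a whole, illustrating why contract design matters. The actions and payoffs below are entirely hypothetical.

```python
from itertools import product

# Hypothetical 2x2 game: two providers deciding whether to invest in shared
# infrastructure. Payoffs are illustrative numbers only.
payoff_row = {("invest", "invest"): 4, ("invest", "defer"): 1,
              ("defer", "invest"): 5, ("defer", "defer"): 2}
payoff_col = {("invest", "invest"): 4, ("invest", "defer"): 5,
              ("defer", "invest"): 1, ("defer", "defer"): 2}
actions = ["invest", "defer"]

def is_nash(r, c):
    """Nash equilibrium: no player gains by unilaterally switching actions."""
    row_ok = all(payoff_row[(r, c)] >= payoff_row[(alt, c)] for alt in actions)
    col_ok = all(payoff_col[(r, c)] >= payoff_col[(r, alt)] for alt in actions)
    return row_ok and col_ok

equilibria = [(r, c) for r, c in product(actions, actions) if is_nash(r, c)]
print(equilibria)
```

In this toy game both players end up deferring (payoff 2 each) even though mutual investment would give each a payoff of 4; a contract that rewards investment could realign the incentives.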

A significant difficulty with these models is that their solutions generally pertain to the long-run steady state of the system. Not much has been done by way of studying how well these techniques work in transient regimes, for example, when the constituent members of a patient's care team change over time. Many of these models also assume perfect information sharing, which is unlikely in practice, and researchers are beginning to examine the effects of different information-sharing protocols, as can occur among care providers in a distributed network of providers or when patients must undergo emergency treatment by someone other than their primary caregivers. In short, a great deal of research remains to be done in this area.

Systems-Dynamics Models. Based on pioneering work by Forrester (1961), systems-dynamics models define specific input-output relationships for system components and use them to simulate the operation of a system, basically using techniques derived from the numerical solution of systems of differential equations. These techniques have been used to solve business problems for many years (Sterman, 2000) and can be used to model large, complex systems. However, they require accurate definitions of input-output relationships because feedback loops with gain and loss coefficients are used to capture system behavior. If these parameters are not estimated correctly, model results can be substantially wrong.

Nevertheless, systems-dynamics models can be powerful tools for gaining a high-level understanding of the behavior of large systems, as has been demonstrated by their prediction of the “bullwhip effect” in supply chains, whereby the variability of orders placed by different parties is amplified at each stage of the supply chain, ultimately causing huge swings for the manufacturer (who is “whipped” about). For example, because of variability in orders for replenishing stock (e.g., medicines in pharmacies), manufacturers must make assumptions regarding future needs, which can lead to either undersupply or oversupply with serious economic consequences for the manufacturer. Wal-Mart, a mass retailer with a large network of stores, has minimized the bullwhip effect in its supply chain by sending point-of-sale data directly to manufacturers. Systems-dynamics modeling has been used to analyze emergency care systems and other aspects of health care delivery (Lattimer et al., 2004).
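The bullwhip effect is easy to reproduce in a toy simulation: each stage forecasts demand by exponential smoothing and over-reacts to the latest order it sees, so order variance grows stage by stage upstream. All parameters below are invented for illustration, not calibrated to any real supply chain.

```python
import random
import statistics

def bullwhip(stages=4, periods=200, seed=0):
    """Each stage forecasts by exponential smoothing and orders the forecast
    plus an over-reaction to the latest surprise; variance grows upstream."""
    rng = random.Random(seed)
    orders_seen = [[] for _ in range(stages + 1)]
    forecasts = [10.0] * stages
    for _ in range(periods):
        demand = 10 + rng.uniform(-2, 2)      # end-customer demand, mildly noisy
        orders_seen[0].append(demand)
        for s in range(stages):
            forecasts[s] = 0.7 * forecasts[s] + 0.3 * orders_seen[s][-1]
            # over-order when the latest signal exceeds the forecast
            order = forecasts[s] + 1.5 * (orders_seen[s][-1] - forecasts[s])
            orders_seen[s + 1].append(order)
    return [statistics.pvariance(o) for o in orders_seen]

variances = bullwhip()
print([round(v, 2) for v in variances])       # variance grows stage by stage
```

Sharing the end-customer demand directly with every stage, as in the point-of-sale example above, removes the cascaded over-reaction that drives the amplification.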

Measuring and Monitoring Productivity. Despite an ambitious, well defined quality agenda, there has been little direct interaction between the engineering community and the health care community in the development of productivity measures and monitoring systems. Until recently, the measurement of productivity in the health care sector has been seriously hampered by a limited understanding of the relationships between inputs, outputs, and outcomes for different patient populations. For the most part, health care providers are trained to focus on the unique characteristics and needs of individual patients; they have very little training or perspective on the characteristics (and needs) of patient populations.

The advance of evidence-based medicine and disease management, which focus on patient populations, is encouraging the development of more uniform/standardized output- and outcome-based performance measures based on the response of defined patient populations to best-practice, “standardized” interventions. For example, for patient populations with x condition (and y degree of severity), there is a best-practice treatment (i.e., the most evidence-based, safest, timeliest, most patient centered treatment) that yields the best outcome (i.e., the most positive change in health state) for the lowest cost (i.e., the most efficient use of inputs and infrastructure).

Modeling and simulation of care delivery processes and systems can help care provider teams and organizations better understand, test, and optimize the processes/systems that support best-practice use of inputs (e.g., people, resources, facilities, equipment, information on patient conditions, evidence-based medicine) to achieve “best” outputs that contribute to best patient outcomes. Over time, the progress of automation, the widespread implementation of information/communications systems in health care delivery, and advances in genomics/proteomics should enable the capture of more detailed input, patient population, process, and outcomes data. This will lead to a more sophisticated understanding and better measurements of the quality and productivity performance of health care delivery at all levels of the system and facilitate the application of more sophisticated analytical and predictive systems tools.

At the present time, the health care system, like many other service industries, does not have good measures of productivity. Although the efficiency of a given unit can often be determined, measuring the efficiency and productivity of a system is much more difficult. With the help of a number of the systems tools described above, performance metrics can be established and the impact of various changes on those metrics can be estimated. Additional research on the measurement of productivity would be of great benefit to the health care community.

Financial-Engineering Tools for Risk Management

The effective operation of any system requires management of risks. In health care, risk management is critical because of the substantial personal risks for individual patients and the financial and reputational risks for providers, insurers, and purchasers of health care. Risk-management tools can substantially improve the delivery of health care by improving the financing of operations and the allocation of resources, reducing individual exposures to extreme risks, and creating incentives for improving processes. Tools to assist in decision making in the presence of risks are as useful to individual patients and care teams as they are to organizations and the regulatory agencies and other actors in the larger environment.

In this section, risks are identified and general processes of risk management and financial engineering are described. This is followed by a description of financial-engineering tools that could have significant benefits for the health care delivery system at the organizational and environmental levels of the system.

In this report, risk is defined broadly as the chance of injury, damage, or loss, and the focus is on reducing variations that lead to extreme risks. The goal of risk management is to reduce risk to the patient, caregiver, or organization by ensuring predictability in the use of resources within the constraints of a fixed expenditure of funds.

Effective risk management requires that the kinds of risk be differentiated. In health care, individual risks, or patient risks, are potential compromises to the health of an individual caused by some action of the system. Other kinds of risk involve potential losses at higher levels of the health care system. Care team members face occupational risks, such as exposure to disease, physically demanding duties, and workplace hazards (e.g., exposure to toxic substances, radiation, and equipment malfunctions). Health care organizations also face a variety of risks (McDonough et al., 2004):

  • operational risk, which includes all risks associated with the delivery of services
  • competitor risk, such as the potential of losing market share to competitors
  • financial risk, such as the risk of nonpayment or reduced payment for services or the risk of significant financial liability
  • environmental risk, such as the risk of damage by forces external to the organization
  • model risk, that is, the risk that the models used for evaluating other types of risk are not accurate

Risks at the political-economic environmental level arise not only within individual organizations, but also from interactions among organizations, the lack of adaptability of organizations, and the misalignment of individual and societal objectives.

Risk management in the health care system involves the analysis and assessment of risks, as well as the development of strategies to reduce risk, protect against losses, and ensure that risks transferred from one agent to another are compensated fairly. Risk management generally answers the following questions:

  • What can create a loss?
  • How often, how severe, and when can losses occur?
  • Which losses are manageable?
  • How can risks be transferred elsewhere?
  • What is fair compensation for assuming or releasing risks?
  • How does risk affect the overall strategy of the organization?

In a general corporate context, risk management can lead to more productive employees; less volatility in revenue and cost changes; better coordination among organizational units, as well as with suppliers and customers; more effective purchases and sales of risk-based products; and the development of organizational structures that achieve risk-management goals.

One of the key tools for risk management is financial engineering, the application of mathematical and computational tools to financial issues (see Mulvey, in this volume). Financial engineering includes modeling and predicting markets, evaluating options and other financial derivatives, allocating assets and liabilities, trading in financial markets, determining policies for efficient market development, and providing quantitative and information services for financial markets. The overall goal of most financial engineering is to increase return on resources invested (a measure of performance or effectiveness) while reducing risk.

An increase in return on investment combined with a simultaneous reduction in risk ultimately increases efficiency. The objective is to increase the output for a given amount of input, as well as to control the reliability, predictability, and consistency of the process that creates these outputs. Viewed as a mechanism for improving efficiency, financial engineering is a product of the traditional fields of industrial engineering and operations research, for which the overall goal is to produce a system that yields the best possible product or process in terms of quality, customer/patient value, low cost, and timely response.

The following sections describe three major areas of financial engineering that are most relevant to the risk-analysis and risk-management needs of health care organizations and environmental-level actors.

Predicting and Assessing Uncertain Outcomes: Stochastic Analysis and Value-at-Risk

To manage risk, it must first be quantified, analyzed, and forecast. Some analyses assume existing conditions and rely on statistical descriptors of the frequency and extent of previous outcomes. Statistical analysis focuses on what has happened in the past and how it relates to an entire population. Stochastic analysis, the main type of analysis in financial engineering, infers current or future behavior for systems with random outcomes that follow assumed, observed, or approximated distributions. Stochastic analysis is also the tool used in predicting and quantifying risk.

The financial-engineering concept of value-at-risk (VaR) is a widely used tool of stochastic analysis. VaR is used to measure the worst expected loss over a given time interval under normal market conditions at a given confidence level (Jorion, 1997). For example, a bank with a billion-dollar portfolio might state that its daily VaR is $10 million at the 99 percent confidence level. This means there is only one chance in 100, under normal market conditions, that the bank will experience a loss of more than $10 million in a day. The VaR summarizes the bank's exposure to market risk and the probability of an adverse move. If managers and shareholders are uncomfortable with the level of risk, the process used to calculate the VaR can be used to decide where risk should be reduced.

VaR has become a standard measure for the banking industry and a required measure in regulatory compliance for capital requirements. Because VaR captures potential loss and likelihood, it has also become a common measure for firms outside the banking industry and may become a general standard in risk management.

A basic form of VaR estimation is to assume a probability distribution for the values of risky assets. For a bank that holds stocks, for example, the distribution might be a form of multivariate-normal or log-normal distribution. A typical VaR analysis would then form an estimate of the parameters of this distribution, namely the mean, variances, and covariances of the stock returns over a given period of time. Generally, these estimates are based on statistical analyses of stock returns (perhaps with corrections for current conditions). Once the parameters have been determined, the overall value distribution of the portfolio can be found. VaR at the 99 percent confidence level is then the difference between the current value of the portfolio and the first percentile of that value distribution.
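The parametric calculation just described can be sketched in a few lines. This is an illustrative sketch, not a production risk model: the normal return distribution, the zero mean, and the 0.43 percent daily volatility are assumptions chosen to roughly reproduce the billion-dollar bank example given earlier.

```python
from statistics import NormalDist

def parametric_var(portfolio_value, mu, sigma, confidence=0.99):
    """One-period value-at-risk under an assumed normal return distribution."""
    # Worst-case return at the given confidence level (the 1st percentile for 99%).
    z = NormalDist().inv_cdf(1.0 - confidence)
    worst_return = mu + z * sigma
    # VaR is the loss relative to the current portfolio value.
    return -portfolio_value * worst_return

# A $1 billion portfolio with zero mean daily return and 0.43% daily volatility
# yields a daily VaR of roughly $10 million at the 99 percent confidence level.
var_99 = parametric_var(1_000_000_000, mu=0.0, sigma=0.0043)
```

The same routine, run with larger volatilities or lower confidence levels, shows how the risk summary responds to changing market conditions.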

VaR also has useful characteristics for health care analyses. Besides being used for financial management in health care, VaR can also be used to assess potential losses for many groups of insured patients in a given period. Cash requirements for health care organizations can be estimated and the value of assuming and transferring risks can be assessed.

VaR is just one financial-engineering tool that may be of benefit to health care. Other tools that might be relevant include credit assessments of individuals and organizations, pricing of services, pricing of risks that exceed given levels (i.e., derivative pricing), and valuations of combined risks for engaging in multiple markets (e.g., worker compensation, health insurance, and life-risk insurance).

Optimization Tools for Individual Decision Making

Financial-engineering tools are not only descriptive (e.g., stochastic and statistical analyses), but also prescriptive. Thus, they can provide a basis for making decisions to yield the best results. Decision making under uncertainty (also known as stochastic programming), an essential aspect of operations research, is based on optimization tools that compute the best values for decision variables in a mathematical representation of the decision process and its measured outcomes (objectives). Optimization tools in this context rely on the stochastic analysis of the effects of uncertainty and the relationship between those effects and the decisions represented in the mathematical model.

Optimization under uncertainty has had a wide range of applications in the financial industry, as well as in manufacturing, several other major services industries (e.g., telecommunication, transportation, and energy), and many aspects of the health care industry (e.g., radiation treatment, cancer diagnosis, and combined drug therapies). In the financial industry, these tools are often used to optimize portfolios by assigning weight to each characteristic of each asset to predict a certain return with the least risk. (In 1990, Markowitz, Miller, and Sharpe were awarded the Nobel Memorial Prize in economics for their work on portfolio optimization theory). Recent portfolio optimizations include asset-liability management (i.e., the coordination of assets and liabilities over time).
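The mean-variance idea behind portfolio optimization can be illustrated with a two-asset sketch: minimize portfolio variance subject to meeting a target expected return. The returns, volatilities, correlation, and target below are hypothetical, and a simple grid search stands in for the quadratic-programming solvers used in practice.

```python
def portfolio_stats(w, mu, sigma, rho):
    """Expected return and variance of a two-asset portfolio with weight w on asset 1."""
    mean = w * mu[0] + (1 - w) * mu[1]
    var = (w * sigma[0]) ** 2 + ((1 - w) * sigma[1]) ** 2 \
        + 2 * w * (1 - w) * rho * sigma[0] * sigma[1]
    return mean, var

def min_variance_weight(mu, sigma, rho, target, steps=10_000):
    """Grid search for the weight that minimizes variance while meeting the return target."""
    best = None
    for i in range(steps + 1):
        w = i / steps
        mean, var = portfolio_stats(w, mu, sigma, rho)
        if mean >= target and (best is None or var < best[1]):
            best = (w, var)
    return best

mu = (0.08, 0.03)      # assumed expected annual returns
sigma = (0.20, 0.05)   # assumed annual volatilities
w, var = min_variance_weight(mu, sigma, rho=0.1, target=0.05)
```

With these numbers the return constraint binds, so the optimizer holds just enough of the riskier asset to reach the 5 percent target.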

Portfolio optimization and asset-liability management have direct implications for health care. For example, a health care insurance organization can use these tools to determine optimal allocations of risks in terms of geographic regions, patient demographics, and investment capital to meet the needs of insured individuals. An insurance organization can also assess the value of expanding coverage into related lines, such as life insurance, and the incremental reductions in capital needs that result from risk pooling.

Besides allocating liabilities and assets, a health care organization might use financial-engineering optimization tools to price services to determine the most efficient distribution of resources. Determining optimal prices, often called revenue management or yield management, is a common practice in the airline industry. In general, the goal is to determine allocations of scarce resources (e.g., passenger seats) that can be made available at different prices. In health care, revenue management might be used, for example, to schedule elective procedures, bundle services associated with different diagnoses or the management of chronic conditions, and determine priorities for scarce resources, such as diagnostic equipment, operating rooms, and critical care facilities. Asset-liability management and revenue management are only a sampling of optimization-based tools in financial engineering and risk management. Besides the direct implications for health care suggested above, they may also have less obvious applications that will require additional research.

Market Models

In general, the issues of moral hazard and adverse selection (i.e., the incentives to take hidden actions or to conceal private information when there is no penalty for doing so) present greater difficulties in the health care context than in financial services, and addressing these issues will require modeling of an organization's decisions as well as of individual patients' decisions. In these cases, the models must go beyond individual decision making to distributed decision making.

Distributed Decision Making and Agency Theory

The overall performance of the health care system is determined by many decision makers—patients, providers, insurers, and payers. Individual optimization tools may determine the best outcome for a single agent in the system, but the collection of actions by these individuals may not lead to the best outcome for the performance of the entire system. Analyses of the overall system may include models of the market and the effect of each agent's actions on the efficiency of the results.

Market models combine the results of individual agents' optimal decisions and generally iterate among decision makers to find an overall system equilibrium in which no agent has an incentive to change his or her decision. The result is a model of reality that can be used to determine the effects of different market structures, regulations, and external incentives.
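A minimal sketch of this iteration, under heavy simplifying assumptions: two providers price partially substitutable services with linear demand and zero marginal cost, and each repeatedly plays its best response to the other's last price until neither wants to change. The demand parameters are invented for illustration only.

```python
def best_response(p_other, a=10.0, b=2.0, c=1.0):
    """Profit-maximizing price given the other agent's price (zero marginal cost).
    Demand for agent i is a - b*p_i + c*p_other, so profit is maximized
    at p_i = (a + c*p_other) / (2b)."""
    return (a + c * p_other) / (2 * b)

def find_equilibrium(p1=1.0, p2=1.0, tol=1e-9, max_iter=1000):
    """Iterate among decision makers until no agent wants to change its price."""
    for _ in range(max_iter):
        new_p1, new_p2 = best_response(p2), best_response(p1)
        if abs(new_p1 - p1) < tol and abs(new_p2 - p2) < tol:
            break
        p1, p2 = new_p1, new_p2
    return p1, p2

p1, p2 = find_equilibrium()
```

The iteration converges to the price a/(2b - c) for both agents, the point at which each price is already a best response to the other.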

One economic sector that has benefited from distributed decision-making models is the energy industry, which changed from a regulated monopoly (or collection of regulated monopolies) to a variety of forms of open competition in commodities, such as natural gas and electricity. Distributed-decision models have been used to assess the value of market mechanisms relative to a central system to determine resource allocations that minimize overall societal cost. If an individual agent's behavior does not contribute to the socially optimal outcome, the difference is often called the agency cost. Such costs arise, for example, in decisions by corporate executives who represent shareholders to the detriment of bondholders. Agency theory quantifies these costs and analyzes the value of alternative contracts and procedures to reduce them.

Distributed-decision models could be used to analyze the health care system, which has a multiplicity of independent agents. The impact of varying patient costs, insurance coverage, and care convenience, for example, can be incorporated into an overall system model to determine optimal decision processes for all agents. The results of these analyses could be used to determine tax policies, Medicare and Medicaid payments, and insurance regulations and their impact on the overall efficiency of the health care system. Because of the enormous size of the health care system and the wide variety of interests and objectives of its participants, analyses would be challenging. This is another area where research will be necessary.

Knowledge Discovery in Databases

In addition to supporting models of the system, the large amounts of data collected about products, customers, and markets and entered into databases can be accessed to provide information about the location of sales, the temporal variability of sales, returned products, and other detailed information. Large databases also often contain embedded knowledge that goes beyond the obvious. Customer surveys often ask questions that seem unrelated to the purchase of a particular item, and that information may be a rich source of insight. If, for example, a company determines that a large fraction of a group of customers regularly purchases both its product A and another manufacturer's product B, the company can use this knowledge to reach selected potential customers or sharpen the focus of an advertising campaign. If a database reveals customer loyalty to a particular manufacturer, that loyalty can become a key marketing objective.

Large databases can provide a basis for addressing system-wide issues in health care. Information in databases can reveal relationships that are not obvious from an examination of a smaller number of instances. The detailed medical history of a large group of patients can reveal interactions among drugs or the epidemiological role of certain drugs in specific diseases. Mitchell (1997) has reported that data mining successfully predicted that women who exhibited a particular group of symptoms had a high risk of requiring emergency C-sections. McCarthy (1997) describes how Merck-Medco Managed Care has used data mining to help identify less expensive drug treatments that are equally effective for specific patients. By examining a large Medicaid database, Ray et al. (2004) found an unrecognized risk of sudden cardiac arrest from a commonly prescribed antibiotic used in combination with other drugs or substances that inhibit the breakdown of the antibiotic. In these examples, information in a database can replace anecdotal observations with a large number of examples. Databases can also be explored to forecast health care costs, plan system management, and set prices for services.

Information from databases created for different purposes (e.g., financial reporting and patient history) must usually be modified before it can be analyzed. Pertinent information from different databases must be grouped together, vacant fields removed or average values inserted, duplicated files eliminated to ensure statistical integrity, and accuracy of the data confirmed. These steps can be both time consuming and difficult.

Data Mining

Four different types of information can be extracted from databases using computer techniques:

  • classifications (e.g., characteristics that suggest a high probability that a patient will have a stroke before age 55)
  • estimations (e.g., if the rate of change in potassium exceeds some limit, a patient may be at increased risk for an arrhythmia)
  • variability (e.g., practitioner-to-practitioner variations in procedures)
  • predictions (e.g., the likely number of deaths from the flu virus in the winter of 2005)

Once a set of independent variables is identified, the analysis can then continue to determine the relationship to a dependent variable:

  • Is a patient with symptoms A and B likely to develop symptom C?
  • What is the efficacy of drug D for the treatment of symptom E?
  • Is there evidence that patients taking drugs G and H are more or less likely to develop a particular side effect?
  • Is the tendency more pronounced for patients over 60?

All of the examples above can be called supervised learning strategies. Questions are posed, and the computer searches the database to determine whether relationships exist or can be quantified for specific variables.

Another approach is to instruct the computer to search for clusters of attributes that show either positive or negative correlations, so-called unsupervised clustering. This technique may be useful for identifying atypical instances. For example, data points representing outliers may be particularly useful for identifying undesirable reactions to certain combinations of drug treatment that could reveal relatively improbable, but very troublesome, events. Clearly, there is a need for better data mining of multiple drug interactions, tracing of adverse events, and updating of analyses as new drugs come on the market.
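A crude form of this kind of unsupervised screening can be sketched with standard-score cutoffs; real clustering and outlier-detection methods are considerably more sophisticated. The (dose, lab-value) records and the threshold below are hypothetical.

```python
from statistics import mean, stdev

def flag_outliers(records, threshold=3.0):
    """Flag records more than `threshold` standard deviations from the mean
    on any attribute -- an illustrative, simplified outlier screen."""
    n_attrs = len(records[0])
    mus = [mean(r[i] for r in records) for i in range(n_attrs)]
    sds = [stdev(r[i] for r in records) for i in range(n_attrs)]
    flagged = []
    for r in records:
        if any(sds[i] > 0 and abs(r[i] - mus[i]) / sds[i] > threshold
               for i in range(n_attrs)):
            flagged.append(r)
    return flagged

# Hypothetical (dose, lab-value) pairs; one record sits far from the rest.
data = [(1.0, 5.1), (1.1, 5.0), (0.9, 4.9), (1.0, 5.2), (1.05, 5.0), (1.0, 9.8)]
outliers = flag_outliers(data, threshold=2.0)
```

In a drug-interaction setting, the flagged records would be the improbable but troublesome events worth investigating further.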

Predictive Modeling

If a causal relationship has been established between sets of variables by data mining, and if the statistical significance of these relationships is high, a predictive model can be constructed to predict the consequence of various actions. For example, a model might state that a patient with symptoms X, Y, and Z who is treated with drugs A, B, and C will have a high probability of specific reactions. In principle, the relative importance of each variable must be known for the model to be effective. In practice, however, if all of the principal variables are included in the model, it is often a fair assumption that they have equal influence on the final output. Obviously, a large amount of data is necessary to find enough patients with the three identified symptoms and the three specific drugs to ensure the statistical significance of the results. The variability in inputs and outputs to the model and the number and independence of the observations are critical to determining the statistical significance of predictions.
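The equal-influence assumption described above amounts to a simple additive risk score, sketched below. The factor names are hypothetical, and in practice each factor's weight would be estimated from the data rather than assumed equal.

```python
# Established risk factors (hypothetical names), each assumed to
# contribute equally, mirroring the "equal influence" assumption.
RISK_FACTORS = ("symptom_x", "symptom_y", "symptom_z", "drug_a", "drug_b", "drug_c")

def risk_score(patient):
    """Fraction of established risk factors present in a patient record."""
    present = sum(1 for f in RISK_FACTORS if patient.get(f, False))
    return present / len(RISK_FACTORS)

# A patient with two symptoms and one drug from the factor list.
patient = {"symptom_x": True, "symptom_y": True, "drug_a": True}
score = risk_score(patient)
```

The score's statistical significance depends, as the text notes, on having enough patients with each combination of factors to estimate the underlying probabilities reliably.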

Neural Networks

In the absence of large comprehensive databases, neural networks have been used to achieve the same purpose as predictive modeling. Once a relationship has been observed among the three symptoms X, Y, and Z and the drugs A, B, and C, the problem is to determine the strength of the interactions. In contrast to the assumption that all of the independent variables have equal influence, the assumption here is that their influences are not equal.

A neural network consists of several layers, each of which contains a number of nodes. Each node in the first layer is connected to all of the nodes in the next layer. This is repeated for subsequent layers. The number of nodes in the first layer is equal to the number of independent variables; and values for the attributes of the independent variables are entered into the nodes in the first layer. Each node connection is then weighted.

Based on the values of the input variables, the connections in the network, and the weights assigned to the nodes, the output values for the dependent variables can be calculated. The calculated outputs are then compared with known values for the dependent variables determined from the database. If the output values and known values differ, the weights for nodes in the network are adjusted. This process, called network learning, continues until the output of the network reflects the output that is known for particular data inputs. In use, the "learned" weights are kept fixed and the values of the input variables are changed, thereby allowing an examination of the impact of different strengths for the independent variables.
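The weight-adjustment loop described above can be sketched in miniature. To keep the example self-contained and reliably convergent, it uses a single sigmoid unit trained by the delta rule on a toy two-factor relationship (logical AND), rather than a full multi-layer network; the data, learning rate, and epoch count are all illustrative.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, lr=0.5, epochs=10_000):
    """Adjust weights until the unit's output reflects the known outputs."""
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, known in samples:
            out = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
            err = known - out              # difference from the known value
            grad = err * out * (1 - out)   # sigmoid gradient (delta rule)
            weights = [w + lr * grad * x for w, x in zip(weights, inputs)]
            bias += lr * grad
    return weights, bias

def predict(weights, bias, inputs):
    """With learned weights fixed, vary the inputs to examine their impact."""
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

# Toy relationship: the outcome occurs only when both factors are present.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
```

After training, the learned weights are held fixed and the inputs varied, just as the text describes for examining the strength of each independent variable.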

Neural networks and related “learning” methods have been developed for use in aspects of health care where the amount and kind of data available require unconventional approaches. Examples include prediction and control in neurosurgical intensive care, decision support in acute abdominal pain, automatic detection of emphysema, diagnosis of acute appendicitis, and predicting clinical outcomes for neuroblastoma patients (Eich et al., 1997; Friman et al., 2002; Pesonen, 1997; Swiercz et al., 1998; Wei et al., 2004).


Systems-control tools are primarily used to ensure that processes are operating within their prescribed limits, thereby reducing errors and improving the use of resources. Controlling systems requires a clear understanding of performance expectations and the operating parameters that affect the achievement of those expectations. Control, therefore, depends on measuring parameters and adjusting them to achieve the desired operating levels. Robust control systems require that process and outcome data be collected and made accessible in real time so that operators can make timely and appropriate decisions that will improve system quality and increase productivity. Obviously, some of the tools discussed in the sections on design and analysis tools are also applicable to systems control (e.g., FMEA can be used to identify key process variables).

The control of a complex system must be based on a comprehensive understanding of interactions among the elements in the system and taking actions necessary to ensure smooth operation. Because health care delivery depends largely on human intervention, the control of the system depends on the proper allocation of critical manpower. Although systems-control tools are most often used at the care team and organizational levels, the principles underlying systems control can also be relevant to individual patients actively participating in their own treatment (e.g., to ensure the regular administration of drugs or treatment or the measurement of vital signs) (Table 3-5).

TABLE 3-5. Systems-Control Tools.



Statistical Process Control

With statistical process control (SPC), a provider of a given procedure can know whether that procedure is within acceptable limits and, if not, whether corrective actions should be taken. Effective clinical practice depends on the correct interpretation of data, whether the data relate to a patient's blood glucose level or the time between a patient's heart attack and the administration of thrombolytics. Data measure the quality and outcome of an action most effectively when they are displayed over time. The most basic method of displaying data over time is the run chart, in which data points are plotted against time on the X axis, with the measured variable on the Y axis (Figure 3-1). The goal line across the bottom is added as a vantage point from which to judge performance.

FIGURE 3-1. A run chart showing blood coagulation levels as a function of time. Source: IHI, 2003.

A clinician attempting to determine whether a patient's blood glucose level is under control (or stable) over time by direct sampling is confronted with the problem of determining how an individual result should be interpreted. Because a single observation is subject to some variability, a single measurement that differs significantly from the mean may signal a problem with the patient, or it may be the result of a statistical fluctuation. Determining which of these is correct could require a large number of observations.

The purpose of SPC is to allow a clinician to determine the status of the variable with a limited number of measurements, even when the variable is subject to random fluctuations. Many patient variables must be managed over time, such as blood glucose level, blood pressure, and prothrombin time. SPC can be a critical tool for helping a clinician analyze data quickly (1) to determine whether the process being measured is under control, that is, whether fluctuations are the result of random events or a systematic change, and (2) to ensure that the process of care can lead to the desired outcome, such as stable blood coagulation levels within the specifications established by best practices. Although these two functions are related, it is important to note that a process can be under control but still lead to an undesired outcome. For instance, blood glucose levels may be very stable at a high level, say 250 mg/dL; the process could then be said to be under control but could still lead to an inappropriate patient outcome.

Control Chart

The control chart provides a way of detecting whether a process is under control. A limited number of measurements are made over time, and the mean and range are then calculated (Figure 3-2). The acceptable variation in the process is designated by an upper control limit (UCL) and a lower control limit (LCL), which are calculated from the range and mean of the measurements. The UCL is generally set three standard deviations above the mean, and the LCL three standard deviations below it.

FIGURE 3-2. Control chart showing the percentage of INRs (a measure of blood coagulation) within 0.5 of the desired range. Source: IHI, 2003.

When data vary within that range, the variation is typically due to common causes. Data points outside the control limits signal a special cause and indicate the likelihood that something in the care process has fundamentally changed. Sometimes the change is intended by the clinician—for example, a change in dosage that dramatically lowers a patient's blood pressure or body temperature. The control chart in Figure 3-2 shows the percentage of INRs (a measure of blood coagulation) within 0.5 of the desired range. Fluctuations are between the UCL and LCL, and no points fall outside that range. Therefore, these variations appear to be from common causes, and the process of care appears to be stable. This may or may not be the result that was planned. For example, if the goal is for INRs to be within 0.5 of the desired range for 90 percent of patients, the process of care would have to be improved.
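The limit calculation behind such a chart can be sketched as follows. The INR-like readings are invented, and for simplicity sigma is estimated with the plain sample standard deviation of an in-control baseline period, whereas Shewhart charts typically estimate it from subgroup ranges.

```python
from statistics import mean, stdev

def control_limits(baseline):
    """Center line and three-sigma control limits estimated from an in-control baseline."""
    center = mean(baseline)
    sigma = stdev(baseline)
    return center, center + 3 * sigma, center - 3 * sigma

def special_causes(baseline, new_points):
    """Points outside the control limits signal a special cause."""
    center, ucl, lcl = control_limits(baseline)
    return [x for x in new_points if x > ucl or x < lcl]

# Hypothetical stable INR readings, then two new measurements to check.
baseline = [2.4, 2.5, 2.6, 2.5, 2.4, 2.6, 2.5, 2.4]
flagged = special_causes(baseline, [2.5, 4.9])
```

A reading of 2.5 falls within the common-cause band; 4.9 falls outside the limits and would prompt a search for a fundamental change in the care process.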


Optimizing the scheduling of personnel (e.g., the nursing staff) is critical to the performance of a system. Scheduling is basically an operations method of matching supply and demand to achieve desired goals or objectives. Tools are available to accomplish this, even when the available resources are limited. Scheduling can help a system make the best use of its personnel, facilities, and inventories. Scheduling can also help “smooth out” demands, such as inpatient arrivals, outpatient arrivals, requests for testing, and so on.

Optimal, or even efficient, scheduling is one way manufacturing and service industries reduce costs and at the same time improve quality and safety. Effective scheduling has several basic requirements:

  • a thorough understanding of work processes, work, workload, and work flow
  • a complete analysis of the specific steps and sequences of work
  • an assessment of available technologies and the creation of new technologies to reduce costs and/or improve quality
  • a good forecast of future demands
  • appropriate sizing of staff, inventories, and facilities to meet demands
  • the smoothing out of variations in demand and work processes
  • the avoidance of congestion and bottlenecks

Scheduling models have been used in several areas of health care delivery:

  • inpatient scheduling in acute and long-term care settings
  • outpatient and clinic scheduling
  • workforce scheduling in hospitals, home health care, long-term care facilities, and clinics
  • ambulance and emergency-vehicle scheduling
  • scheduling for planning and acquisition of facilities and technology capacity
  • scheduling for pharmacy, laboratory, radiology, housekeeping, food services, and other departments in an institution

Costs can be reduced and quality and safety improved through proper scheduling of patients, personnel, equipment, facilities inventories, and other assets. Before scheduling can begin, however, key processes must be analyzed and optimized, and work, workload, and forecasted demands must be measured.

Forecasting Demand

Forecasts require descriptions of past levels of demand by categories/products and projections of future demands. In some cases, a simple average of past demands on a system may be used as a forecast. In other cases, probability distributions of past events are used to predict the nature of future events. For hospitals, demands may change in cyclical, seasonal, or just random patterns; changes may be in hourly, daily, weekly, or monthly demands for hospital beds, operating rooms, or emergency care. Although random fluctuations in demand are unavoidable, trends and/or cycles or patterns of demand can be relatively predictable.
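One simple way to turn past demand into a projection is exponential smoothing, which weights recent demand more heavily than older demand. This is a sketch with hypothetical arrival counts and an assumed smoothing constant; real hospital forecasts would also model the cyclical and seasonal patterns mentioned above.

```python
def exponential_smoothing(history, alpha=0.3):
    """One-step-ahead forecast by simple exponential smoothing.
    alpha (assumed here) controls how quickly old demand is discounted."""
    forecast = history[0]
    for demand in history[1:]:
        forecast = alpha * demand + (1 - alpha) * forecast
    return forecast

# Hypothetical daily emergency department arrivals over a week.
arrivals = [112, 108, 120, 115, 109, 118, 122]
next_day = exponential_smoothing(arrivals)
```

The smoothed forecast tracks the recent upward drift without chasing every random fluctuation, which is exactly the trade-off a forecaster tunes with alpha.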

Assessing Workforce Size

Assessing workforce size is a complex process that involves: (1) multiple categories of patients with different requirements for care; (2) service standards in patient care; (3) multiple levels of nursing skills; and (4) variability in times per day of patient care and variability in numbers of patients.

Setting Service Standards

In manufacturing, service standards involve on-time delivery, minimization of defective products, and warranties and guarantees. In non-health service industries, service standards usually involve providing service at acceptable levels of quality for a given price. In health care, some standards can be set easily, such as correct medications at appropriate times. Other standards, such as the type and frequency of interventions to prevent disease or improve health and the quality of life, may be difficult to set.

Assessing Workforce Size and Skill Mix

In assessing workforce size, work must be organized to meet an average requirement, designed to accommodate natural variations from the average, and designed to ensure that the necessary number and mix of people is available to provide the desired level of service. One can easily estimate the mean and standard deviation of demand, but capacity decisions based on average demand do not account for demand that is higher or lower than average. Failing to satisfy requirements or having excess capacity can be very costly.
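A sketch of sizing to variable demand, under a normality assumption: covering only the average understaffs roughly half the time, while a quantile-based target adds safety capacity. The demand figures, service level, and nurse-to-patient ratio below are all hypothetical.

```python
import math
from statistics import NormalDist

def staff_needed(mean_demand, sd_demand, service_level=0.95, patients_per_nurse=4):
    """Nurses needed to cover patient demand with the given probability,
    assuming normally distributed patient counts (an illustrative assumption)."""
    patients = NormalDist(mean_demand, sd_demand).inv_cdf(service_level)
    return math.ceil(patients / patients_per_nurse)

avg_only = math.ceil(40 / 4)        # staffing to the average census of 40 patients
with_buffer = staff_needed(40, 6)   # quantile target adds capacity for variability
```

The gap between the two numbers is the safety capacity; its cost must be weighed against the cost of failing to meet above-average demand.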

Personnel Scheduling

In many ways, planning and scheduling health care personnel is conceptually similar to scheduling for personnel in other sectors. In some important ways, however, the problems in health care are more complex (Mullinax and Lawley, 2002). First, interrelations among highly trained and skilled personnel who must be available at appropriate times for different patients must be scheduled. Second, it is frequently difficult to measure quality of work, especially in terms of successful patient outcomes (see Box 3-4).

BOX 3-4

Nursing Assignments in a Neonatal Intensive Care Nursery. Intensive care nurseries provide health care for critically ill newborn infants. During a typical shift, infants range from those that need only occasional care to those that require constant attention.

Hershey et al. (1981) conceptualized the staffing process for nursing as a hierarchy with three decision levels (corrective allocations, shift scheduling, and workforce planning) operating over different time horizons and at different levels of precision. Corrective allocations are made daily, shift schedules are the days-on/days-off work schedules for each nurse for four to eight weeks ahead, and workforce plans are quarterly, semiannual, or annual plans of nursing needs by skill level. Because of time lags, workforce planning must be done early to meet anticipated long-term fluctuations in demand and supply.

Effective shift scheduling (i.e., scheduling that meets the health care needs of patients and satisfies the preferences of nurses at minimal cost) is a complex problem that has attracted the interest of operations researchers. The earliest and simplest scheduling model, the cyclic schedule, repeats a fixed pattern of days on and off for each nurse indefinitely into the future but cannot make adjustments for forecasted changes in workload, extended absences, or the scheduling preferences of individual nurses. Rigid schedules place heavy demands on the corrective-allocation and workforce-planning levels to avoid excessive staffing (Hershey et al., 1981). In flexible scheduling, the preferences of staff are considered in scheduling decisions (Miller et al., 1976; Warner, 1976). More sophisticated techniques (e.g., simulation and mixed-integer programming) have been used to schedule other personnel (Tzukert and Cohen, 1985; Vassilacopoulos, 1985).
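The flavor of these scheduling formulations can be illustrated with a toy version of the classic days-off scheduling problem, solved here by brute force for clarity; realistic instances are solved with mixed-integer programming, and the daily requirements below are invented:

```python
from itertools import product

# Days-off scheduling at toy scale: each nurse works five consecutive days
# and then has two days off. Choose how many nurses start on each weekday
# so that every day's requirement is covered with the fewest total nurses.
# (Illustrative requirements; real problems use MIP solvers, not enumeration.)
required = [3, 3, 4, 4, 5, 2, 2]  # Mon..Sun

def coverage(starts):
    """Nurses on duty each day, given nurse counts by start day."""
    cov = [0] * 7
    for s, n in enumerate(starts):
        for d in range(5):              # five consecutive working days
            cov[(s + d) % 7] += n
    return cov

best = None
for starts in product(range(5), repeat=7):   # small search space: 5**7 patterns
    cov = coverage(starts)
    if all(c >= r for c, r in zip(cov, required)):
        if best is None or sum(starts) < sum(best):
            best = starts

print("nurses starting each day:", best, "total nurses:", sum(best))
```

Even this toy instance shows the structure of the real problem: coverage constraints per day, a cost to minimize, and integer decisions, which is why mixed-integer programming is the natural tool at realistic scale.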

Improving Hospital Flow

Busy emergency departments must handle three inflows of patients: (1) patients who need emergency service but do not require admission to the hospital; (2) patients who need emergency services and do require admission; and (3) patients who do not need emergency care but use the emergency room as their primary source of health care. When the emergency room is the patient's primary destination and admission to the hospital is not required, segmentation and queuing methods, as described previously, can be extremely helpful in shortening waiting times and delays.
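How strongly waiting depends on capacity, which is what makes queuing methods useful here, can be seen from the Erlang-C formula for an M/M/c queue (Poisson arrivals, exponential service, c identical servers); the arrival and service rates below are invented for illustration:

```python
from math import factorial

# Erlang-C probability that an arriving patient must wait in an M/M/c queue.
# Illustrative rates: 10 arrivals/hour, each server handles 4 patients/hour.
def erlang_c(arrival_rate, service_rate, servers):
    a = arrival_rate / service_rate          # offered load in Erlangs
    rho = a / servers                        # utilization, must be < 1
    top = a**servers / factorial(servers)
    bottom = (1 - rho) * sum(a**k / factorial(k) for k in range(servers)) + top
    return top / bottom

for c in (3, 4, 5):
    print(f"{c} servers: P(wait) = {erlang_c(10, 4, c):.2f}")
```

Adding a single server to a heavily loaded system cuts the probability of waiting by half or more, which is why small capacity and segmentation decisions can have outsized effects on delays.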

Historically, groups 1 and 3 have been at the mercy of group 2. From 1997 through 2000, the Institute for Healthcare Improvement worked with emergency departments at 91 hospitals, representing a total of 2.6 million visits per year, to reduce waiting times and delays and increase patient satisfaction (IHI, 2003). The hospitals experimented with a fast track for patients who met specified criteria. Testing and measurement showed that 83 percent of all patients used the emergency departments between 9:00 a.m. and 1:00 a.m. A fast track through which 46 percent of these patients could be seen produced improvements of up to 30 percent in length of stay and patient volume.

The remaining 17 percent of patients, who required admission to the hospital, presented the greatest challenge to hospital flow. Emergency departments often divert some of these patients to other hospitals because their own hospitals do not have the space to move patients forward (Committee on Government Reform, 2001; GAO, 2003). Moreover, critical shortages of intensive-care beds have led to an increasing number of ambulance diversions and prolonged stays in emergency departments (Besinger and Stapczynski, 1997; Goldberg, 2000).

Addressing this problem requires a system-wide approach that includes the flow of inpatient beds. Otherwise, techniques to manage emergency department flow will have a limited effect on hospital diversion rates and will not address the problem of patients being “boarded” in emergency departments.

Queuing methods that work well for emergency department arrivals are equally well suited to other unscheduled patients, but they are less appropriate for scheduled patients. Because health care organizations must deal with both, queuing theory suggests that separate tracks be developed for scheduled and unscheduled patients. For instance, St. John's Regional Health Center in Springfield, Missouri, did so by setting aside one operating room for unscheduled emergent cases. With this simple maneuver, the hospital increased the number of surgical cases handled during normal business hours by 5.1 percent and reduced after-hours procedures by 45 percent. As a result, surgeons realized a 4.6 percent increase in revenue (IHI, 2003).

Patient Scheduling


Scheduling patients in clinics for outpatient services is one of the earliest documented uses of operations research to improve health care delivery. Bailey (1975), building on original work done in 1952, applied queuing theory to equalize patients' waiting times in hospital outpatient departments. He observed that many outpatient clinics are essentially a single queue with one or more servers. The problem then becomes creating an appointment system that minimizes patient waiting time while keeping servers busy.

The three most commonly used scheduling systems involve variations on block scheduling, modified block scheduling, and individual scheduling. In block scheduling, all patients are scheduled for one appointment time, for instance, 9:00 a.m. or 1:00 p.m. They are then served on a first-come, first-served basis. In modified block scheduling, the day is divided into smaller blocks (e.g., the beginning of each hour), and smaller blocks of patients are scheduled for those times, which decreases patient waiting time. By contrast, in individual scheduling systems, which are commonly used in the United States, patients are scheduled for specific times throughout the day, often depending on staff availability.
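The effect of these appointment rules on waiting can be sketched with a tiny single-server simulation; the session size and service times are invented, and a pure block schedule is compared with individual slots spaced at the mean service time:

```python
import random

random.seed(7)

# Toy single-server clinic session: 12 patients, exponential service times
# averaging 10 minutes (illustrative assumptions, not clinical data).
def waits(arrival_times, service_times):
    """Average patient wait for a first-come, first-served single server."""
    server_free, total_wait = 0.0, 0.0
    for arrive, service in zip(arrival_times, service_times):
        start = max(arrive, server_free)
        total_wait += start - arrive
        server_free = start + service
    return total_wait / len(arrival_times)

n, mean_service = 12, 10.0
services = [random.expovariate(1 / mean_service) for _ in range(n)]

block = [0.0] * n                                   # everyone told to come at 9:00
individual = [i * mean_service for i in range(n)]   # one slot per mean service time

print(f"avg wait, block schedule:      {waits(block, services):5.1f} min")
print(f"avg wait, individual schedule: {waits(individual, services):5.1f} min")
```

Under the block schedule every patient but the first absorbs the accumulated service time of those ahead, while spaced individual appointments shift that burden off the patients, which is the trade-off the scheduling literature formalizes.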

The extensive literature on outpatient scheduling began in the 1950s and peaked in the 1960s and 1970s. Because many studies were based on queuing or simulation, parametric distributions were determined for patient service times. Scheduling schemes to reduce patient waiting time without increasing physician idle time were analyzed using these distributions as inputs (Callahan and Redmon, 1987; Fries and Marathe, 1981; O'Keefe, 1985; Vissers and Wijngaard, 1979).


Inpatient scheduling has three major dimensions: (1) scheduling elective and emergency admissions into appropriate units of the hospital each day; (2) daily scheduling of inpatients into appropriate care units for treatment or diagnosis; and (3) scheduling patient discharges to their homes or other institutions. These scheduling activities are clearly linked and depend on many characteristics of the patients and the hospital. The models used for inpatient scheduling are more complex and require more data and better information systems than models for outpatients, and many of them build on queuing methods.

For scheduling admissions, queuing and simulation models are most often used. Early examples include a model of a five-operating room, 12-bed, postanesthesia care unit (Kuzdrall et al., 1981). Trivedi (1980) describes a stochastic model of patient discharges that could be used to help regulate elective admissions and meet occupancy goals. Other authors who have addressed this topic are Cohen et al. (1980); Green (2004); Hershey et al. (1981); Kao (1974); Kostner and Shachtman (1981); and Weiss et al. (1982).

Improving Overall Organizational Performance

In addition to the tools described above, businesses and industries have found a number of other ways to improve their performance and the quality of their products and services. Three examples are described below.

The Baldrige National Quality Program

The Malcolm Baldrige National Quality Award was created in 1987 to improve U.S. industrial competitiveness and encourage the pursuit of quality in all sectors of the economy. The Baldrige National Quality Program, a public-private partnership, presents awards to large manufacturing companies, small businesses, service organizations, educational organizations, and health care providers that demonstrate major improvements in the quality of their products or services. Winners achieve these improvements by reengineering processes, adopting continuous-improvement approaches, involving employees in decision making, analyzing the operation of all elements of the enterprise, and measuring and controlling operations to optimize performance. Comparisons of award winners with their competitors show that quality has improved in many economic sectors, and the national recognition associated with the award has motivated many organizations to improve their performance and the quality of their products and services (NIST, 2005).

Toyota Production System

In the early 1950s, Toyota introduced a variety of procedures that ultimately became known as the Toyota Production System. The system is designed to bring problems to light, resolve them, and improve the overall system to ensure that problems are not repeated. With this combination of procedures and processes, Toyota has become the leader in production efficiency and a producer of very high quality products. Toyota's ultimate goal is “defect-free operations” (Spear and Bowen, 1999). The reduction of waste, just-in-time inventory control, and the empowerment of individuals to contribute to continuous improvement in performance are just some aspects of Toyota's system that are applicable to health care delivery (see Bowen in this volume and Monden, 1983).

Six Sigma Method

The quality of a final product or process depends on many factors, including the complexity of the product and the controls in place at each step of production. Motorola introduced the concept of Six Sigma quality with the objective of creating a manufacturing operation that generates only two defective parts per billion; a defective part is defined as one whose performance falls outside its design specifications. However, because the mean value of the key parameter that characterizes the operating system frequently drifts (conventionally modeled as a shift of 1.5 standard deviations), the number of defective parts in practice is generally assumed to be approximately 3 to 4 parts per million (Harry, 1988).
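The arithmetic behind both figures is a direct normal-tail calculation, sketched here with Python's standard library:

```python
from statistics import NormalDist

z = NormalDist()  # standard normal distribution

# Two-sided specification limits at +/- 6 sigma with the process centered:
# about 2 defective parts per billion.
centered_ppb = 2 * z.cdf(-6.0) * 1e9

# The commonly cited Six Sigma figure assumes the process mean drifts by
# 1.5 sigma, leaving only 4.5 sigma to the nearer specification limit:
# about 3.4 defective parts per million.
shifted_ppm = z.cdf(-4.5) * 1e6

print(f"centered process: {centered_ppb:.1f} defects per billion")
print(f"1.5-sigma drift:  {shifted_ppm:.2f} defects per million")
```

The thousandfold gap between the two numbers shows how sensitive defect rates are to small drifts in the process mean, which is why controlling drift is central to the method.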

Full-time Six Sigma project managers are given formal classroom training in process analysis and statistical methods and are mentored by experts in the Six Sigma method. In some cases, they have focused on specific departments or processes, and, in other cases, the method has been used on an enterprise-wide basis to achieve a cultural transformation (Pexton, 2005).

Applicability to the Health Care System

The quality improvement programs described above, which use a range of systems-engineering tools and innovative management practices, were developed more than two decades ago largely for the manufacturing sector. Only very recently have they begun to be used to improve performance in the health care sector. Nevertheless, the adoption of these and related tools and strategies by a small but growing number of health care provider organizations has demonstrated their potential for improving all six dimensions of health care quality as defined by IOM.

A recent study by the Pittsburgh Regional Health Initiative describes a systems approach to redesigning work, in this case with the goal of eliminating central-line-associated bloodstream infections using techniques like those practiced at Toyota. Simple tools and devices significantly reduced the number of infections transmitted, and general procedures were subsequently changed accordingly (Shannon et al., in progress).

Many of the most common challenges addressed by the Six Sigma method are the same as the challenges facing health care (e.g., safety, technology optimization, market growth, resource utilization, length of stay, and throughput). Defects in health care might be the number of two-year-olds not completely immunized (per million two-year-olds in the population), the number of pregnant women who do not receive prenatal care in the first trimester (per million pregnancies), or the number of patients with clinical depression who are not diagnosed (per million patients with depression) (Chassin, 1998).

Medical professionals have made a number of attempts in recent decades to apply systems thinking to the problems of safety and quality, including actions to change the behavior of health professionals and patients (e.g., making changes in strategies and the division of labor) to improve system performance. For example, the Chronic Care Model developed by Dr. Ed Wagner of the Group Health Cooperative of Puget Sound identifies six areas of interconnected activity necessary for the management of patients with chronic disease. The model encourages interactions between care provider teams and chronic care patients and their families, who are trained and equipped to participate actively in the care delivery process (see Box 3-5). Batalden et al. (2003a,b) have documented and promoted “success characteristics” of clinical microsystems—small, functional, frontline units that provide most health care to most people (see also Godfrey et al., 2003; Huber et al., 2003; Kosnik and Espinosa, 2003; Mohr et al., 2003; Nelson et al., 2002, 2003; Wasson et al., 2003).

BOX 3-5

The Chronic Care Model. The Chronic Care Model, developed by Dr. Ed Wagner, director of the MacColl Institute for Healthcare Innovation at the Group Health Cooperative of Puget Sound, is based on the premise that good outcomes in health care (e.g., better (more...)

The Institute for Healthcare Improvement (IHI) has engaged large numbers of individuals and institutions in carrying out changes focused on improving many levels of the present system, using some of the systems tools described above (Box 3-6). Although some of these goals have proven difficult to achieve, many important lessons have been learned, and significant efforts have been made to disseminate them.

BOX 3-6

Institute for Healthcare Improvement. The Institute for Healthcare Improvement (IHI), a not-for-profit research center, was established in 1991 by Dr. Donald Berwick for the purpose of improving the quality and efficiency of health care. IHI's 15-member (more...)


The systems tools described in this chapter can be applied to all four levels of the health care system, with the caveat that they must be adapted to the specific conditions and circumstances of this unique patient-centered environment.

Patient Level

In the past, systems tools have not been widely applied to individual patients, but they should be. The ultimate purpose of using these tools should be to improve patient care and ensure that the system is responsive to patients' needs and wishes. Concurrent engineering tools like QFD can be used most effectively in the design/redesign of care delivery systems in the hospital and ambulatory clinics and, as information/communications technologies advance, in virtual settings, such as patients' homes. Human-factors expertise focused on care provider-patient relationships can help modify care instructions to ensure that they are meaningful to patients and encourage patients to participate in care processes. Indeed, human-factors engineering will be critical in moving toward remote care delivery and viable self-care systems, ensuring the usability and reliability of information/communications systems and other systems patients will have to use for professionally guided, self-instructed care in their homes, and maintaining communications and relationships of trust with care providers.

Modeling and simulation tools can be used to improve patient access to care providers (e.g., more efficient scheduling of appointments), reduce patient waiting times in care centers, and ensure that laboratory test results are available on demand. Patients will also benefit directly from improved scheduling of personnel, from the development of predictive models for treating particular diseases, and from improved regimes for administering medication.

The use of systems tools at the patient level will require detailed data on patient flows, delay times, and service times by caregivers, laboratories, support staff, and so on. Some of these data can be collected from computer records, but much of the rest will require individual measurement of, for example, the time spent accomplishing various tasks. Significant differences among facilities will require that data be collected for particular environments. One advantage of systems tools is that they are general enough to be applied in very diverse environments.

Frontline Care Team Level

In this section, we highlight the benefits of these same tools for caregiver teams. Benefits to caregivers and patients lead, in turn, to benefits for organizations and the overall health care environment by improving the efficiency of operations throughout the entire system. A health care system designed to meet the needs and wants of both patients and caregiver teams can provide a smoothly operating environment for both.

Human-factors methods might be used to assess the effectiveness of cross-checks among care groups. Analyses that can reveal where a system can fail, either by predicting errors or by identifying inefficiencies, generally depend more on interactions among individuals who work in the system and understand all of its aspects and components than on large amounts of data. Modeling and simulation tools, however, do require good data. These tools can focus on improving the clinical and administrative operation of a practice, including the scheduling of personnel, the allocation of physical resources, and the reduction or elimination of tasks that require substantial time but may be of limited value to the team or the patient. Simulation of an operating room can improve the organization of facilities, personnel, and supplies to ensure the highest level of safety and effectiveness. Simulation of nurses' stations can ensure that supplies are available when needed and that support is provided to reduce unnecessary tasks. These analyses can also identify ways of automating some tasks and reducing unnecessary repetitions of tasks (e.g., data entries).

Modeling and simulation of back-office operations can help reduce the time spent by physicians and nurses in data recording and improve communications with patients. The proper scheduling of team members can reduce overload and improve the quality of the workplace for the team as a whole. The data for some of these analyses must be collected locally through detailed observations. These data can then be supplemented with data from a comprehensive information technology system designed to provide detailed records of events, personnel, and resources.

Enterprise-management tools address interactions between the caregiver team and the enterprise. Supply-chain management is intended to reduce inventory and ensure that needed supplies are available when required; it can also reduce inventory costs without compromising the availability of the means and personnel to handle emergencies. The substantial data necessary for these analyses may span a number of operating units of the system. Experience in other industries suggests that such data can only be provided by an information system that connects all elements of the enterprise.

Game-theory tools, contracts, and system-dynamics models can enable caregiver teams to explore “what-if” questions to predict the consequences of very different courses of action, such as a major emergency or different ways of managing and controlling large fluctuations that might be introduced into a local system. For example, what actions should be taken if an emergency room is suddenly overburdened? How should nurses be allocated if 10 percent are unavailable on a given day? How should priorities be set for using an operating room?

Optimization tools for decision making can help answer the same questions. Longer-term efforts to optimize the care team's efforts can be addressed by predictive, rather than descriptive, models. Predictive models, such as neural networks, require an understanding of the causes and effects of unexpected changes in the operational environment. The data requirements for predictive analyses are complex and require historical knowledge of the operation of the care team, as well as information about the operation of the enterprise, at least as it affects the care team. Large-scale databases on patients, diseases, and treatments are also necessary. Collecting the necessary data for these analyses without a comprehensive information system would be practically impossible. Even if it could be done, the cost would be exorbitant.

Organizational Level

At the organizational level, analyses and other systems approaches become more complex. Analyses and other studies at this level must address interactions among many elements of a system. Questions may relate to cost, overall organizational efficiency, trade-offs among departments, and organizational responses to major emergencies. Human-factors studies might be used to ensure that new software-intensive systems promote continuity of care (e.g., avoid fragmentation and complexity).

Health care provider organizations have the large, complex task of providing all of the support functions for both clinical care (e.g., radiology, laboratories, operating rooms, etc.) and infrastructure (e.g., finance, administration, accounting, etc.). In the current health care system, clinical and infrastructural needs are addressed separately. Although each clinical support function and each infrastructural need requires a high level of reliability and standardization, a truly patient-centered system will require high-performance systems at all levels.

At the organizational level, some of the more traditional engineering approaches (e.g., supply-chain management) are readily applicable. Indeed, some of the larger health care institutions have already adopted them. Systems-engineering techniques are critical to analyzing data and using modeling and simulation strategies to improve outcomes (e.g., interactions among reimbursement policies, regulations, improved care, etc.). All of these tools (i.e., systems tools, analysis, modeling, and simulation) are applicable, not only at this level, but also at the environmental level.

Data needs for these analyses can place a heavy burden on information systems, and data must be available on activities outside the boundaries of the organization (e.g., IPAs, drug suppliers, rehabilitation centers, emergency response units, etc.). To meet these needs, information systems will require interconnectivity of various elements of the overall health care delivery system.

Environmental Level

Questions at this level concern overall trends and system responses, such as regulation and oversight, reimbursement strategies, cost trends for the treatment of various diseases, the supply of caregivers, the availability of evidence-based medical information, research on the development of predictive models, and system responsiveness to major outbreaks of disease. The data requirements for addressing these and other high-level system questions depend on the issue being investigated, but, in general, information must be available from a host of institutions and organizations. To ensure that information from these many sources is available, there must be a comprehensive information system that facilitates communication and encourages information exchange among entities in the health care delivery system.

The use of systems engineering to investigate and improve the overall health care system will reflect an important change in the way reforms and changes are approached and a movement away from the old, entrenched cultures that have characterized the system historically. The hope is that systems-engineering tools can bring these deeply entrenched structures to the surface where they can be investigated and evaluated in terms of the needs of a twenty-first century health care delivery system.

Up to now, most health care professionals have not understood the relevance of systems-engineering tools to the safety and quality of patient-centered care. One of the objectives of this report is to encourage a conversation on this subject between the engineering community and health care professionals at all levels. Working together, these two communities can take advantage of the benefits of systems-engineering tools to manage and optimize costs; ensure high-quality, timely production processes; improve the safety and quality of care; and, ultimately, provide a truly patient-centered health care delivery system.


Significant barriers to the widespread diffusion and implementation of systems-engineering tools in health care include impediments related to inadequate information technology and economic, policy, organizational, and educational barriers.

Inadequate Information and Information Technology

In general, at the tactical or local level, data gathering and processing and associated informational needs do not present significant technical or cost barriers to the adoption of systems-engineering tools (e.g., SPC, discrete-event simulation, queuing methods). By contrast, there are significant structural, technical, and cost-related barriers at the organization, multi-organization, and environmental levels to the strategic implementation of tools for modeling and simulation, enterprise management, financial engineering and risk analysis, and knowledge discovery in databases. The use of these tools requires integrated clinical, administrative, and financial information systems (e.g., clinical data repositories, etc.) that are expensive to install and maintain, and only a relatively small number of large integrated provider organizations or networks (e.g., Veterans Health Administration, Kaiser-Permanente, Mayo Clinic, Group Health Cooperative of Puget Sound, etc.) have such information systems in place.

Without access to integrated clinical information systems, it is extremely difficult for small, independent elements of highly distributed, loosely connected care provider networks to take advantage of tactical systems tools and virtually impossible for them to take advantage of enterprise-management and other systems-analysis tools. In principle, with the advance of computerization and automation in health care delivery, the cost of capturing relevant data for design, analysis, and control of processes and systems should come down. However, the health care system does not have interoperability standards for information/communication systems that would make it possible to connect the myriad pieces of the fragmented, distributed delivery system. This absence of interoperability presents a formidable barrier to the use of strategic, data-intensive systems tools at the organizational and environment levels. (Information/communications-related challenges to patient-centered, high-performance health care delivery are addressed at greater length in Chapter 4.)

Policy and Market Barriers

In the present system, reimbursement practices and rules, regulatory frameworks, and the lack of support for research continue to discourage the development, adaptation, and use of systems-engineering tools to improve the performance of the health care delivery system. The current “market” for health care services does not reward care providers who improve the quality of their processes and outcomes through investments in systems engineering, information/communications technologies, or other innovations (Hellinger, 1998; Leape, 2004; Leatherman et al., 2003; Miller and Luft, 1994, 2002; Robinson, 2001). The lack of comparative quality and cost data and the corresponding lack of quality/cost transparency in the market for health care services prevent patients from making informed choices on the basis of quality or value (quality/cost) (see Safran, in this volume, and Rosenthal et al., 2004). In the prevailing payment/reimbursement climate, care providers are not reimbursed on the basis of the quality of care they provide (IOM, 2001). Care providers have little incentive to invest in systems tools in support of quality improvement, unless they generate revenue directly or demonstrate immediate improvements in operating efficiency.

In recent years, several new reimbursement approaches have been tried that move away from the prevailing practice of reimbursing discrete units of service by a “reasonable cost” method toward fixed-price reimbursement for a definable bundle of services or a care episode. The objective of these changes is to give providers an incentive to improve the effectiveness and efficiency of their processes and procedures. For example, the introduction of diagnosis-related groups shifted reimbursement for hospitalization to a fixed price (adjusted for regional labor costs). Severity-adjusted capitation for patients covered under the new Medicare HMO coverage applies the same principles. Some insurers have experimented with linking reimbursement explicitly to quality measures (for example, selected health care organizations may receive a fixed price for organ transplants based on quality, that is, the success rate of the procedure). These are promising first steps toward reimbursement that encourages high-quality, efficient care and a systems approach. For the vast majority of care providers, however, no such incentives yet exist.

Organizational and Managerial Barriers

Other barriers to the widespread use of systems tools in health care are related to the culture, organization, and management structure of most health care provider organizations and the lack of confidence in systems tools and technologies by those who will be called upon to use them.

As discussed in Chapter 1, cultural, organizational, and policy-related factors (e.g., regulation, licensing, etc.) have contributed to rigid divisions of labor in many areas of health care, which have impeded the widespread use of systems tools and related innovations that are likely to have significant, disruptive effects on organizational structures and work processes at all four levels of the health care system (see Bohmer in this volume and Christensen et al., 2000). Organizational changes are difficult under any circumstances, and inflexibility in roles and responsibilities can increase the difficulties. There is ample documentation of tools and technologies that were poorly integrated with, or poorly accommodated by, existing processes of care delivery and that generated additional work for frontline providers with very little apparent reward (Boodman, 2005; Durieux, 2005; Garg et al., 2005; Wears and Berg, 2005).

Ultimately, the benefits of systems tools and technologies can only be realized if their introduction is carefully managed and the people who must use them are adequately prepared, technically and mentally, to change their work practices and organization. First, as Nelson and colleagues observed in their assessment of successful clinical microsystems and as IHI has demonstrated in its successful collaboratives, management must change its philosophy (IHI, 2005; Nelson et al., 2002). Once management is committed to change, the participation of professional caregivers can be enlisted from the outset in the analysis of processes and systems and in the design and implementation of system improvements. In short, there must be mutual trust between health care management and the health care professionals who work with management.

Educational Barriers

Prevailing approaches to the education and training of health care, engineering, and management professionals also present significant barriers to the implementation and diffusion of systems-engineering tools, information/communications technologies, and associated innovations in the health care sector. Currently, very few health care professionals or administrators are equipped to think analytically about health care delivery as a system. As a result, very few appreciate the relevance, let alone the value, of systems-engineering tools. And of these, only a fraction are equipped to work with systems engineers to tailor and apply them to the needs of the health care delivery system.

Students of engineering and management are much more likely to be trained in systems thinking and the uses and implications of systems-engineering tools and information/communications technologies for the management and optimization of production and delivery systems. However, students in most U.S. engineering and business schools are unlikely to find courses that address operational challenges in the quality and productivity of health care delivery. (Educational barriers to the application of systems engineering to health care delivery and the steps necessary to overcome them are addressed at length in Chapter 5.)

The culture of the health care enterprise will have to undergo a seismic change, a so-called paradigm shift, for systems thinking and the health of populations to become integral factors in health care decision making. Even at that point, it will take a tremendous effort and a great deal of flexibility for organizations to implement fundamental changes based on the optimization of interactions among all elements of the system. Ultimately, the whole must be greater than the sum of its parts. To date, organizations with corporate structures and management have been most successful in accomplishing this.


Finding 3-1. The health care delivery system functions not as a system, but as a collection of entities that consider their performance in isolation. Even within a given organization (e.g., a hospital), individual departments are often isolated and behave as functional and operational “silos.”

Finding 3-2. A systems view of health care cannot be achieved until the organizational barriers to change are overcome. Management and professionals must be committed to removing silos and focusing on optimizing contributions of professionals at all levels.

Finding 3-3. Systems-engineering tools have been used to improve the quality, efficiency, safety, and/or customer-centeredness of processes, products, and services in a wide range of manufacturing and services industries.
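One of the most widely used of these industrial quality tools is the Shewhart control chart from statistical process control. The sketch below, with entirely invented weekly medication-error counts, shows the standard individuals-chart calculation: limits are set at three estimated standard deviations from the mean, and points outside them signal special-cause variation worth investigating.

```python
# Illustrative sketch only: Shewhart individuals (X) control chart applied to a
# hypothetical weekly count of medication errors. All numbers are invented.

def control_limits(samples):
    """Return (center, lower, upper) 3-sigma limits, estimating sigma from the
    average moving range -- the standard individuals-chart method."""
    n = len(samples)
    center = sum(samples) / n
    # average absolute difference between consecutive observations
    mr_bar = sum(abs(samples[i] - samples[i - 1]) for i in range(1, n)) / (n - 1)
    sigma = mr_bar / 1.128  # d2 constant for moving ranges of size 2
    return center, center - 3 * sigma, center + 3 * sigma

def out_of_control(samples):
    """Indices of points outside the 3-sigma limits (special-cause variation)."""
    center, lo, hi = control_limits(samples)
    return [i for i, x in enumerate(samples) if x < lo or x > hi]

weekly_errors = [4, 5, 4, 5, 4, 5, 4, 15, 5, 4]  # hypothetical data; week 8 spikes
print(out_of_control(weekly_errors))             # flags the spike for investigation
```

The same logic underlies control charts for infection rates, turnaround times, or any other repeatedly measured care process.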

Finding 3-4. Health care has been very slow to embrace systems-engineering tools, even though they have been shown to benefit the small fraction of health care organizations and clinicians that have used them. Most health care providers do not understand how systems engineering can help solve health care delivery problems and improve operating performance. Many do not even know the questions systems tools and techniques might address or how to take advantage of the answers. Only when people trained in the use of systems-engineering tools are integral to the health care community will the benefits become fully available.

Finding 3-5. Systems-engineering tools for the design, analysis, and control of complex systems and processes could potentially transform the quality and productivity of health care. Statistical process control, queuing theory, human-factors engineering, discrete-event simulation, QFD, FMEA, modeling and simulation, supply-chain management, and knowledge discovery in databases either have been or can be readily adapted to applications in health care delivery. Other tools, such as enterprise management, financial engineering, and risk analysis, are the subjects of ongoing research and can be expected to be useful for health care in the future.
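As an illustration of the queuing theory named in this finding, the classical M/M/c (Erlang C) model can estimate how patient waiting time responds to staffing levels, the kind of capacity question cited throughout this chapter. The sketch below uses invented arrival and service rates for a hypothetical clinic.

```python
# Illustrative sketch only: M/M/c queuing model (Erlang C) for a hypothetical
# clinic. Arrival and service rates are invented for the example.
import math

def erlang_c_wait(arrival_rate, service_rate, servers):
    """Mean time a patient waits in queue (hours) in an M/M/c system.
    arrival_rate: patients/hour; service_rate: patients/hour per provider."""
    a = arrival_rate / service_rate          # offered load in Erlangs
    rho = a / servers                        # utilization; must be < 1
    if rho >= 1:
        raise ValueError("system is unstable: utilization >= 1")
    # Erlang C: probability that an arriving patient must wait
    summation = sum(a**k / math.factorial(k) for k in range(servers))
    top = a**servers / (math.factorial(servers) * (1 - rho))
    p_wait = top / (summation + top)
    # mean wait = P(wait) / (c*mu - lambda)
    return p_wait / (servers * service_rate - arrival_rate)

# Hypothetical clinic: 10 arrivals/hour, each provider completes 4 visits/hour.
for c in (3, 4, 5):
    print(c, "providers:", round(erlang_c_wait(10, 4, c) * 60, 1), "min wait")
```

Running the model over a range of staffing levels makes the nonlinear trade-off between utilization and waiting time explicit, which is exactly the insight the bed-allocation and emergency-care queuing studies cited in the references exploit.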

Finding 3-6. Neither the engineering community nor the health care research community has addressed the delivery aspects of health care adequately. Although clinical applications of new medicines, procedures, and devices have been widespread, improving the processes by which care is delivered has been mostly disregarded. The adaptation and improvement of existing systems tools and the creation of new tools to address health care delivery have not been primary objectives of federal agencies or public or private research institutions.

Finding 3-7. Information/communications systems will be critical to taking advantage of the potential of existing and emerging systems-design, -analysis, and -control tools to transform health care delivery. These tools can provide timely collection, analysis, and sharing of process and outcome data that would benefit all stakeholders in the enterprise. Although such systems are available in other industries, meeting the unique requirements of the health care community will require active research.

Finding 3-8. The current organization, management, and regulation of health care delivery provide few incentives for the use or development of systems-engineering tools that could lead to improvements.

Finding 3-9. The widespread use of systems-engineering tools will require determined efforts on the part of health care providers, the engineering community, federal and state governments, private insurers, large employers, and other stakeholders.


Recommendation 3-1. Private insurers, large employers, and public payers, including the federal Centers for Medicare and Medicaid Services and state Medicaid programs, should provide more incentives for health care providers to use systems tools to improve the quality of care and the efficiency of care delivery. Reimbursement systems, both private and public, should expand the scope of reimbursement for care episodes or use other bundling techniques (e.g., diagnosis-related groups, severity-adjusted capitation for Medicare Advantage, fixed payments for transplantation, etc.) to encourage the use of systems-engineering tools. Regulatory barriers should also be removed. As a first step, regulatory waivers could be granted for demonstration projects to validate and publicize the utility of systems tools.

Recommendation 3-2. Outreach and dissemination efforts by public- and private-sector organizations that have used systems-engineering tools in health care delivery (e.g., Veterans Health Administration, Joint Commission on Accreditation of Healthcare Organizations, Agency for Healthcare Research and Quality, Institute for Healthcare Improvement, Leapfrog Group, U.S. Department of Commerce Baldrige National Quality Program, and others) should be expanded, integrated into existing regulatory and accreditation frameworks, and reviewed to determine whether, and if so how, better coordination might make their collective impact stronger.

Recommendation 3-3. The use and diffusion of systems-engineering tools in health care delivery should be promoted by a National Library of Medicine (National Institutes of Health) website that provides patients and clinicians with information about, and access to, systems-engineering tools for health care (a systems-engineering counterpart to the Library of Medicine web-based “clearinghouse” on the status and treatment of diseases and the Agency for Healthcare Research and Quality National Guideline Clearinghouse for evidence-based clinical practice). In addition, federal agencies and private funders should support the development of new curricula, textbooks, instructional software, and other tools to train individual patients and care providers in the use of systems-engineering tools.

Recommendation 3-4. The use of any single systems tool or approach should not be put “on hold” until other tools become available. Some system tools already have extensive tactical or local applications in health care settings. Information-technology-intensive systems tools, however, are just beginning to be used at higher levels of the health care delivery system. Changes must be approached from many directions, with systems engineering tools that are available now and with new tools developed through research. Successes in other industries clearly show that small steps can yield significant results, even while longer term efforts are being pursued.

Recommendation 3-5. Federal research and mission agencies should significantly increase their support for research to advance the application and utility of systems engineering in health care delivery, including research on new systems tools and the adaptation, implementation, and improvement of existing tools for all levels of health care delivery. Promising areas for research include human-factors engineering, modeling and simulation, enterprise management, knowledge discovery in databases, and financial engineering and risk analysis. Research on the organizational, economic, and policy-related barriers to implementation of these and other systems tools should be an integral part of the larger research agenda.


Information/communications systems will be critical to the effectiveness of existing and emerging systems-design, -analysis, and -control tools in the transformation of health care delivery. Information/communications systems can provide timely collection, analysis, and sharing of process and outcome data that would benefit all stakeholders in the enterprise. Although these systems are available in other industries, meeting the unique requirements of the health care community will require significant investments and active research. Near-term and long-term challenges in this area are addressed in Chapter 4.


  1. Abara J. Applying integer linear programming to the fleet assignment problem. Interfaces. 1989;19(4):20–28.
  2. Ash JS, Berg M, Coiera E. Some unintended consequences of information technology in health care: the nature of patient care information system-related errors. Journal of the American Medical Informatics Association. 2004;11(2):104–112. [PMC free article: PMC353015] [PubMed: 14633936]
  3. Bailey NTJ. London, U.K: Charles Griffin and Co. Ltd.; 1975. The Mathematical Theory of Infectious Disease and Its Applications.
  4. Batalden PB, Nelson EC, Mohr JJ, Godfrey MM, Huber TP, Kosnik L, Ashling K. Microsystems in health care: Part 5. How leaders are leading. Joint Commission Journal on Quality and Safety. 2003a;29(6):297–308. [PubMed: 14564748]
  5. Batalden PB, Nelson EC, Edwards WH, Godfrey MM, Mohr JJ. Microsystems in health care: Part 9. Developing small clinical units to attain peak performance. Joint Commission Journal on Quality and Safety. 2003b;29(11):575–585. [PubMed: 14619350]
  6. Bertocci GE, Pierce MC, Deemer E, Aguel F. Computer simulation of stair falls to investigate scenarios in child abuse. Archives of Pediatrics and Adolescent Medicine. 2001;155(9):1008–1014. [PubMed: 11529802]
  7. Besinger SJ, Stapczynski JS. Critical care of medical and surgical patients in the ER: length of stay and initiation of intensive care procedures. American Journal of Emergency Medicine. 1997;15(7):654–657. [PubMed: 9375548]
  8. Boodman SG. Washington Post: March 22, 2005. Not Quite Fail-Safe; Computerizing Isn't a Panacea for Dangerous Drug Errors, Study Shows; p. F.01.
  9. Bogner MS, editor. Human Error in Medicine. Mahwah, N.J: Erlbaum; 1994.
  10. Brandeau ML. Allocating Resources to Control Infectious Diseases. In: Brandeau ML, Sainfort F, Pierskalla WP, editors. Operations Research and Health Care: A Handbook of Methods and Applications. Boston, Mass: Kluwer Academic Publishers; 2004. pp. 443–464.
  11. Brewer TF, Heymann SJ, Krumplitsch SM, Wilson ME, Colditz GA, Fineberg HV. Strategies to decrease tuberculosis in U.S. homeless populations: a computer simulation model. Journal of the American Medical Association. 2001;286(7):834–842. [PubMed: 11497538]
  12. Callahan NM, Redmon WK. Effects of problem-based scheduling on patient waiting and staff utilization of time in a pediatric clinic. Journal of Applied Behavior Analysis. 1987;20(2):193–199. [PMC free article: PMC1285971] [PubMed: 3610899]
  13. Champion V, Foster JL, Menon U. Tailoring interventions for health behavior change in breast cancer screening. Cancer Practice. 1997;5(5):283–288. [PubMed: 9341350]
  14. Chandler AP. New York: Belknap Press; 1990. Scale and Scope: The Dynamics of Industrial Capitalism. [PubMed: 17746505]
  15. Chaplin E, Mailey M, Crosby R, Gorman D, Holland X, Hippe C, Hoff T, Nawrocki D, Pichette S, Thota N. Using quality function deployment to capture the voice of the customer and translate it into the voice of the provider. Joint Commission Journal on Quality Improvement. 1999;25(6):300–315. [PubMed: 10367267]
  16. Chassin M. Is healthcare ready for Six Sigma quality. The Milbank Quarterly. 1998;76(4):565–591. [PMC free article: PMC2751107] [PubMed: 9879303]
  17. Christensen CM, Bohmer R, Kenagy J. Will disruptive innovations cure health care. Harvard Business Review. 2000 September-October:103–111. [PubMed: 11143147]
  18. Clark HH, Brennan S. Grounding in Communication. In: Resnick LB, Levine JM, Teasley SD, editors. Perspectives on Socially Shared Cognition. Washington, D.C: American Psychological Association; 1991. pp. 127–149.
  19. Cohen MA, Hershey JC, Weiss EN. Analysis of capacity decisions for progressive patient care hospital facilities. Health Services Research. 1980;15(2):145–160. [PMC free article: PMC1072154] [PubMed: 7419419]
  20. Committee on Government Reform. Washington, D.C: Special Investigations Division, Committee on Government Reform ,U.S. House of Representatives; October 16, 2001. National Preparedness: Ambulance Diversions Impede Access to Emergency Rooms.
  21. Cook RI, McDonald JS, Smalhout R. Cognitive Systems Engineering Laboratory Technical Report 89-TR-07. Columbus, Ohio: Department of Industrial and Systems Engineering, Ohio State University; 1989. Human Error in the Operating Room: Identifying Cognitive Lock Up.
  22. Davis C. Chronic Conditions Expert Host: Commentary on the Chronic Care Model. 2005. Available online at: http://www.ihi.org/IHI/Topics/ChronicConditions/ExpertHostConnieDavis.htm.
  23. Dittus RS, Klein RL, DeBrota DJ, Dame M, Fitzgerald JF. Medical resident work schedules: design and evaluation by simulation modeling. Management Science. 1996;42:891–906.
  24. Duraiswamy N, Welton R, Reisman A. Using computer simulation to predict ICU staffing needs. Journal of Nursing Administration. 1981;11(2):39–44. [PubMed: 6914371]
  25. Durieux P. Electronic medical alerts—so simple, so complex. New England Journal of Medicine. 2005;352(10):1034–1036. [PubMed: 15758015]
  26. Eddy DM, Nugent FW, Eddy JF, Coller J, Gilbertsen V, Gottlieb LS, Rice R, Sherlock P, Winawer S. Screening for colorectal cancer in a high-risk population: results of a mathematical model. Gastroenterology. 1987;92(3):682–692. [PubMed: 3102307]
  27. Eich HP, Ohmann C, Lang K. Decision support in acute abdominal pain using an expert system for different knowledge bases. Proceedings of 10th IEEE Symposium on Computer-Based Medical Systems (CBMS'97); 1997. Available online at: http://csdl2.computer.org/persagen/DLAbsToc.jsp?resourcePath=/dl/proceedings/cbms/&toc=http://csdl2.computer.org/comp/proceedings/cbms/1997/7928/00/7928toc.xml&DOI=10.1109/CBMS.1997.596400.
  28. Feistritzer NR, Keck BR. Perioperative supply chain management. Seminars for Nurse Managers. 2000;8(3):151–157. [PubMed: 12029750]
  29. Feltovich P, Ford K, Hoffman R, editors. Expertise in Context. Cambridge, Mass: MIT Press; 1997.
  30. Fone D, Hollinghurst S, Temple M, Round A, Lester N, Weightman A, Roberts R, Coyle E, Bevan G, Palmer S. Systematic review of the use and value of computer simulation modelling in population health and health care delivery. Journal of Public Health Medicine. 2003;25(4):325–335. [PubMed: 14747592]
  31. Forrester JW. New York: John Wiley and Sons, Inc; 1961. Industrial Dynamics.
  32. Freund D, Evans D, Henry D, Dittus RS. Implications of the Australian guidelines for the United States. Health Affairs. 1992;11(4):202–206. [PubMed: 1483641]
  33. Fries BE, Marathe VP. Determination of optimal variable-sized multiple-block appointment systems. Operations Research. 1981;29(2):324–345. [PubMed: 10253249]
  34. Friman O, Borga M, Lundberg M, Tylén U, Knutsson H. Recognizing Emphysema: A Neural Network Approach. ICPR'02 Proceedings of 16th International Conference on Pattern Recognition; August, 2002; 2002. Available online at: http://www.imt.liu.se/mi/Publications/Publications/PaperInfo/fbltk02.html.
  35. Gaba DM, Maxwell MS, DeAnda A. Anesthetic mishaps: breaking the chain of accident evolution. Anesthesiology. 1987;66(5):670–676. [PubMed: 3578880]
  36. GAO (General Accountability Office). Washington, D.C: GAO; March 14, 2003. Hospital Emergency Departments: Crowded Conditions Vary among Hospitals and Communities. GAO-03-460.
  37. Garg AX, Adhikari NKJ, McDonald H, Rosas-Arellano P, Devereaux PJ, Beyene J, Sam J, Haynes RB. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. Journal of the American Medical Association. 2005;293:1223–1238. [PubMed: 15755945]
  38. Gertz D, Baptista JPA. New York: Free Press; 1995. Grow to Be Great: Breaking the Downsizing Cycle.
  39. Godfrey MM, Nelson EC, Wasson JH, Mohr JJ, Batalden PB. Microsystems in health care: Part 3. Planning patient-centered services. Joint Commission Journal on Quality and Safety. 2003;29(4):159–170. [PubMed: 12698806]
  40. Goldberg C. Emergency crews worry as hospitals say, “no vacancy.” New York Times. 2000 December 17:A39.
  41. Gorunescu F, McClean SI, Millard PH. Using a queuing model to help plan bed allocation in a department of geriatric medicine. Health Care Management Science. 2002;5(4):307–312. [PubMed: 12437280]
  42. Green LV. Hospital Capacity Planning and Management. In: Brandeau ML, Sainfort F, Pierskalla WP, editors. Operations Research and Health Care: A Handbook of Methods and Applications. Boston, Mass: Kluwer Academic. Publishers; 2004. pp. 15–42.
  43. Harry MJ. Schaumburg, Ill: Motorola University Press; 1988. The Nature of Six Sigma Quality.
  44. Hashimoto F, Bell S. Improving outpatient clinic staffing and scheduling with computer simulation. Journal of General Internal Medicine. 1996;11(3):182–184. [PubMed: 8667098]
  45. Hauser JR, Clausing D. The house of quality. Harvard Business Review. 1988;3:63–73.
  46. Hellinger FJ. The effect of managed care on quality. Archives of Internal Medicine. 1998;158(8):833–841. [PubMed: 9570168]
  47. Hendee W, editor. Proceedings of Enhancing Patient Safety and Reducing Errors in Health Care. Annenberg Center for Health Sciences; Rancho Mirage, California. November 8–10, 1998; Chicago, Ill: National Patient Safety Foundation; 1999.
  48. Hershey J, Pierskalla W, Wandel S. Nurse Staffing Management. In: Boldy D, editor. Operational Research Applied to Health Services. London, U.K: Croom-Helm Ltd; 1981. pp. 189–220.
  49. Hollnagel E, Woods DD, Leveson N, editors. Resilience Engineering: Concepts and Precepts. Aldershot, UK: Ashgate Publishers; 2005.
  50. Howard SK, Gaba DM, Fish KJ, Yang GS, Sarnquist FH. Anesthesia crisis resource management training: teaching anesthesiologists to handle critical incidents. Aviation, Space, and Environmental Medicine. 1992;63(9):763–770. [PubMed: 1524531]
  51. Howard SK, Smith BE, Gaba DM, Rosekind MR. Performance of well-rested vs. highly-fatigued residents: a simulator study. Anesthesiology. 1997:A-981.
  52. Huang XM. A planning model for requirement of emergency beds. IMA Journal of Mathematics Applied in Medicine and Biology. 1995;12(3-4):345–352. [PubMed: 8919569]
  53. Huber TP, Godfrey MM, Nelson EC, Mohr JJ, Campbell C, Batalden PB. Microsystems in health care: Part 8. Developing people and improving work life: what front-line staff told us. Joint Commission Journal on Quality and Safety. 2003;29(10):512–522. [PubMed: 14567260]
  54. IHI (Institute for Healthcare Improvement). IHI White Paper. Boston, Mass: IHI; 2003. Improving flow through perioperative services: a practical application of theory.
  55. IHI. Boston, Mass: IHI; 2005. Ideas in Action: How Health Care Organizations Are Connecting the Dots between Concept and Positive Change: 2005 Progress Report. Available online at: http://www.ihi.org/NR/rdonlyres/4CE48D26-2303-4FCD-9BF5-174EA039E725/0/ProgRep020505.pdf.
  56. IOM (Institute of Medicine). Washington, D.C: National Academy Press; 2001. Crossing the Quality Chasm: A New Health System for the 21st Century. [PubMed: 25057539]
  57. JCAHO (Joint Commission on Accreditation of Healthcare Organizations). Oakbrook Terrace, Ill: JCAHO; 2002. Failure Mode and Effects Analysis in Health Care: Proactive Risk Reduction. [PubMed: 12512213]
  58. Johnson C. The causes of human error in medicine. Cognition, Technology and Work. 2002;4(2):65–70.
  59. Jorion P. New York: McGraw-Hill; 1997. Value at risk: the new benchmark for controlling market risk.
  60. Kao EPC. Modeling the movement of coronary patients within a hospital by semi-Markov process. Operations Research. 1974;22(4):683–699.
  61. Klein HA, Isaacson JJ. Making medication instructions usable. Ergonomics in Design. 2003;11:7–11.
  62. Klein HA, Meininger AR. Self-management of medication and diabetes: cognitive control. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans. 2004;34(6):718–725.
  63. Klein RL, Dittus RS, Roberts SD, Wilson JR. Simulation modeling in health care: an annotated bibliography. Medical Decision Making. 1993;13(4):347–354. [PubMed: 8246707]
  64. Kosnik LK, Espinosa JA. Microsystems in health care: Part 7. The microsystem as a platform for merging strategic planning and operations. Joint Commission Journal on Quality and Safety. 2003;29(9):452–459. [PubMed: 14513668]
  65. Kostner GT, Shachtman RJ. Institute of Statistics Mimeo Series 1364. Chapel Hill, N.C: University of North Carolina; 1981. A Stochastic Model to Measure Patient Effects Stemming from Hospital Acquired Infections; pp. 1209–1217. [PubMed: 10309962]
  66. Kutzler DL, Sevcovic L. Planning a nurse-midwifery caseload by a computer simulated model. Journal of Nurse-Midwifery. 1980;25(5):34–37. [PubMed: 6902766]
  67. Kuzdrall PJ, Kwak NK, Schnitz HH. Simulating space requirements and scheduling policies in a hospital surgical suite. Simulation. 1981;36(5):163–171.
  68. Lattimer V, Brailsford S, Turnbull J, Tarnaras P, Smith H, George S, Gerard K, Maslin-Prothero S. Reviewing emergency care systems I: Insights from system dynamics modeling. Emergency Medicine Journal. 2004;21:685–691. [PMC free article: PMC1726513] [PubMed: 15496694]
  69. Leape LL. Human factors meets health care: the ultimate challenge. Ergonomics in Design. 2004;12(3):6–12.
  70. Leatherman S, Berwick D, Iles D, Lewin LS, Davidoff F, Nolan T, Bisognano M. The business case for quality: case studies and an analysis. Health Affairs. 2003;22(2):17–30. [PubMed: 12674405]
  71. Letterie GS. How virtual reality may enhance training in obstetrics and gynecology. American Journal of Obstetrics and Gynecology. 2002;187(3 Suppl):S37–S40. [PubMed: 12235439]
  72. Levy DT, Cummings KM, Hyland A. A simulation of the effects of youth initiation policies on overall cigarette use. American Journal of Public Health. 2000;90(8):1311–1314. [PMC free article: PMC1446318] [PubMed: 10937017]
  73. Lucas CE, Buechter KJ, Coscia RL, Hurst JM, Meredith JW, Middleton JD, Rinker CR, Tuggle D, Vlahos AL, Wilberger J. Mathematical modeling to define optimum operating room staffing needs for trauma center. Journal of the American College of Surgeons. 2001;192(5):559–565. [PubMed: 11333091]
  74. Magazine MJ. Scheduling a patient transportation service in a hospital. INFOR. 1977;25:242–254.
  75. Mahadevia PJ, Fleisher LA, Frick KD, Eng J, Goodman SN, Powe NR. Lung cancer screening with helical computed tomography in older adult smokers: a decision and cost-effectiveness analysis. Journal of the American Medical Association. 2003;289(3):313–322. [PubMed: 12525232]
  76. McCarthy V. Strike it rich. Datamation. 1997;43(2):44–50.
  77. McDonough JE. Plymouth Meeting, Pa: ECRI; 2002. Proactive Hazard Analysis and Health Care Policy.
  78. McDonough JE, Solomon R, Petosa L. Patient Safety: Achieving a New Standard of Care. Washington, D.C: National Academies Press; 2004. Quality Improvement and Proactive Hazard Analysis Models: Deciphering a New Tower of Babel. Attachment F; pp. 471–508.
  79. McKesson. Empowering Healthcare. Healthcare Financial Management. 2002. Available online at: http://www.findarticles.com/p/articles/mi_m3257/is_1_56/ai_82067693.
  80. Miller HE, Pierskalla WP, Rath GJ. Nurse scheduling using mathematical programming. Operations Research. 1976;24:856–870.
  81. Miller RH, Luft HS. Managed care plan performance since 1980: a literature analysis. Journal of the American Medical Association. 1994;271(19):1512–1519. [PubMed: 8176832]
  82. Miller RH, Luft HS. HMO plan performance update: an analysis of the literature, 1997–2001. Health Affairs. 2002;21(4):63–86. [PubMed: 12117154]
  83. Mitchell TM. Does machine learning really work. AI Magazine. 1997;18(3):11–20.
  84. Mohr JJ, Barach P, Cravero JP, Blike GT, Godfrey MM, Batalden PB, Nelson EC. Microsystems in health care: Part 6. Designing patient safety into the microsystem. Joint Commission Journal on Quality and Safety. 2003;29(8):401–408. [PubMed: 12953604]
  85. Monden Y. Norcross, Ga: Industrial Engineering and Management Press, Institute of Industrial Engineers; 1983. Toyota production system: practical approach to production management.
  86. Mullinax C, Lawley M. Assigning patients to nurses in neonatal intensive care. Journal of the Operational Research Society. 2002;53(1):25–35.
  87. Murray M, Berwick DM. Advanced access: reducing waiting and delays in primary care. Journal of the American Medical Association. 2003;289(8):1035–1040. [PubMed: 12597760]
  88. Neilson AR, Whynes DK. Cost-effectiveness of screening for colorectal cancer: a simulation model. IMA Journal of Mathematics Applied in Medicine and Biology. 1995;12(3-4):355–367. [PubMed: 8919570]
  89. Nelson EC, Batalden PB, Huber TP, Mohr JJ, Godfrey MM, Headrick LA, Wason JH. Microsystems in health care: Part 1. Learning from high-performing front-line clinical units. Joint Commission Journal on Quality Improvement. 2002;28(9):472–493. [PubMed: 12216343]
  90. Nelson EC, Batalden PB, Homa K, Godfrey MM, Campbell C, Headrick LA, Huber TP, Mohr JJ, Wasson JH. Microsystems in health care: Part 2. Creating a rich information environment. Joint Commission Journal on Quality and Safety. 2003;29(1):5–15. [PubMed: 12528569]
  91. Nemeth C, Cook RI, Woods DD. Messy details: insights from the study of technical work in healthcare. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans. 2004;34(6):689–692.
  92. Ness RM, Holmes A, Klein R, Dittus RS. Cost-utility of one-time colonoscopic screening for colorectal cancer at various ages. American Journal of Gastroenterology. 2000;95(7):1800–1811. [PubMed: 10925988]
  93. Ness RM, Klein RW, Dittus RS. The cost-effectiveness of fecal DNA testing for colorectal cancer. Gastrointestinal Endoscopy. 2003;57(5):AB94–AB94.
  94. NIST (National Institute of Standards and Technology). Baldrige National Quality Program. 2005. Available online at: http://www.quality.nist.gov.
  95. Norman DA. New York: Basic Books; 1988. The Psychology of Everyday Things.
  96. Norman DA. Reading, Mass: Addison-Wesley; 1993. Things That Make Us Smart.
  97. Nyssen AS, De Keyser V. Improving training in problem solving skills: analysis of anesthetists' performance in simulated problem situations. Le Travail Humain. 1998;61(4):387–402.
  98. O'Keefe RM. Investigating outpatient departments: implementable policies and qualitative approaches. Journal of the Operational Research Society. 1985;36(8):705–712. [PubMed: 10272814]
  99. O'Neill L, Dexter F. Evaluating the Efficiency of Hospitals' Perioperative Services Using DEA. In: Brandeau ML, Sainfort F, Pierskalla WP, editors. Operations Research and Health Care: A Handbook of Methods and Applications. Boston, Mass: Kluwer Academic Publishers; 2004. pp. 147–168.
  100. Ozcan YA, Merwin E, Lee K, Morrissey JP. State of the Art Applications in Benchmarking Using DEA: The Case of Mental Health Organizations. In: Brandeau ML, Sainfort F, Pierskalla WP, editors. Operations Research and Health Care: A Handbook of Methods and Applications. Boston, Mass: Kluwer Academic Publishers; 2004. pp. 169–190.
  101. Patterson ES, Cook RI, Render ML. Improving patient safety by identifying side effects from introducing bar coding in medication administration. Journal of the American Medical Informatics Association. 2002;9(5):540–553. [PMC free article: PMC346641] [PubMed: 12223506]
  102. Patterson ES, Cook RI, Woods DD, Chow R, Gomes JO. Hand-off strategies in settings with high consequences for failure: lessons for health care operations. International Journal for Quality in Health Care. 2004a;16(2):125–132. [PubMed: 15051706]
  103. Patterson ES, Cook RI, Woods DD, Render ML. Examining the complexity behind a medication error: generic patterns in communication. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans. 2004b;34(6):749–756.
  104. Pesonen E. Is neural network better than statistical methods in diagnosis of acute appendicitis. Studies in Health Technology Information. 1997;43:377–381. [PubMed: 10179576]
  105. Pexton. One Piece of the Patient Safety Puzzle: Advantages of the Six Sigma Approach. Patient Safety and Quality Healthcare. January/ February, 2005. Available online at: http://www.gehealthcare.com/usen/service/docs/patientsafetypuzzle.pdf.
  106. Phillips AN, Youle M, Johnson M, Loveday C. Use of a stochastic model to develop understanding of the impact of different patterns of antiretroviral drug use on resistance development. AIDS. 2001;15(17):2211–2220. [PubMed: 11698693]
  107. Pierskalla WP. Blood Banking Supply Chain Management. In: Brandeau ML, Sainfort F, Pierskalla WP, editors. Operations Research and Health Care: A Handbook of Methods and Applications. Boston, Mass: Kluwer Academic Publishers; 2004. pp. 103–146.
  108. Pritsker AB. Life and death decisions: organ transplantation allocation policy analysis. OR/MS Today. 1998 August:22–28.
  109. Ray WA, Murray KT, Meredith S, Narasimhulu SS, Hall K, Stein CM. Oral erythromycin and the risk of sudden death from cardiac causes. New England Journal of Medicine. 2004;351(11):1089–1096. [PubMed: 15356306]
  110. Reichheld F. Boston: Harvard Business School Press; 1996. The Loyalty Effect.
  111. Reinus WR, Enyan A, Flanagan P, Pim B, Sallee DS, Segrist J. A proposed scheduling model to improve use of computed tomography facilities. Journal of Medical Systems. 2000;24(2):61–76. [PubMed: 10895421]
  112. Reisman A, Cull W, Emmons H, Dean B, Lin C, Rasmussen J, Darukhanavala R, George T. On the design of alternative obstetric anesthesia team configurations. Management Science. 1977;23:545–556.
  113. Robinson JC. Theory and practice in the design of physician payment incentives. Milbank Quarterly. 2001;79(2):149–177. [PMC free article: PMC2751195] [PubMed: 11439463]
  114. Rosenthal MB, Fernandopulle R, Song HR, Landon B. Paying for quality: providers' incentives for quality improvement. Health Affairs. 2004;23(2):127–141. [PubMed: 15046137]
  115. Schaefer AJ, Bailey MD, Shechter SM, Roberts MS. Medical Treatment Decisions Using Markov Decision Processes. In: Brandeau ML, Sainfort F, Pierskalla WP, editors. Operations Research and Health Care: A Handbook of Methods and Applications. Boston, Mass: Kluwer Academic Publishers; 2004. pp. 595–614.
  116. Shannon RP, Frndak D, Lloyd J, Grunden N, Herbert C, Patel B, Cummings D, Shannon A, O'Neill P, Spear S. Cambridge, Mass: Harvard Business School Publishing; Eliminating Central Line Infections in Two Intensive Care Units: Results of Real-time Investigation of Individual Problems. In progress. Harvard Business School Working Paper.
  117. Shina SG. Concurrent Engineering and Design for Manufacture of Electronic Products. Boston, Mass: Kluwer Academic Publishers; 1991.
  118. Siddharthan K, Jones WJ, Johnson JA. A priority queuing model to reduce waiting times in emergency care. International Journal of Health Care Quality Assurance. 1996;9(5):10–16. [PubMed: 10162117]
  119. Spear SJ, Bowen HK. Decoding the DNA of the Toyota production system. Harvard Business Review. 1999 Sept.-Oct.:96–106.
  120. Sterman JD. Business Dynamics: Systems Thinking and Modeling for a Complex World. New York: Irwin McGraw-Hill; 2000.
  121. Sullivan LP. Quality function deployment. Quality Progress. 1986;19(6):39–50.
  122. Sullivan LP. Policy management through quality function deployment. Quality Progress. 1988;21(6):18–20.
  123. Swiercz M, Mariak Z, Lewko J, Chojnacki K, Kozlowski A, Piekarski P. Neural network technique for detecting emergency states in neurosurgical patients. Medical and Biological Engineering and Computing. 1998:717–722. [PubMed: 10367462]
  124. Tengs TO, Osgood ND, Lin TH. Public health impact of changes in smoking behavior: results from the Tobacco Policy Model. Medical Care. 2001;39(10):1131–1141. [PubMed: 11567175]
  125. Thomas JA, Martin V, Frank S. Improving pharmacy supply-chain management in the operating room. Healthcare Financial Management. 2000;54(12):58–61. [PubMed: 11141689]
  126. Trivedi VM. A stochastic model for predicting discharges: applications for achieving occupancy goals in hospitals. Socio-Economic Planning Sciences. 1980;14(5):209–215. [PubMed: 10248759]
  127. Tsay AA, Nahmias S. Modeling Supply Chain Contracts: A Review. In: Tayur S, Magazine M, Ganeshan R, editors. Quantitative Models for Supply Chain Management. Boston, Mass: Kluwer Academic Publishers; 1998. pp. 299–336.
  128. Tzukert A, Cohen MA. Optimal student-patient assignment in dental education. Journal of Medical Systems. 1985;9(5-6):279–290. [PubMed: 4093733]
  129. Vassilacopoulos G. Allocating doctors to shifts in an accident and emergency department. Journal of the Operational Research Society. 1985;36(6):517–523. [PubMed: 10273861]
  130. Vissers J, Wijngaard J. The outpatient appointment system: design of a simulation study. European Journal of Operational Research. 1979;3:459–463.
  131. Wagner EH. Chronic disease management: what will it take to improve care for chronic illness? Effective Clinical Practice. 1998;1:2–4. [PubMed: 10345255]
  132. Walensky RP, Goldie SJ, Sax PE, Weinstein MC, Paltiel AD, Kimmel AD, Seage GR, Losina E, Zhang H, Islam R, Freedberg KA. Treatment of primary HIV infection: projecting outcomes of immediate, interrupted, or delayed therapy. Journal of Acquired Immune Deficiency Syndromes. 2002;31(1):27–37. [PubMed: 12352147]
  133. Warner D. Scheduling nursing personnel according to nursing preference: a mathematical programming approach. Operations Research. 1976;24:842–856.
  134. Wasson JH, Godfrey MM, Nelson EC, Mohr JJ, Batalden PB. Microsystems in health care: Part 4. Planning patient-centered care. Joint Commission Journal on Quality and Safety. 2003;29(5):227–237. [PubMed: 12751303]
  135. Wears RL, Berg M. Computer technology and clinical work: still waiting for Godot. Journal of the American Medical Association. 2005;293:1261–1263. [PubMed: 15755949]
  136. Weeks WB, Bagian JP. Developing a Culture of Safety in the Veterans Health Administration. Effective Clinical Practice. 2000 November/December. Available online at: http://www.acponline.org/journals/ecp/novdec00/weeks.htm#authors. [PubMed: 11151523]
  137. Wei JS, Greer BT, Westermann F, Steinberg SM, Son CG, Chen QR, Whiteford CC, Bilke S, Krasnoselsky AL, Cenacchi N, Catchpoole D, Berthold F, Schwab M, Khan J. Prediction of clinical outcome using gene expression profiling and artificial neural networks for patients with neuroblastoma. Cancer Research. 2004;64(19):6883–6891. [PMC free article: PMC1298184] [PubMed: 15466177]
  138. Weiss EN, Cohen MA, Hershey JC. An interactive estimation and validation procedure for specification of semi-Markov models with application to hospital patient flow. Operations Research. 1982;30(6):1082–1104. [PubMed: 10259645]
  139. Winner RI, Pennell JP, Bertrand HE, Slusarczuk MMG. The Role of Concurrent Engineering in Weapons System Acquisition. IDA Report R-338. Alexandria, Va: Institute for Defense Analysis; 1988.
  140. Woods DD. Behind Human Error: Human Factors Research to Improve Patient Safety. Washington, D.C: American Psychological Association; 2000.
  141. Xiao Y, Gagliano D, LaMonte MP, Hu P, Gaasch W, Gunawadane R, Mackenzie CF. Design and evaluation of real-time mobile telemedicine system for ambulance transport. Journal of High Speed Networks. 2000;9:47–56.
  142. Xiao Y, Mackenzie CF, editors. Introduction to the special issue on video-based research in high risk settings: methodology and experience. Cognition, Technology and Work. 2004;6(3):127–130.



For the purpose of illustration, the description of quality function deployment has been simplified to two steps. For complicated sub-elements of the system or for a much larger system, the process would be expanded.

Copyright © 2005, National Academy of Sciences.
Bookshelf ID: NBK22835

