Making Health Care Safer II: An Updated Critical Analysis of the Evidence for Patient Safety Practices. Rockville (MD): Agency for Healthcare Research and Quality (US); 2013 Mar. (Evidence Reports/Technology Assessments, No. 211.)
This publication is provided for historical reference only and the information may be out of date.

How Important Is the Problem?
Adverse events (AEs) associated with medical treatments are a major source of morbidity and mortality.1-4 Studies have shown that the incidence of AEs ranges from 3 percent to 17 percent of hospitalized patients,5 and that about 50 percent of AEs are judged to be preventable. Most AEs result in minor or temporary disability, but a proportion of them, 4 percent to 21 percent, contribute to death.5
In 1999, the Institute of Medicine (IOM) published a landmark report on medical errors titled “To Err Is Human: Building a Safer Health System.”6 Since the IOM released the report, several studies have examined progress in patient safety and have found little evidence of systematic improvement in the health care system.1,7-10 According to a Healthcare Cost and Utilization Project statistical brief covering 2008, drug-related adverse outcomes were noted in nearly 1.9 million inpatient hospital stays (4.7% of all stays) and 838,000 treat-and-release emergency department visits (0.8% of all visits).10 The Institute for Healthcare Improvement estimated that nearly 15 million instances of medical harm occur in the United States (U.S.) each year.11 Over the five years from 2004 to 2008, drug-related adverse outcomes in the inpatient setting increased by 52 percent.10 Part of this increase could be the result of intensified incident-reporting efforts; even so, keeping patients from being harmed by preventable medical errors will continue to be a challenging goal for the medical community.
As used in this review, an AE is defined as an event that results in unintended harm to the patient by an act of commission or omission rather than by the underlying disease or condition of the patient.12 A medical error is the failure of a planned action to be completed as intended or the use of a wrong plan to achieve an aim.12 AEs include medical errors as well as more general substandard care that can result in harm, such as harm caused by incorrect diagnoses or lack of patient monitoring during treatment.13 Therefore, AEs do not always involve errors, negligence, or poor quality of care and may not always be preventable.
“Near miss” is another term often used in patient safety monitoring. It is defined as an event or a situation that did not produce patient harm, but only because of intervening factors, such as patient health or timely intervention.12
What Is the Patient Safety Practice?
Health care organizations use a wide array of methods to uncover and monitor AEs and errors in medical care. These methods include incident reporting, direct observation of patient care, chart review, analysis of malpractice claims, patient complaints and reports to risk management, executive walk rounds, trigger-tool use, patient interviews, morbidity and mortality conferences, autopsy, and clinical surveillance.14-19 These methods vary in the timing of finding AEs (retrospective or “real-time”), and each has advantages and limitations.14-18
Historically, medical errors were revealed retrospectively through morbidity and mortality committees, autopsy, and malpractice claims data.6,14-16,18 While these methods provide valuable information on medical errors, they are not appropriate for measuring the incidence or prevalence of the errors or events. They might also be limited by hindsight bias (e.g., a tendency to rate care in the context of a bad outcome as substandard), if the evaluators are not blinded to outcome.15
Chart review was often used as the benchmark for estimating the extent of medical harms in hospitals or as the gold standard in patient safety studies to quantify AE rates.15,16 However, chart review is generally resource-intensive.15 Incomplete documentation in the medical record can affect the ability to detect the potential causes of AEs.15 Near misses that produce no injury are rarely detected by this method.15,16 The reliability (precision) of the judgments about the presence of AEs by chart reviewers could also be low.15
Incident reporting systems are a popular mechanism, and the majority of hospitals rely on them to uncover internal threats to patient safety.1-4,20,21 Since the IOM endorsed incident reporting systems in its landmark report on patient safety, 27 states and the District of Columbia have established hospital AE reporting systems.13,22 Reporting systems, which include provider surveys and structured interviews, can provide rich information about the medical errors that lead to AEs. Incident reporting systems can identify latent errors (“system problems”) not uncovered by some other methods and thus can be used to improve patient safety.13,15,16 In comparison with comprehensive chart review, incident reporting is also relatively inexpensive.15
However, like other methods for detecting safety problems, incident reporting has its own limitations. Incident reporting systems alone cannot reliably measure the incidence and prevalence of errors and AEs.20,23 Providers may not report errors because of busy schedules, concerns about potential lawsuits, fear that their reputations could be tarnished, or misperceptions about what constitutes patient harm.20 As a result, reported incidents may represent only a portion of serious incidents and may misdirect detailed investigation efforts toward less important targets.16,18,20,23,24 Additionally, the rate of incidents reported over time may not reflect real changes in safety at an institution: an increased rate may simply indicate an improved commitment by the institution to identifying medical errors rather than a true rise in medical hazards.23
In recent years, with the adoption of electronic medical records, computerized surveillance, including the use of electronic triggers, has become an increasingly popular method for identifying certain types of medical errors or AEs, particularly those related to medication use.25-28 By integrating multiple data sources (e.g., electronic medical records, laboratory, pharmacy, billing), computerized surveillance can efficiently detect medical errors and AEs and could provide real-time information for preventing harm to patients from errors in medical treatment.25-28 Electronic medical record-based surveillance of diagnostic errors has also been reported.29 However, the accuracy and reliability of computerized surveillance tools need further study.30 The initial cost of the systems remains another barrier to implementation.25
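To make the trigger concept concrete, the following is a minimal Python sketch of how a rule-based surveillance system might join pharmacy and laboratory data to flag candidate ADEs for clinician review. The record structures, field names, and thresholds are hypothetical illustrations (an elevated INR in a patient on warfarin and a low glucose in a patient on insulin are classic ADE triggers); the sketch is not drawn from any of the systems evaluated in the cited studies.

```python
# Minimal illustrative sketch of rule-based ADE surveillance.
# All record structures, field names, and thresholds are hypothetical;
# production systems integrate EMR, laboratory, pharmacy, and billing feeds.

from dataclasses import dataclass

@dataclass
class LabResult:
    patient_id: str
    test: str       # e.g., "INR", "glucose"
    value: float

@dataclass
class MedOrder:
    patient_id: str
    drug: str       # e.g., "warfarin", "insulin"

# Each trigger pairs a high-risk drug with a lab abnormality that
# suggests a candidate adverse drug event.
TRIGGERS = [
    ("warfarin", "INR", lambda v: v > 5.0),      # possible over-anticoagulation
    ("insulin", "glucose", lambda v: v < 50.0),  # possible hypoglycemia
]

def screen(labs: list, orders: list) -> list:
    """Return candidate ADEs (patient, drug, test, value) for human review."""
    on_drug = {(o.patient_id, o.drug) for o in orders}
    hits = []
    for lab in labs:
        for drug, test, is_abnormal in TRIGGERS:
            if (lab.patient_id, drug) in on_drug and lab.test == test \
                    and is_abnormal(lab.value):
                hits.append((lab.patient_id, drug, lab.test, lab.value))
    return hits

if __name__ == "__main__":
    labs = [LabResult("p1", "INR", 6.2), LabResult("p2", "glucose", 44.0)]
    orders = [MedOrder("p1", "warfarin"), MedOrder("p2", "insulin")]
    for hit in screen(labs, orders):
        print("candidate ADE:", hit)
```

A flag from such a rule is only a trigger for review, not a confirmed event; the studies discussed later in this chapter treat clinician adjudication as a separate step.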
Similarly, other methods for detecting and monitoring patient safety problems (e.g., chart audits assisted by trigger tools, direct observation of patient care, executive walk rounds, administrative data analysis, data warehouses) have their own strengths and weaknesses. We identified several documents that provide overviews of these methods based on systematic or targeted literature reviews.14-16 Table 1 summarizes the purposes of the different methods for detecting patient safety problems. Tables 2 and 3 summarize the strengths and weaknesses of these methods. Because the original documents used different taxonomies for the methods, we compiled the adapted tables in this chapter to provide a more comprehensive overview; as a result, some content in these tables may overlap.
Table 1, Chapter 36
Overview of the purposes of different methods for detecting patient safety problems.
Table 2, Chapter 36
Advantages and disadvantages of different methods used to measure errors and adverse events in health care (from the Thomas and Petersen study).
Table 3, Chapter 36
Advantages and disadvantages of different methods for hospitals to monitor for internal patient safety problems (from the Shojania study).
As Tables 2 and 3 demonstrate, health care organizations use a wide array of methods to detect AEs and medical errors.14-19 Many of these methods (e.g., trigger tools) can be further categorized by the problems targeted (e.g., medication-related medical errors or iatrogenic infections) and by the tools, algorithms, and data sources used. Given the limited timeframe for this review, this chapter focuses on general approaches to detecting patient safety problems that involve using multiple methods (e.g., incident reporting, executive walk rounds, clinical surveillance, chart review, and trigger tools) to collect data.
We primarily reviewed studies that compared the utility of different methods. However, comparison studies that used one method as a gold standard to validate another were not included in this chapter because, in essence, such studies still focus on a single method (i.e., the method being validated). We believe that understanding the strengths and weaknesses of the various methods is crucial for decisionmakers who must form an effective strategy for monitoring patient safety problems that is appropriate for their organizations.
Readers who are seeking information on individual methods can refer to studies and reviews specifically focusing on those methods. As we reviewed the literature for this chapter, we identified a large number of publications focusing on individual methods, particularly in the areas of incident reporting, chart review, and trigger tools. Some systematic or targeted reviews provided insightful summaries about commonly used methods.13,18,22,31,32
Why Should This Patient Safety Practice Work?
Detection of AEs is a primary step toward achieving a safe health care system. In the report “Safe Practices for Better Healthcare—2010 Update,” the National Quality Forum stated that health care organizations must systematically identify and mitigate patient safety risks and hazards with an integrated approach in order to continuously drive down rates of preventable patient harm.33 As several landmark studies have suggested, medical errors often reflect system failures in which care practices are inconsistent among health care professionals.6,34,35 By systematically uncovering these errors and analyzing their causes, health care institutions can identify defects in processes of care and design system changes to prevent the errors.18,19,23
In its 1999 report, “To Err Is Human: Building a Safer Health System,” the IOM also acknowledged the need to learn from medical errors and recommended establishing mandatory incident reporting systems as part of an approach to improving safety.6 The report noted that one cause of medical errors is the lack of reliable data on their frequency, which limits the ability to identify the problem's origins and to develop initiatives to resolve it. A subsequent IOM report, “Crossing the Quality Chasm: A New Health System for the 21st Century,” reinforced the need for reliable data and for evidence-based policies and practices.36 By performing root-cause analyses (in-depth examinations of the data to identify factors in the care process that contribute to errors) and implementing corrective action plans, health care organizations may be able to address system and process failures so that potential errors are prevented in the future.5,7,22,23
What Are the Beneficial Effects of the Patient Safety Practice?
Measuring the beneficial effects of a safety-problem detection approach is not always straightforward. Few studies have measured changes in health outcomes that are brought about by implementing a safety-problem detection method. Designing rigorous studies to establish a direct connection between the method and any patient safety outcomes is challenging. The effectiveness, if any, of a safety-problem detection method may not always translate into better patient outcomes. These outcomes rely not only on how promptly and accurately the problems are identified but also on how the safety data are used in root-cause analyses and whether the corrective action plans are implemented effectively.5,6,22,23,36 If the safety data were misinterpreted or the action plans were not executed successfully, no improvement in safety outcomes would be observed regardless of the effectiveness of the detection method per se.6,22
Additionally, accurately estimating the true prevalence of safety problems is almost impossible, particularly for medical errors that cause no harm.23 While chart review has been used as the gold standard in some patient safety studies to quantify AE rates, it rarely detects medical errors that produce no harm and may also miss other safety problems because of incomplete documentation in the medical record.15,16 When an increased number of medical errors is identified, it is difficult to determine whether the finding reflects deteriorating performance in risk management or improved efforts to uncover errors. Likewise, a decreased number of detected safety problems could be the result of effective risk management or could simply reflect inadequate efforts to find the problems.
For these reasons, empirically measuring the impact of safety-problem detection methods on patient outcomes is rarely feasible. The beneficial effects of these detection methods have therefore often been judged partly on data and partly on assumptions. If data suggest that a method detected medical errors that had not been found by other means, or detected errors in a more timely fashion than other mechanisms, the method is assumed to benefit patient safety. While these assumptions appear reasonable, the data do not provide direct evidence that a detection method will lead to improved patient safety outcomes.
As discussed, we primarily reviewed studies that compared the utility of different methods. Our search identified one systematic review published by the World Health Organization (WHO) in December 2003.14 This study by Michel reviewed methods for assessing the nature and scale of harm caused by health systems. The objective of the study was to identify the strengths and weaknesses of available methods according to a defined set of criteria. These criteria included the following:14
- Effectiveness in capturing the extent of harm (in different environments).
- Availability of reliable data (judged by interobserver reliability).
- Suitability for large-scale or small, repeated studies. (Large-scale studies refer to national and regional studies. Small, repeated studies are carried out for a limited period at the hospital or local level.)
- Costs (financial, human resources, time, and burden on system).
- Effectiveness in influencing policy (focused on national, regional, or local policy or strategic programs).
- Effectiveness in influencing hospital and local safety procedures and outcomes.
- Synergy with other domains of quality of care.
This set of criteria was defined by the WHO Working Group in “Patient Safety: Rapid Assessment Methods for Assessing Hazards” in December 2002. The first four criteria focus on the intrinsic characteristics of the methods: their validity, reliability, and cost. The last three criteria relate to the ability of the methods to trigger improvements in safety culture and the quality of safety programs. The review covered 262 relevant studies. With the exception of the comparative studies available for assessing effectiveness in capturing the extent of harm, the literature consisted mostly of descriptive studies. For the review, Michel considered the data reported in the included studies as well as the opinions of the studies' authors.
The study rated each method on all seven criteria to produce a summary of its key strengths and limitations.14 When valid information was available, the author rated the criteria from 1 (least favorable) to 4 (most favorable). A study was considered “valid” when it provided an appropriate description of the method (sampling strategy, data collection, and data analysis) in line with current standards. The lowest level (1) indicates low effectiveness, suitability, or availability, or very high cost. Where the amount of evidence-based data was small, the author noted “to be confirmed.” The evidence-based ratings for each method in the seven areas are provided in Table 4. In the absence of valid data, the author used a subjective rating scale from 1 (least favorable) to 4 (most favorable), based on the opinions expressed in the studies reviewed. These opinion-based ratings are provided in Table 5. Both Table 4 and Table 5 were based on literature from developed countries. The author also reviewed literature from developing countries, but that information is not discussed in this chapter.
Table 4, Chapter 36
Evidence-based rating of the main methods used in developed countries for estimating hazards in health care systems.
Table 5, Chapter 36
Subjective rating, where there was no evidence-based data, of the main methods used in developed countries for estimating hazards in health care systems.
The WHO study revealed that the methods for assessing the nature and scale of harm caused by health systems have different purposes (Table 1), strengths, and limitations (Table 4 and Table 5).14 The main conclusion of the study was that these methods do not compete with each other. Instead, they complement each other by providing different levels of qualitative and quantitative information. The list of methods and the illustrative ratings (Table 4 and Table 5) provided by the study may serve as a starting point for choosing appropriate methods for detecting harms caused by health organizations. The author suggested that identification of appropriate methods must take into account the distinct environmental factors faced by each health care organization or region.
This WHO study also had limitations.14 First, the studies included for review varied in quality and quantity. For some methods of interest, such as interviews with health care providers, analysis of administrative data, or confidential inquiries, few studies were available for review. Some other methods that the author thought might be useful for detecting safety problems, such as single case analysis and focus group discussions, were not covered by the study. Second, the rating systems and criteria used in the study for judging the strengths or weaknesses of the methods were not adequately validated. The assessment of the methods was generally subjective rather than objective. Third, because of the wide variety of studies reviewed, the author was not able to use explicit criteria for quality assessment of the studies.
In addition to the WHO review, we also identified several primary studies that compared the utility of various methods for monitoring AEs or medical errors. In 2007, Olsen and colleagues compared the use of incident reporting, pharmacist surveillance, and local real-time record review for the recognition of clinical risks associated with hospital inpatient care.37 Using the three methods, they prospectively collected data on AEs for 288 patients discharged from an 850-bed general hospital in the National Health Service in the UK. The study found little overlap in the nature of the events detected by the three methods. Record review detected 26 AEs and 40 potential AEs (PAEs) occurring during the index admission. Incident reporting detected 11 PAEs and no AEs. Pharmacy surveillance found 10 medication errors, all of which were PAEs. The study concluded that incident reporting alone does not provide an adequate assessment of clinical AEs and that a variety of methods must be used to provide a full picture of the safety conditions in a health care organization.
In 2008, Wetzels and colleagues compared the validity and usefulness of five methods for identifying AEs in general practice.38 The five methods were physician-reported AEs, pharmacist-reported AEs, patients' experiences of AEs, assessment of a random sample of medical records, and assessment of all patients who died. In this prospective observational study, a total of 68 events were identified using these methods. The patient survey identified the most events and the pharmacist reports the fewest. No overlap among the methods was detected. The authors concluded that a mix of methods is needed to identify AEs in general practice.
A study by Ferranti and colleagues compared the results of two adverse drug event (ADE) detection methods—voluntary reporting and computerized surveillance—at a large academic medical center.39 This 2008 study analyzed the medications most likely to cause harm and evaluated the strengths and weaknesses of each detection system. During a 7-month period, computerized surveillance detected 710 ADEs (6.93 per 1,000 patient days), whereas voluntary reporting identified 205 ADEs (1.96 per 1,000 patient days). For each major drug category (anticoagulants, hypoglycemic agents, narcotics and benzodiazepines, and miscellaneous), the two methods detected significantly different event rates.39 Most surveillance-identified events were hypoglycemia related, whereas most voluntarily reported events fell into the miscellaneous category. Of the 875 unique ADEs, only 40 were detected by both systems. The study's findings underscored the synergistic nature of the two ADE detection approaches: surveillance provides quantitative data to estimate the actual rate of ADEs, while voluntary reporting contributes qualitative evidence that can prompt the development of future surveillance rules and identify areas of emerging risk. The authors concluded that the two detection methods should be used together to provide a full picture of ADE-related patient safety problems.
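For clarity, these figures are event counts normalized to patient days. The short sketch below shows the arithmetic; the report's patient-day denominator for the 7-month period is not reproduced here, so the value used is a hypothetical round number chosen only to approximate the published rates.

```python
# Computing ADE rates per 1,000 patient days (illustrative).
# The patient-day denominator is hypothetical; only the event counts
# (710 and 205) and the resulting published rates come from the study.

def rate_per_1000_patient_days(events: int, patient_days: float) -> float:
    return events / patient_days * 1000.0

patient_days = 102_000  # hypothetical denominator for the 7-month period

print(rate_per_1000_patient_days(710, patient_days))  # surveillance: ~6.96 (reported: 6.93)
print(rate_per_1000_patient_days(205, patient_days))  # voluntary reports: ~2.01 (reported: 1.96)
```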
In 2010, Levtzion-Korach and colleagues published a study that examined and compared five AE detection methods in one hospital.40 The methods included a Web-based voluntary incident reporting system, medical malpractice claims, patient complaints, the hospital risk management database, and executive walk rounds. These methods varied in the timing of the reporting (retrospective or prospective), severity of the events, and profession of the reporters. The five disparate data sources at the hospital captured about 15,000 problems. The authors systematically classified the detected problems into 23 categories using a taxonomy that they developed. The study found that each method identified important safety problems that were generally not captured by any of the other methods.40 The following are the common categories of safety problems detected using the five methods compared in the study:
- Spontaneous reporting: patient identification issues, falls, and medication problems
- Malpractice claims: issues with clinical judgment related to diagnosis and treatment, communication, and technical skills and problems with medical records (incomplete, illegible, or missing)
- Patient complaints: issues with communication, ancillary services (e.g., patient transport, kitchen, housekeeping), and administration (admission and discharge processes, scheduling)
- Risk management: issues with technical skills, patient and family behavior (compliance issues, unusual behavior by a patient or family members), administration, and clinical judgment
- Executive walk rounds: problems with equipment, electronic medical records and other such technologies, and infrastructure (work environment, security)
Communication problems were common among patient complaints and malpractice claims. Clinical judgment problems were the leading category for malpractice claims. Walk rounds identified issues with equipment and supplies. AE reporting systems highlighted identification issues, especially mislabeled specimens. The authors concluded that, to obtain a comprehensive picture of their patient safety problems and to develop priorities for improving safety, hospitals should use a broad portfolio of approaches and then synthesize the messages from all individual approaches into a collated and cohesive whole.
In another 2010 study, the Office of Inspector General of the Department of Health and Human Services compared the usefulness of five safety event screening methods: nurse reviews, analysis of present-on-admission (POA) indicators, Medicare beneficiary interviews, hospital incident reports, and analysis of patient safety indicators.41 The study used a sample of 278 Medicare beneficiary hospitalizations selected from all Medicare discharges from acute care hospitals in two selected counties during a 1-week period in August 2008. The investigators compared events flagged by each screening method to the 120 events identified and/or confirmed through physician reviews. The study found that nurse reviews and POA analysis identified the greatest number of safety events. Nurse reviews identified 93 of the 120 confirmed safety events and POA analysis identified 61 events. Beneficiary interviews identified 22 events, and the remaining two screening methods identified 8 events each. Of the 120 events, 55 (46%) were identified by only one screening method. Nurse reviews identified 35 events (29% of the 120 events) not flagged by any other screening method. POA analysis alone flagged 14 events (12% of the 120 events).
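The 46 percent figure is an overlap count: the share of confirmed events flagged by exactly one of the five screening methods. The sketch below illustrates only that counting logic; the event identifiers and per-method sets are toy placeholders, not the OIG data.

```python
# Counting confirmed events flagged by exactly one screening method.
# Event IDs and per-method sets are toy placeholders; only the counting
# logic mirrors the overlap analysis described above.

from collections import Counter

methods = {
    "nurse_review":    {"e1", "e2", "e3", "e4"},
    "poa_analysis":    {"e2", "e5"},
    "interviews":      {"e3", "e6"},
    "incident_report": {"e7"},
    "psi_analysis":    {"e7"},
}

# How many methods flagged each event?
flags = Counter(event for found in methods.values() for event in found)

only_one = sorted(e for e, n in flags.items() if n == 1)
print(f"{len(only_one)} of {len(flags)} events flagged by a single method: {only_one}")
```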
We also reviewed a study by Tinoco and colleagues that compared a computerized surveillance system (CSS) with manual chart review (MCR) for detecting inpatient ADEs and hospital-associated infections (HAIs).25 The authors retrospectively analyzed the events detected by the two methods by event type. From a sample of 2,137 patient admissions between October 2000 and December 2001, they identified AEs that were detected only by MCR, only by CSS, or by both methods. The study found that CSS detected more HAIs than MCR (92% vs. 34%), whereas the two methods detected similar proportions of ADEs (52% vs. 51%). The agreement between the systems was 26 percent for HAIs and 3 percent for ADEs. The study also found that MCR detected some events missed by CSS, largely by drawing on information in physician narratives. The authors concluded that integrating information from physician narratives into the CSS using natural language processing would improve the detection of ADEs more than that of HAIs.
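The study's precise definition of between-system agreement is not reproduced here; one common choice for two detection systems is a Jaccard-style overlap, that is, events detected by both systems divided by events detected by either. The sketch below shows that calculation with hypothetical counts.

```python
# A Jaccard-style agreement between two detection systems: events found
# by both, divided by events found by either. Counts are hypothetical;
# the Tinoco study's exact agreement definition may differ.

def overlap_agreement(found_by_both: int, found_by_either: int) -> float:
    return found_by_both / found_by_either

# e.g., 26 events found by both systems out of 100 found by at least one
print(f"{overlap_agreement(26, 100):.0%}")  # -> 26%
```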
A consistent theme emerged from the findings of the studies reviewed for this section: different methods for detecting patient safety problems overlap very little in the problems they detect. These methods complement each other and should be used in combination to provide a comprehensive safety picture of a health care organization. Detailed information on the studies reviewed in this section (except the WHO report14) is provided in Appendix D. Because the body of evidence consists of studies of different designs, the overall strength of evidence was not assessed.
What Are the Harms of the Patient Safety Practice?
None of the studies that we reviewed reported any harm directly caused by the implementation of a method for monitoring patient safety problems. In theory, however, a method that often fails to capture important AEs or medical errors may mislead a health care organization about its true safety status and delay the resolution of safety problems, potentially leading to patient harm. Additionally, the various detection methods may compete with each other for the limited resources available for risk management in an organization. Adopting a relatively ineffective method might divert resources from more effective alternatives and thus reduce the organization's overall ability to uncover safety problems. This loss of detection capability, in turn, could increase harm to the patients treated in the organization. Designing rigorous studies to empirically test these hypotheses, however, is difficult.
How Has the Patient Safety Practice Been Implemented, and in What Contexts?
As previously described, a wide variety of methods exist for detecting patient safety problems, and this chapter focuses only on evidence from studies that compared these methods. The methods compared were implemented differently, reflecting differences in the primary problems targeted, the tools used, the resources required, the staff involved, and the timing (retrospective or “real-time”) of detection (see Appendix D, the “Description of PSP” column).14-19
Some of these methods (e.g., incident reporting and trigger tools) can be further categorized (e.g., into mandatory or voluntary incident reporting systems), and each subcategory can itself be implemented differently. For example, at least 27 states and some government agencies (e.g., the U.S. Food and Drug Administration) have established some form of incident reporting system. These reporting systems and programs may differ in the data collected, the tools used, the reporting process, and how the data are shared and used.1,2,13,18,21,22
Similarly, many different types of trigger tools and automated systems exist.31,32 These tools or systems target different problems (e.g., general AEs, ADEs, nosocomial infection, decubitus ulcers, surgical complications) and may involve different data sources, equipment, software, or algorithms. It is not feasible for this chapter to cover the implementation issues for all these methods. Therefore, we describe only the relevant information reported in the comparison studies reviewed for this chapter. This information is provided in Appendix D (refer to the “Description of PSP” and the “Context” columns).
Are There Any Data About Costs?
Accurately estimating the cost associated with implementing strategies for detecting patient safety problems is difficult. The direct cost for this activity may include expenditures for equipment and materials (e.g., computers, software, photocopy machines, paper), facilities and space, and labor for collecting and analyzing data. Indirect, overhead expenses also may need to be counted. These direct and indirect costs vary across health care organizations and regions and constantly change over time.
Our search identified sporadic data about costs for implementing safety problem detection methods. The most recent and relevant data came from the study by Levtzion-Korach.40 This study estimated the direct cost of the five methods used in one hospital (Table 6). It showed that the hospital's expenditures on these systems were estimated to be a one-time cost of $120,000 and an annual cost of almost $1 million. Additionally, we identified some general discussions about which detection methods are generally more expensive or labor-intensive (Table 2 and Table 3). Our search did not identify any full economic evaluation (e.g., cost-effectiveness analysis from the public's perspective) of the burden related to the implementation of various methods for detecting AEs or medical errors.
Table 6, Chapter 36
Estimated costs of systems for detecting patient safety problems in one hospital.
Are There Any Data About the Effect of Context on Effectiveness?
For this chapter, we focus on the evidence only from studies that compared different methods for detecting patient safety problems. It is not feasible for the chapter to review the effect of context on effectiveness for each individual method. We collected data only on the context for the methods being compared in the included studies. These data fall into five categories: the external context, organizational characteristics, teamwork, leadership, and culture (see Appendix D, the “Context” column). However, based on the data collected, no conclusion can be drawn regarding the effect of context on the effectiveness of the detection methods, mainly because these studies were not designed to assess such links.
Nevertheless, the importance of strong leadership, teamwork, and an organization-wide safety culture to the successful implementation of patient safety practices as a whole has been well documented in literature that is beyond the scope of this chapter.3,6,22,23 It is reasonable to expect that leadership, teamwork, and safety culture have the same impact on the implementation of patient safety monitoring strategies. Additionally, external factors (e.g., how governments or the Joint Commission use the safety data reported by hospitals) should also have a significant impact on the effectiveness of these strategies.13,18,21
Conclusions and Comment
The studies reviewed for this chapter consistently suggested that each method for detecting AEs or medical errors has advantages and disadvantages. These various methods do not compete with each other. They identify fairly distinct problems and complement each other by providing different levels of qualitative and quantitative information about patient safety.
Health care organizations are generally faced with a variety of safety problems, such as misdiagnoses, misidentified patients, falls, procedural complications, and medication-related errors. All these problems need to be identified adequately so that hospitals can effectively prioritize the problems on the basis of the burden of harm and costs associated with the problems, the availability of effective prevention strategies, and the likelihood of local success in implementing such strategies.3,6,22,23 Therefore, health care organizations should use a broad portfolio of methods to uncover safety problems and then synthesize the data collected into a comprehensive picture.40
For administrators and risk management professionals, a primary challenge is how to make a rational choice among a large number of methods to build a portfolio appropriate for their organizations.40 While no simple formula exists to guide the decisionmaking process, the composition of the portfolio generally depends on the safety problems most relevant to the organization and the resources available for the risk management effort.16 The bottom line is that the choice of a specific method by a health care organization might not be as important as the decision to use more than one method.16 The information that we compiled in this chapter is intended to serve as a starting point for health care organizations to reconsider their general approach to monitoring patient safety problems. Future research needs to assess the effectiveness of different portfolios of methods and provide practical guidance on how to combine the information collected using different methods into one safety picture. A summary table is located below (Table 7).
Table 7, Chapter 36
Summary table.
References
1. Indiana medical error reporting system: report for 2009. Indianapolis (IN): Indiana State Department of Health; 2010 Aug 30. 56 p. www.in.gov/isdh/files/2009_MERS_Report.pdf.
2. Utah Department of Health; HealthInsight; Utah Hospitals & Health Systems Association (UHA). 2009 Utah sentinel events data report: identifying opportunities for improvement. Utah: Utah Department of Health; 2010 Mar. 9 p. http://health.utah.gov/psi/pubs/sentinel_events09.pdf.
3. Adverse health care events reporting system: what have we learned? St. Paul (MN): Minnesota Department of Health; 2009 Jan. 32 p.
4. Thompson WC Jr. The high costs of weak compliance with the New York State hospital adverse event reporting and tracking system: policy report. New York (NY): New York State Department of Health; 2009 Mar 9. 54 p. www.comptroller.nyc.gov.
5. Zegers M, de Bruijne MC, de Keizer B, et al. The incidence, root-causes, and outcomes of adverse events in surgical units: implication for potential prevention strategies. Patient Saf Surg. 2011;5:13. www.ncbi.nlm.nih.gov/pmc/articles/PMC3127749/pdf/1754-9493-5-13.pdf. [PMC free article: PMC3127749] [PubMed: 21599915]
6. Kohn LT, Corrigan JM, Donaldson MS, editors; Institute of Medicine, Committee on Quality of Health Care in America. To err is human: building a safer health system. Washington (DC): National Academy Press; 1999 Nov 1. 223 p. http://stills.nap.edu/books/0309068371/html. [PubMed: 25077248]
7. Wachter RM. Patient safety at ten: unmistakable progress, troubling gaps. Health Aff (Millwood). 2010 Jan-Feb;29(1):165–73. http://content.healthaffairs.org/content/29/1/165.full.pdf. [PubMed: 19952010]
8. Leape LL, Berwick DM. Five years after To Err Is Human: what have we learned? JAMA. 2005 May 18;293(19):2384–90. [PubMed: 15900009]
9. Altman DE, Clancy C, Blendon RJ. Improving patient safety--five years after the IOM report. N Engl J Med. 2004 Nov 11;351(20):2041–3. [PubMed: 15537902]
10. Lucado J, Paez K, Elixhauser A. Medication-related adverse outcomes in U.S. hospitals and emergency departments, 2008. HCUP Statistical Brief #109 [internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2011 Apr [cited 2011 Sep 13]. [PubMed: 21595139]
11. Institute for Healthcare Improvement. Protecting 5 million lives from harm [internet]. Cambridge (MA): Institute for Healthcare Improvement; [cited 2011 Nov 10].
12. National Quality Forum (NQF). NQF patient safety terms and definitions. Washington (DC): National Quality Forum (NQF); 2009 Dec. 6 p. www.qualityforum.org/Topics/Safety_Definitions.aspx.
13. Office of Inspector General. Adverse events in hospitals: state reporting systems [OEI-06-07-00471]. Washington (DC): Department of Health and Human Services; 2008 Dec. 37 p. http://oig.hhs.gov/oei/reports/oei-06-07-00471.pdf.
14. Michel P. Strengths and weaknesses of available methods for assessing the nature and scale of harm caused by the health system: literature review. Geneva: World Health Organization (WHO); 2003. 59 p. www.who.int/patientsafety/research/P_Michel_Report_Final_version.pdf.
15. Thomas EJ, Petersen LA. Measuring errors and adverse events in health care. J Gen Intern Med. 2003 Jan;18(1):61–7. [PMC free article: PMC1494808] [PubMed: 12534766]
16. Shojania KG. The elephant of patient safety: what you see depends on how you look. Jt Comm J Qual Patient Saf. 2010 Sep;36(9):399–401. [PubMed: 20873672]
17. Michel P, Quenon JL, de Sarasqueta AM, et al. Comparison of three methods for estimating rates of adverse events and rates of preventable adverse events in acute care hospitals. BMJ. 2004 Jan 24;328(7433):199. [PMC free article: PMC318484] [PubMed: 14739187]
18. World Alliance for Patient Safety. WHO draft guidelines for adverse event reporting and learning systems. Geneva (Switzerland): World Health Organization; 2005. 80 p. www.who.int/patientsafety/events/05/Reporting_Guidelines.pdf.
19. Leape LL. Reporting of adverse events. N Engl J Med. 2002 Nov 14;347(20):1633–8. [PubMed: 12432059]
20. Levinson DR. Hospital incident reporting systems do not capture most patient harm [OEI-06-09-00091]. Washington (DC): Department of Health and Human Services, Office of Inspector General; 2012 Jan. 42 p.
21. United States Government Accountability Office. Health-care-associated infections in hospitals: an overview of state reporting programs and individual hospital initiatives to reduce certain infections. Washington (DC): United States Government Accountability Office; 2008 Sep. 55 p. www.gao.gov/products/GAO-08-808.
22. The power of safety: state reporting provides lessons in reducing harm, improving care [internet]. Rockville (MD): Agency for Healthcare Research and Quality (AHRQ); 2010 [cited 2011 Jul 27]. 2 p. http://psnet.ahrq.gov/resource.aspx?resourceID=18516.
23. Shojania KG, Duncan BW, McDonald KM, et al. Making health care safer: a critical analysis of patient safety practices. Evidence Report/Technology Assessment No. 43. Rockville (MD): Agency for Healthcare Research and Quality; 2001. p. i–x, 1–668. [PMC free article: PMC4781305] [PubMed: 11510252]
24. Brunicardi FC, Heggeness MH, Andersen DK, et al. Orthopedic surgery. In: Schwartz's principles of surgery [database online]. 9th ed. New York (NY): McGraw-Hill Companies, Inc.; 2010 [cited 2011 May 26]. 2 p. www.accessmedicine.com/content.aspx?aID=5028211.
25. Tinoco A, Evans RS, Staes CJ, et al. Comparison of computerized surveillance and manual chart review for adverse events. J Am Med Inform Assoc. 2011 Jul-Aug;18(4):491–7. [PMC free article: PMC3128408] [PubMed: 21672911]
26. Kilbridge PM, Noirot LA, Reichley RM, et al. Computerized surveillance for adverse drug events in a pediatric hospital. J Am Med Inform Assoc. 2009 Sep-Oct;16(5):607–12. [PMC free article: PMC2744710] [PubMed: 19567791]
27. Ferranti J, Horvath MM, Cozart H, et al. Reevaluating the safety profile of pediatrics: a comparison of computerized adverse drug event surveillance and voluntary reporting in the pediatric environment. Pediatrics. 2008 May;121(5):e1201–7. [PubMed: 18450863]
28. Rommers MK, Teepe-Twiss IM, Guchelaar HJ. A computerized adverse drug event alerting system using clinical rules: a retrospective and prospective comparison with conventional medication surveillance in the Netherlands. Drug Saf. 2011 Mar 1;34(3):233–42. [PubMed: 21332247]
29. Singh H, Giardina TD, Forjuoh SN, et al. Electronic health record-based surveillance of diagnostic errors in primary care. BMJ Qual Saf. 2012 Feb;21(2):93–100. http://qualitysafety.bmj.com/content/21/2/93.full.pdf. [PMC free article: PMC3680372] [PubMed: 21997348]
30. Murff HJ, FitzHenry F, Matheny ME, et al. Automated identification of postoperative complications within an electronic medical record using natural language processing. JAMA. 2011 Aug 24;306(8):848–55. [PubMed: 21862746]
31. Agency for Healthcare Research and Quality. Triggers and targeted injury detection systems (TIDS) expert panel meeting (held Jun 30, 2008): conference summary. Rockville (MD): Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services; 2009 Feb. pp. 1–57. AHRQ Publication No. 09-0003. www.ahrq.gov/qual/triggers/
32. Govindan M, Van Citters AD, Nelson EC, et al. Automated detection of harm in healthcare with information technology: a systematic review. Qual Saf Health Care. 2010 Oct;19(5):e11. [PubMed: 20671081]
33. National Quality Forum. Safe practices for better healthcare – 2010 update: a consensus report [internet]. Washington (DC): National Quality Forum; 2010 Apr [cited 2011 Nov 10]. 54 p. www.qualityforum.org/Publications/2010/04/Safe_Practices_for_Better_Healthcare_%e2%80%93_2010_Update.aspx.
34. Reason J. Understanding adverse events: human factors. Qual Health Care. 1995 Jun;4(2):80–9. [PMC free article: PMC1055294] [PubMed: 10151618]
35. Vincent C. Understanding and responding to adverse events. N Engl J Med. 2003 Mar 13;348(11):1051–6. [PubMed: 12637617]
36. Institute of Medicine, Committee on Quality of Health Care in America; Corrigan JM, et al., editors. Crossing the quality chasm: a new health system for the 21st century. Washington (DC): National Academy Press; 2001. 337 p. http://search.nap.edu/books/0309072808/html/ [PubMed: 25057539]
37. Olsen S, Neale G, Schwab K, et al. Hospital staff should use more than one method to detect adverse events and potential adverse events: incident reporting, pharmacist surveillance and local real-time record review may all have a place. Qual Saf Health Care. 2007 Feb;16(1):40–4. www.ncbi.nlm.nih.gov/pmc/articles/PMC2464933/pdf/40.pdf. [PMC free article: PMC2464933] [PubMed: 17301203]
38. Wetzels R, Wolters R, van Weel C, et al. Mix of methods is needed to identify adverse events in general practice: a prospective observational study. BMC Fam Pract. 2008;9:35. [PMC free article: PMC2440745] [PubMed: 18554418]
39. Ferranti J, Horvath MM, Cozart H, et al. A multifaceted approach to safety: the synergistic detection of adverse drug events in adult inpatients. J Patient Saf. 2008;4:184–90. http://analytics.dhts.duke.edu/wysiwyg/downloads/Ferranti_JPS_adults.pdf.
40. Levtzion-Korach O, Frankel A, Alcalai H, et al. Integrating incident data from five reporting systems to assess patient safety: making sense of the elephant. Jt Comm J Qual Patient Saf. 2010 Sep;36(9):402–10.
41. Levinson DR. Adverse events in hospitals: methods for identifying events [OEI-06-08-00221]. Washington (DC): Department of Health and Human Services, Office of Inspector General; 2010 Mar. 60 p.