Chapter 2. Methods

Topic Development

This topic was nominated by leaders of the Agency for Healthcare Research and Quality's Patient Safety Portfolio, part of the Center for Quality Improvement and Patient Safety.

The original goals of the project were stated as follows:

The analysis shall build on and expand upon earlier evidence reports and current listing of Safe Practices by the National Quality Forum's (NQF) ‘Safe Practices for Better Healthcare 2010 Update.’ The analysis shall focus on the collection of evidence of the effectiveness of new safe practices that have been developed but not included in the 2010 update, evidence of implementation of current and new safe practices and the adoption of safe practices by health care providers. This analysis shall include the review of scientific literature, other appropriate analyses, and extensive peer review of the draft report. The final report of this project will be used by AHRQ for strategic planning in its patient safety portfolio for future project development and implementation of safe practices. The report will also be used by external organizations such as the NQF, Joint Commission and others in their patient safety efforts.1

The preliminary Key Questions, pending topic refinement, were organized into three categories.

Design, Development, and Testing of New Patient Safety Practices

  • What new patient safety practices (PSPs) have been developed since 2001 and/or are not included in the NQF Safe Practices list in 2010?
  • What is the nature of the safety practice, i.e., clinical, organizational, or behavioral?
  • What risk is the practice intended to prevent or mitigate?
  • Describe how the practice is a bundle of individual components or practices, if applicable.
  • What is the intended setting for the practice, i.e., inpatient, ambulatory, combination, specialty, or clinical domain, and organizational setting?
  • What are the nature, quality, and weight of evidence of the practice's effectiveness?

Implementation of Patient Safety Practices

  • Was the safety practice implemented outside the developing institution?
  • What were the contextual settings in which it was implemented?
  • What were the issues, barriers, problems, successes, and failures in the implementation of the practice?
  • What modifications and/or customizations were made (if any) in the implementation process?
  • What are the different implementation settings outside the developing institution that have been reported for this practice?
  • Describe how the practice has been sustained in its use after initial implementation.
  • Was there any external support for the implementation process, e.g., AHRQ technical support, use by a collaborative, or quality improvement organization (QIO)?

Adoption/Diffusion

  • What is the extent to which the practice has been adopted by multiple institutions or organizations outside the developing institution?
  • Was there any organized activity or program to support the diffusion of this innovation or practice?
  • What, if any, evidence exists on the sustained use of the practice?
  • Has the practice become a requirement for use by any accreditation or credentialing agency or organization?

Project Overview

An overview of the project is depicted in Figure 1. A key aspect of this project is the active participation of a Technical Expert Panel (TEP) comprising a large number of patient safety stakeholders and evaluation methods experts. We retained the participation of the TEP that had participated in a prior AHRQ-supported project, “Assessing the Evidence for Context-Sensitive Effectiveness and Safety of Patient Safety Practices: Developing Criteria” (hereafter referred to as “Context Sensitivity”). The TEP comprised many of the key patient safety leaders in the United States, Canada, and the United Kingdom, including experts in specific PSPs, as well as experts in evaluation methods and people charged with implementing PSPs in hospitals and clinics.

Figure 1 is a diagram illustrating the overview of the project from topic refinement to the review and interpretation of the evidence. The figure begins with the topic refinement, for which an initial list of potential PSPs was created from the following sources: the 2001 Making Health Care Safer report, the Joint Commission, the National Quality Forum's 2010 Update, and miscellaneous sources such as the Leapfrog Group; practices identified in an initial scoping search; and those suggested by our Technical Expert Panel (TEP). This effort resulted in an initial list of 158 potential PSPs, which are outlined in Appendix A. The project team then reviewed this list, combining some topics and renaming others, which resulted in 96 PSPs. The internal project team then divided that list into 35 “includes,” 48 “unsures,” and 13 “excludes.” The next step was to solicit TEP input about the team's decisions, in particular to obtain formal votes on the PSPs that were classified as “unsure” (to accommodate all TEP members, two meetings were held: one on March 24, 2011, and a second on March 31, 2011). This effort resulted in 48 PSPs judged to be of highest priority in terms of the need for an evidence review of effectiveness, implementation, or adoption. Given that an evidence review of this number of topics could not be conducted within the time frame of the project, the project leads consulted the TEP on April 8, 2011, for an assessment of the topics that most merited in-depth reviews. Based on this input, the research team divided the included topics into those that would receive in-depth reviews and those that would receive brief reviews. The topics were then distributed among the four collaborating EPCs: ECRI, Johns Hopkins, RAND, and UCSF/Stanford. The resulting literature reviews were combined to form the draft report. The final step of the process was a face-to-face meeting at the AHRQ office on January 10-11, 2012, to review and interpret the evidence.

Figure 1, Chapter 2. Overview of the project.

We divided the project into three phases: topic refinement, the evidence review, and the critical review and interpretation of the evidence. The project team conducted the topic refinement and the critical review and interpretation of the evidence jointly with the TEP; the project team performed the evidence review.

Topic Refinement

Because the goals of the project were to assess the “evidence of the effectiveness of new safe practices” and the “evidence of implementation of current…safe practices,” practically all PSPs were potentially eligible for inclusion in this review. Thus, our first task was to refine the scope of the topic to something that was achievable within the timeframe and budget for the project; this task was undertaken by the project team and the TEP. Figure 1 presents an overview of how this task was accomplished. We first compiled a list of potential PSPs for the review, starting with the 79 topics in the Making Health Care Safer (MHCS) report (2001)2 and adding practices from the National Quality Forum's 2010 Update, the Joint Commission, and the Leapfrog Group; practices identified in an initial scoping search; and those suggested by our TEP. This effort resulted in an initial list of 158 potential PSPs (see Appendix A).

We then conducted an internal project team process that included amalgamation of some topics and renaming of others, resulting in 96 PSPs. Internal project team triage resulted in our identifying 35 PSPs we believed must be included, 48 PSPs about which we were unsure, and 13 that we believed could be excluded or folded into other PSPs on our “include” list (Table 1). As indicated, we incorporated some of those 13 topics into other topics, such as the monitoring topics. We excluded others that we judged to represent more of a quality issue than a patient safety issue (such as pneumococcal vaccination interventions and regionalizing surgery to high-volume centers), and we judged still others to be too late (warfarin interventions, in light of the emergence of new oral anticoagulants) or too early in development (radio-frequency identification [RFID] devices attached to wandering patients) for consideration.

Table 1, Chapter 2. Initially excluded topics.

We then sought input from our TEP about these decisions, offering them the opportunity to change any of the “include/exclude” decisions, and asked for formal votes on the 48 PSPs classified as “unsure.”

This effort resulted in 48 PSPs judged to be of highest priority in terms of the need for an evidence review of effectiveness, implementation, or adoption, still too large a number of topics to review comprehensively within the given timeframe. Therefore, we asked our TEP to assess whether “breadth” or “depth” was likely to be more valuable for stakeholders—in other words, we asked whether the review should focus on fewer topics in more detail or cover all topics but in less detail. Our TEP recommended a hybrid approach in which some topics would be reviewed in depth, whereas other topics would receive only a “brief review.” Topics could be considered to need only a “brief review” for several reasons: the PSP is already well-established; stakeholders need to know only “what's new?” since the last time this topic was reviewed in depth; new evidence suggests the PSP may not be as effective as originally believed, so it is no longer a priority safety practice to implement; or it is an emerging PSP with limited evidence yet accumulated about it.

For each of the 48 topics, we then solicited formal input from our TEP about the need for an in-depth review, a brief review, or no review at all. Table 2 presents the results in terms of the proportion of TEP members who recommended a topic undergo an in-depth review, a brief review, or no review at all. We designated all topics that received 50 percent or greater support for an in-depth review to be reviewed in depth; all other topics were designated for brief reviews. No topic on the list received 50 percent or greater support for no review at all. The list underwent further modification, as some PSPs originally designated as separate topics were judged to be sufficiently similar to be covered together in one review; examples included the topics related to transitions in care and those related to monitoring.
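
To make this vote-counting rule concrete, the following is a minimal sketch, in Python, of the 50-percent decision rule described above. The vote tallies shown are hypothetical, and the code is an illustration rather than the project's actual tooling.

```python
# Illustrative sketch of the review-level triage rule: a topic with 50
# percent or greater TEP support for an in-depth review is reviewed in
# depth; all other topics receive brief reviews. (No topic received 50
# percent or greater support for "no review at all.") Vote counts below
# are hypothetical.

def assign_review_level(votes: dict[str, int]) -> str:
    total = sum(votes.values())
    if total and votes.get("in_depth", 0) / total >= 0.5:
        return "in-depth review"
    return "brief review"

print(assign_review_level({"in_depth": 11, "brief": 7, "none": 2}))  # in-depth review
print(assign_review_level({"in_depth": 6, "brief": 12, "none": 2}))  # brief review
```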

Table 2, Chapter 2. Proportion of technical expert panelists expressing a preference for the level of evidence review for each PSP.

A final set of modifications to this scope occurred during the course of the reviews:

  • Our PSP topic on pressure ulcers was modified to focus solely on implementation, as an EPC review of the effectiveness of pressure ulcer prevention interventions is currently underway.
  • We combined the topics “diagnostic errors” and “notification of test results to patients” into a single in-depth review.
  • The body of literature on simulation methods was sufficiently large that we treated it as an in-depth review.

The review topics were then divided among the participating EPCs. Weekly teleconference calls and email were used to promote common practices in the review process.

Evidence Assessment Framework

The framework for our consideration of the evidence regarding a PSP was worked out as part of the prior AHRQ “Context Sensitivity” project.3 A principal challenge in previous reviews of PSPs has been addressing the question of what constitutes evidence for PSPs. Many practices designed to improve quality and safety are complex sociotechnical interventions whose targets may be entire health care organizations or groups of providers, and these interventions may be targeted at extremely rare events. To address the challenge regarding what constitutes evidence, we recognize that PSPs must be evaluated along two dimensions: (1) the evidence regarding the outcomes of the safety practices, and (2) the contextual factors that influence the practices' use and effectiveness.

Figure 2 presents this framework, depicting a generic PSP that consists of a bundle of components (the individual boxes) and the context within which the PSP is embedded. Important evaluation questions, as depicted on the right, concern effectiveness and harms, implementation, and adoption and spread. We then apply criteria to evaluate each of four factors that together constitute quality (depicted as puzzle pieces in the bottom half of the figure):

Figure 2 is an analytic framework that shows the evidence assessment of patient safety practices. The figure is described in detail above as follows: “The figure depicts a generic PSP that consists of a bundle of components (the individual boxes) and the context within which the PSP is embedded. Important evaluation questions, as depicted on the right-hand side, concern effectiveness and harms, implementation, and adoption and spread. Criteria are then used to evaluate each of four factors comprising quality (depicted as puzzle pieces in the bottom half of the figure): 1) constructs about the PSP, its components, context factors, outcomes, as well as ways to measure these constructs accurately; 2) logic model or conceptual framework about the expected relationships among these constructs; 3) internal validity to assess the PSP results in a particular setting; and 4) external validity to assess the likelihood of being able to garner the same results in another setting. This information is then synthesized into the whole, meaning an evaluation of the quality (or strength) of the evidence about a particular PSP.”

Figure 2, Chapter 2. Framework for evidence assessment of patient safety practices.

  1. Constructs about the PSP, its components, context factors, outcomes, and ways to measure these constructs accurately;
  2. Logic model or conceptual framework about the expected relationships among these constructs;
  3. Internal validity to assess the PSP results in a particular setting; and
  4. External validity to assess the likelihood of being able to garner the same results in another setting.

We then synthesize this information into an evaluation of the strength of the evidence about a particular PSP.

The principal results of the “Context Sensitivity” project included the following key points.

  • Whereas controlled trials of PSP implementations offer investigators greater control of sources of systematic error than do observational studies, trials often are not feasible, in terms of time or resources. Also, controlled trials are often not possible for PSPs requiring large-scale organizational change or PSPs targeted at very rare events. Furthermore, the standardization imposed by the clinical trial paradigm may stifle the adaptive responses necessary for some quality improvement or patient safety projects. Hence, researchers need to use designs other than randomized controlled trials (RCTs) to develop strong evidence about the effectiveness of PSPs.
  • Regardless of the study design chosen for an evaluation, components that are critical for evaluating a PSP in terms of how it worked in the study site and whether it might work in other sites include the following:
    • Explicit description of the theory for the chosen intervention components, and/or an explicit logic model for “why this PSP should work;”
    • Description of the PSP in sufficient detail that it can be replicated, including the expected change in staff roles;
    • Measurement of contexts;
    • Explanation, in detail, of the implementation process, the actual effects on staff roles, and changes over time in the implementation or the intervention;
    • Assessment of the impact of the PSP on outcomes and possible unexpected effects (including data on costs, when available); and
    • For studies with multiple intervention sites, assessment of the influence of context on intervention and implementation effectiveness (processes and clinical outcomes).
  • High-priority contexts for assessing any PSP implementation include measuring and reporting information for each of the following four domains (a schematic template follows the list):
    • Structural organizational characteristics (such as size, location, financial status, existing quality and safety infrastructure);
    • External factors (such as regulatory requirements, the presence in the external environment of payments or penalties such as pay-for-performance or public reporting, national patient safety campaigns or collaboratives, or local sentinel patient safety events);
    • Patient safety culture (not to be confused with the larger organizational culture), teamwork, and leadership at the level of the unit; and
    • Availability of implementation and management tools (such as staff education and training, presence of dedicated time for training, use of internal audit-and-feedback, presence of internal or external people responsible for the implementation, or degree of local tailoring of any intervention).
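
As an aside for readers who collect such data, the sketch below shows one way to structure the extraction of these four context domains as a simple record. The field names and example values are our own illustration; they are not drawn from the report's data-collection forms.

```python
# Hypothetical extraction template for the four high-priority context
# domains; field names and example values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class PSPContext:
    structural: dict = field(default_factory=dict)        # size, location, financial status, ...
    external: dict = field(default_factory=dict)          # regulation, pay-for-performance, campaigns, ...
    safety_culture: dict = field(default_factory=dict)    # unit-level culture, teamwork, leadership
    implementation_tools: dict = field(default_factory=dict)  # training, audit-and-feedback, tailoring

# Example record for a single implementation site:
ctx = PSPContext(structural={"beds": 400, "setting": "academic"},
                 external={"public_reporting": True})
```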

These principles guided our search for evidence and the way we present our findings in this report (see Table 3).

Table 3, Chapter 2. Format for in-depth reviews.

Evidence Review Process

As already noted, this report presents two types of evidence reviews: in-depth reviews and brief reviews. In this section, we describe the general methods for each type of review. The details of the review processes for individual topics (for example, the search strategies and flow of articles) varied by topic and are described in Appendix C. The evidence reviews were conducted by the project team. Figure 3 presents an outline of the general methods for each type of review.

Figure 3 presents a flow diagram of the general methods for the two types of evidence review conducted for this report: brief reviews and in-depth reviews. The figure has two columns: the first column provides a decision tree and explains the general methods used to conduct an in-depth review; the second column describes the methods used to conduct a brief review. The processes depicted in this figure are described in detail in the sections that follow. Detailed methods and search strategies for each topic are described in Appendix C.

Figure 3, Chapter 2. Evidence review process.

In-Depth Reviews

Many of the 18 topics designated for an in-depth review were likely to have been the subject of a previous systematic review; thus, the review process usually began with a search to identify existing systematic reviews. To assess their potential utility, we followed the procedures proposed by Whitlock and colleagues,5 which essentially meant addressing the following two questions:

  • Is the existing review sufficiently “on topic” to be of use? and
  • Is it of sufficient quality for us to have confidence in the results?

Assessment of whether a review was sufficiently “on topic” was a subjective judgment based on the patients-intervention-comparators-outcomes-timeframe (PICOT) focus of the existing review. To assess the quality of the systematic review, we, in general, used the AMSTAR criteria (see Appendix B).6 If an existing systematic review was judged to be sufficiently “on topic” and of acceptable quality, then based on that review, the following searches were undertaken:

  • A full update search, in which databases were searched for new evidence published since the end date of the search in the existing systematic review; and/or
  • A search for “signals for updating,” according to the criteria proposed by Shojania and colleagues,7 which involved a search of high-yield databases and journals for “pivotal studies” whose results might be a signal that a systematic review is out-of-date.

Based on the results of these searches, any evidence identified via the update search or the “signals” search was added to the evidence base from the existing systematic review; if no new evidence was identified, the existing review was considered up to date.

For some topics, no systematic review could be identified, or those that were identified were either not sufficiently relevant or not of sufficient quality to be used. In those situations, new searches were done using guidance as outlined in the EPC Methods Guide.8
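
The overall decision flow for locating and using an existing systematic review can be summarized schematically. The sketch below, in Python, is a condensed illustration of that flow under our reading of the steps above; it is not the project's operating procedure, and the boolean inputs stand in for the subjective PICOT and AMSTAR judgments.

```python
# Condensed sketch of the in-depth review decision flow described above.
# Inputs stand in for subjective judgments and search results; this is an
# illustration, not project code.

def plan_in_depth_review(review_found: bool, on_topic: bool,
                         acceptable_amstar: bool, new_evidence: bool) -> str:
    if not (review_found and on_topic and acceptable_amstar):
        # No usable systematic review: conduct new searches per the
        # EPC Methods Guide.
        return "conduct new searches"
    if new_evidence:
        # The update search and/or "signals" search found newer studies.
        return "supplement existing review with newer evidence"
    return "treat existing review as up to date"

print(plan_in_depth_review(True, True, True, False))  # treat existing review as up to date
```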

As indicated above, evidence about context, implementation, and adoption is a key aspect of this review. We searched for evidence on these topics in two ways:

  • We looked for and extracted data about contexts and implementation from the articles contributing to the evidence of effectiveness; and
  • We identified “implementation studies” from our literature searches. “Implementation studies” focus on the implementation process, especially those elements of the implementation demonstrated or believed to be of particular importance for the success, or lack of success, of the intervention. To be eligible, implementation studies needed to either report, or be linked to reports of, effectiveness outcomes.

Brief Reviews

Brief reviews are explicitly not full systematic reviews or updates. The goals of the brief reviews varied by PSP, according to the needs of stakeholders. The assessment could focus primarily on information about the effectiveness of an emerging PSP or the implementation of an established PSP; alternatively, the review could explore whether new evidence calls into question the effectiveness of an existing PSP. Thus, the methods used to conduct the brief reviews varied according to the goals of each review. However, in general, brief reviews were conducted by an expert in the topic in collaboration with the project team and involved focused literature searches for evidence relevant to the specific need. This evidence was then narratively summarized in a format that also varied with the particular goal.

Assessing Quality of Individual Studies

In general, to assess the quality, or risk of bias, of individual studies contributing evidence of effectiveness to in-depth reviews, we used the criteria published on the Cochrane Effective Practice and Organisation of Care (EPOC) Web site.9 This Cochrane Group is devoted to reviews of interventions designed to improve the delivery, practice, and organization of health care. Thus, it uses quality/risk of bias assessment instruments that are applicable to numerous study designs; criteria are available for controlled before-and-after studies and for time series studies, as well as for randomized trials.

For the many topics included in this review for which we identified an existing systematic review as a starting point for our review, we accepted the original review's assessment of the quality/risk of bias of included studies. In other words, we did not re-score the original studies included in an existing systematic review for risk of bias. A consequence of this decision is that we did not apply the EPOC criteria to assess quality/risk of bias for some topics in this report, but instead relied on the criteria originally chosen for that review, for example, the criteria of the U.S. Preventive Services Task Force.

Implementation studies were not assessed for their quality, as we lacked evidence or expert opinion about the criteria for such an assessment.

Assessing Strength of Evidence for a Patient Safety Practice

Table 4 shows the scheme we employed for assessing the strength of the body of evidence regarding a specific PSP. This scheme starts with elements taken from the EPC Methods Guide on strength of evidence,10 which itself borrows largely from the GRADE scheme,11,12 and incorporates elements about theory, implementation, and context taken from the prior AHRQ “Context Sensitivity” report.3 It includes an assessment of the risk of bias, by whatever criteria were used for a particular PSP, and then adjusts the strength up or down based on standard GRADE criteria and on criteria about the use of theory and description of implementation. The points for scoring are meant only as a guide. Implementation studies were not assessed for strength of evidence.
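
As a rough illustration of how such a scheme operates, the sketch below starts from a provisional grade and moves it up or down according to GRADE-style criteria and the theory and implementation criteria. The levels and adjustment values are invented for illustration; the actual criteria and points are those of Table 4, which is not reproduced here.

```python
# Schematic illustration of strength-of-evidence grading: start from a
# provisional level, then apply upgrades and downgrades. Levels and point
# values here are illustrative, not those of Table 4.

LEVELS = ["insufficient", "low", "moderate", "high"]

def grade_evidence(start_level: int, adjustments: list[int]) -> str:
    """start_level indexes LEVELS; each adjustment is typically +1 or -1."""
    score = start_level + sum(adjustments)
    return LEVELS[max(0, min(score, len(LEVELS) - 1))]

# Moderate starting grade, downgraded for risk of bias, upgraded for an
# explicit logic model:
print(grade_evidence(2, [-1, +1]))  # moderate
```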

Table 4, Chapter 2. Criteria for assigning strength of evidence for effectiveness/harms questions.

Summarizing the Evidence

We expected that users of this report would want a summary of the evidence for each topic. Such summary messages may facilitate uptake of the findings. We summarized the evidence according to the following domains:

Scope of the problem. In general, we addressed two issues: (1) the frequency of the safety problem, and (2) the severity of each average event. For benchmarks, we regarded safety problems that occur approximately once per 100 hospitalized patients as “common;” examples include falls, venous thromboembolism (VTE), potential adverse drug events, and pressure ulcers. In contrast, events an order of magnitude or more lower in frequency were considered “rare;” such events include inpatient suicide and surgical items left inside the patient. The scope must also consider the severity of each event: most falls do not result in injury, and most potential adverse drug events do not result in clinical harm. However, each case of inpatient suicide or wrong-site surgery is devastating.

Strength of evidence for effectiveness. In general, this assessment follows the framework for strength of evidence presented above.

Evidence on potential for harmful unintended consequences. Most PSP evaluators have not explicitly assessed the possibility of harm. Consequently, this domain includes evidence of both actual harm and the potential for harm. The ratings on known or potential harms ranged from high risk of harm to low (or negligible); in some cases, the evidence was too sparse to provide a rating.

Estimate of costs. This domain is speculative, because most evaluations do not present cost data. However, we judged that readers would want at least a rough estimate of cost. Therefore, we used the following categories and benchmarks, noting in places the factors that might cause cost estimates to vary; an illustrative sketch follows the list.

  • Low cost: PSPs that did not require hiring new staff or large capital outlays, but instead involved training existing staff and purchasing some supplies. Examples would include most falls prevention programs, VTE prophylaxis, or medical history abbreviations designated “Do Not Use.”
  • Medium cost: PSPs that might require hiring one or a few new staff, and/or modest capital outlays or ongoing monitoring costs. Examples would include some falls prevention programs, many clinical pharmacist interventions, or participation in the American College of Surgeons Outcomes Reporting System ($135,000/year).
  • High cost: PSPs that required hiring substantial numbers of new staff, considerable capital outlays, or both. Examples would include computerized order entry (because it requires an electronic health record), hiring many nurses to achieve a certain nurse-to-patient ratio, or facility-wide infection control procedures (estimated at $600,000 per year for a single intensive-care unit [ICU]).
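
The sketch below renders these benchmarks as a rough decision rule. The numeric thresholds are hypothetical; the report assigns cost categories qualitatively, so this is only one plausible encoding.

```python
# Rough, hypothetical encoding of the cost-category benchmarks above;
# the report's assignments are qualitative, and these thresholds are
# invented for illustration.

def cost_category(new_staff_hired: int, capital_outlay_usd: float) -> str:
    if new_staff_hired == 0 and capital_outlay_usd < 50_000:
        return "low"     # train existing staff, purchase some supplies
    if new_staff_hired <= 3 and capital_outlay_usd < 500_000:
        return "medium"  # a few new staff, modest outlays or monitoring
    return "high"        # many new staff and/or large capital outlays

print(cost_category(0, 10_000))     # low
print(cost_category(1, 135_000))    # medium
print(cost_category(20, 600_000))   # high
```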

Implementation issues. This section summarizes how much we know about how to implement the PSP, and how difficult it is to implement. To approach the question of how much we know, we considered the available evidence about implementation, the existence of data about the influence of context, the degree to which a PSP has been implemented, and the presence of implementation tools such as written implementation materials or training manuals.

For the question of implementation difficulty, we used three categories: difficult, for PSPs that required large-scale organizational change; not difficult, for PSPs that required protocols for drugs or devices, such as those to reduce radiation exposure or to help prevent stress-related gastrointestinal bleeding; and moderate, for PSPs falling between the extremes.

Setting Priorities for Adoption of Patient Safety Practices

After obtaining critical input from our TEP about the dimensions and benchmarks used for summarizing the evidence, we next solicited their views on whether the evidence was sufficient at present to encourage wider adoption of some of the PSPs. Specifically, we asked our TEP the following questions:

We are asking for your global judgment of the priority for adoption of the PSPs that are included in our report. By “global judgment,” we mean that you will be making a summary judgment, which considers all the factors discussed in the chapters and listed in the summary table (the magnitude of the current safety problem [in terms of frequency and severity], the degree to which the PSP can improve safety outcomes, any potential for unintended consequences, what we know and how hard it is to implement the PSP, and the cost) plus your own experience as a researcher, provider, policymaker, or PSP developer. We have chosen a four-category scheme for this judgment:

THIS PSP SHOULD BE STRONGLY ENCOURAGED—We know enough now that if we were choosing a hospital (or nursing home or ambulatory care center, etc.) to get care from, we would choose a hospital (or nursing home or ambulatory care center, etc.) that was implementing this PSP over one that was not. Another way of thinking about this might be: unless the hospital (or nursing home or ambulatory care center, etc.) knows its outcomes for this safety problem are already excellent (or the safety problem is not relevant for the setting, such as failure-to-rescue in an ambulatory care center), then it ought to be implementing this PSP. We would expect over the next 3 years that most organizations would implement this PSP, even if it has substantial cost. “Most” does not have a precise definition, but it does not mean 51%, nor does it mean 95%. Let's say it means about 70-80%.

THIS PSP SHOULD BE ENCOURAGED—This is a PSP that we'd like to be implemented at the hospital (or nursing home or ambulatory care center, etc.) where we would receive our care, but there's just enough uncertainty about the effect, or concern about the cost, or some other factor, to keep us from putting it on the “strongly encouraged” list. We would expect that over the next 3 years many organizations would implement this PSP, and high cost might be a significant factor in an organization's decision.

THIS PSP IS STILL DEVELOPMENTAL—There's still more that needs to be known about this PSP before we should be encouraging health care providers to adopt it. Organizations implementing these PSPs should be encouraged to publish evaluations of their implementation and effectiveness in order to increase the evidence base for the PSP.

THIS PSP SHOULD BE DISCOURAGED—This PSP is one where we're pretty sure the cost or difficulty of implementing it is not worth the potential benefit, or even that the harms or potential for harms exceeds the evidence of benefit.

As in prior group judgment processes, we also provide a response option “I DO NOT WANT TO RATE THIS PSP” so that people are not forced to make decisions about PSPs they feel unprepared to assess, and so that we can distinguish between that decision and an inadvertently skipped PSP.

We received input from 19 of the 21 members of the TEP; the remaining two declined to rate the PSPs because they judged that making these kinds of clinical and policy decisions was not within their area of expertise. Based on the judgments of the panelists, we classified the PSPs according to the following rules (sketched in code after the list):

  • Strongly Encouraged: To be classified as “strongly encouraged,” a PSP had to receive a rating of “strongly encourage” or “encourage” from 75 percent or more of the technical experts, no TEP member could rate the PSP as “this PSP should be discouraged,” and a majority of the “strongly encourage/encourage” ratings had to be “strongly encourage.”
  • Encouraged: To be classified as “encouraged,” a PSP had to receive a rating of “strongly encourage” or “encourage” from 75 percent or more of the technical experts, and a majority of the “strongly encourage/encourage” ratings had to be “encourage.”
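
A minimal sketch of these two classification rules follows, assuming that abstentions (“I do not want to rate this PSP”) are excluded from the denominator; it is illustrative rather than the tally procedure actually used.

```python
# Illustrative encoding of the classification rules above. "votes" maps
# each rating option to its count; abstentions are assumed to be excluded
# before tallying. Threshold cases were re-reviewed by a TEP subset, as
# described in the text below.

def classify_psp(votes: dict[str, int]) -> str:
    total = sum(votes.values())
    se = votes.get("strongly_encourage", 0)
    e = votes.get("encourage", 0)
    if total == 0 or (se + e) / total < 0.75:
        return "no rating"
    if votes.get("discourage", 0) == 0 and se > e:
        return "strongly encouraged"
    return "encouraged"

print(classify_psp({"strongly_encourage": 12, "encourage": 4,
                    "developmental": 2, "discourage": 0}))  # strongly encouraged
```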

In any such process, the thresholds are somewhat arbitrary and can magnify the apparent impact of small differences in ratings. Therefore, we also assessed PSPs at the threshold between “strongly encourage” and “encourage” (two PSPs received equal numbers of votes for each category) and the threshold between “encourage” and no rating (four additional PSPs). For these additional ratings, we used a four-person subset of our TEP, the people actually responsible for policymaking or implementing PSPs. For each of our “threshold” PSPs, we judged that three of these four technical experts needed to either “encourage” or “strongly encourage” the PSP to retain its “strongly encouraged” or “encouraged” classification. This determination resulted in one PSP being down-rated from “strongly encouraged” to “encouraged,” and affirmed that all four PSPs that made it by one vote should be classified as “encouraged.”

Future Research Needs

To assess future research needs with respect to PSPs, we first devoted 2 hours of discussion time at the face-to-face meeting of the TEP to this topic. Two project team members recorded both general and specific topics for future research that the TEP discussed. From these notes we obtained themes or domains that we used to organize the future research needs. To these we added future research needs for specific PSPs suggested by the individual team members who reviewed the literature on those PSPs. We then sought input from the TEP regarding which future research needs were highest priority, and classified as high priority those topics receiving more than 50 percent support.

Peer and Public Review Process

The draft of this report was posted for public comment and sent to six peer reviewers and our TEP for review.

References

1.
Evidence-based Practice Center Systematic Review Protocol: Critical Analysis of the Evidence for Patient Safety Practices. Rockville, MD: Agency for Healthcare Research and Quality; Nov 9, 2011. http://effectivehealthcare.ahrq.gov/search-for-guides-reviews-and-reports/?pageaction=displayproduct&productid=840.
2.
Shojania KG, Duncan BW, McDonald KM, et al., editors. Making Health Care Safer: A Critical Analysis of Patient Safety Practices. Evidence Report/Technology Assessment No. 43. Rockville, MD: Agency for Healthcare Research and Quality; Jul, 2001. (Prepared by the University of California at San Francisco–Stanford Evidence-based Practice Center under Contract No. 290-97-0013.) AHRQ Publication No. 01-E058. www.effectivehealthcare.ahrq.gov. [PubMed: 11510252]
3.
Shekelle PG, Pronovost PJ, Wachter RM, et al. Assessing the Evidence for Context-Sensitive Effectiveness and Safety of Patient Safety Practices: Developing Criteria. Rockville, MD: Agency for Healthcare Research and Quality; Dec, 2010. (Prepared under Contract No. HHSA-290-2009-10001C.) AHRQ Publication No. 11-0006-EF. www.effectivehealthcare.ahrq.gov.
4.
Taylor SL, Dy S, Foy R, et al. What context features might be important determinants of the effectiveness of patient safety practice interventions? BMJ Qual Saf. 2011;20(7):611–7. [PubMed: 21617166]
5.
Whitlock EP, Lin JS, Chou R, et al. Using existing systematic reviews in complex systematic reviews. Ann Intern Med. 2008;148(10):776–82. [PubMed: 18490690]
6.
Shea BJ, Grimshaw JM, Wells GA, et al. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol. 2007;7:10. [PMC free article: PMC1810543] [PubMed: 17302989]
7.
Shojania KG, Sampson M, Ansari MT, et al. How quickly do systematic reviews go out of date? A survival analysis. Ann Intern Med. 2007;147(4):224–33. [PubMed: 17638714]
8.
Relevo R, Balshem H. Chapter 3. Finding Evidence for Comparing Medical Interventions. Rockville, MD: Agency for Healthcare Research and Quality; Apr, 2012. Methods Guide for Effectiveness and Comparative Effectiveness Reviews. AHRQ Publication No. 10(12)-EHC063-EF.
9.
Cochrane Effective Practice and Organisation of Care Group (EPOC). 2011. [cited July 20, 2012]. http://epoc.cochrane.org/sites/epoc.cochrane.org/files/uploads/EPOC%20Study%20Designs%20About.pdf.
10.
Owens DK, Lohr KN, Atkins D, et al. AHRQ series paper 5: grading the strength of a body of evidence when comparing medical interventions: Agency for Healthcare Research and Quality and the Effective Health-Care Program. J Clin Epidemiol. 2010;63(5):513–23. [PubMed: 19595577]
11.
Atkins D, Eccles M, Flottorp S, et al. Systems for grading the quality of evidence and the strength of recommendations I: critical appraisal of existing approaches. The GRADE Working Group. BMC Health Serv Res. 2004;4(1):38. [PMC free article: PMC545647] [PubMed: 15615589]
12.
Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group. [cited July 20, 2012]. www.gradeworkinggroup.org.