Timbie JW, Ringel JS, Fox DS, et al. Allocation of Scarce Resources During Mass Casualty Events. Rockville (MD): Agency for Healthcare Research and Quality (US); 2012 Jun. (Evidence Reports/Technology Assessments, No. 207.)

Methods

Overview

The methods for this systematic review broadly follow those outlined in the Agency for Healthcare Research and Quality (AHRQ) Methods Guide for Effectiveness and Comparative Effectiveness Reviews (available at www.effectivehealthcare.ahrq.gov/methodsguide.cfm). To the degree feasible, our methods and analyses were determined a priori. However, in the course of identifying studies we modified the comparative effectiveness review (CER) protocol to better align with the types of studies we encountered. In particular, we found few studies that compared strategies in a head-to-head fashion and therefore included all studies that had a valid control group. In addition, because we found little evidence supporting existing resource allocation strategies, we compiled a summary of evidence from studies that would otherwise have been excluded, either because they lacked comparison groups or because they represented consensus guidelines from clinical experts or policymakers.

Because of extreme heterogeneity in the types of resource allocation strategies we encountered and the small number of studies addressing any particular strategy, we did not consider meta-analysis or any other form of quantitative synthesis. Rather, we reviewed individual strategies within meaningful categories (discussed below), synthesizing findings to the extent that multiple studies addressed a similar topic.

In the remaining sections of this chapter we describe our conceptual framework; the PICOTS (populations, interventions, comparators, outcomes, timing, and settings) framework that guided our literature search strategy and served as our analytic framework; inclusion and exclusion criteria; study selection process; data extraction and quality assessment procedures; approach to data synthesis; and our assessments of the strength and applicability of the evidence. The contents of this section (and the larger report) are informed by the PRISMA checklist for reporting systematic reviews.19

Topic Refinement and Review Protocol

AHRQ’s Scientific Resource Center (SRC) and its cosponsoring agency, the Office of the Assistant Secretary for Preparedness and Response (ASPR), developed the research topic and its four Key Questions. Investigators at the Southern California Evidence-based Practice Center then refined the questions in consultation with a technical expert panel (TEP) appointed by AHRQ. The SRC approved the final version of the review protocol prior to the start of the review.

Technical Expert Panel and Expert Consultants

The TEP convened for this project included experts from the fields of public health, disaster preparedness and response, hospital medicine, transplant surgery, adult and pediatric emergency medicine, nursing, law, health care ethics, military medicine, risk communication, and public engagement.

We solicited additional input from two subject matter experts, neither of whom served on the TEP. Both were nationally recognized authorities in disaster medicine and health system preparedness, drawn from the private (academic) and public sectors, respectively. Both helped to refine our methodology and identify additional sources of studies for the review.

Conceptual Framework

The conceptual framework for our evidence review is depicted in Figure 2. It illustrates the broad categories of adaptive strategies developed and used by policymakers and health care providers to allocate scarce resources during mass casualty events (MCEs) and how the thinking and actions of both groups are modified by the outcomes of these strategies and by public opinion. As illustrated in the figure, policymakers and providers develop and implement strategies using an escalating series of contingent actions, based on the nature, magnitude, scope, and duration of the MCE.

The conceptual framework reflects the development and use by policymakers and providers of strategies that initially seek to maximize existing resources by: reducing less urgent demand for non-essential health care services; optimizing the use of existing resources; and augmenting available resources. When these strategies fail to meet patient care needs, providers and policymakers may implement “crisis standards of care.” Strategies used by policymakers and providers are the focus of Key Questions 1 and 2. Ultimately, the strategies implemented by policymakers and providers influence individual and population outcomes through the processes of care these strategies dictate and the resulting health and other outcomes. The outcomes of each strategy shape new strategies. Providers or policymakers may integrate preferences of the general public into the design of these strategies; this is the focus of Key Question 3. Providers may engage one another or policymakers to develop strategies; Key Question 4 assesses the comparative effectiveness of different engagement strategies.

Figure 2

Conceptual framework for allocating and managing scarce medical resources during a mass casualty event. KQ = Key Question

During surge conditions, policymakers and providers will initially use strategies that have the goal of maximizing existing resources by:

  • Managing or reducing less-urgent demand for health care services
  • Optimizing the use of existing resources
  • Augmenting available resources.

Many of these “resource maximization” strategies are aimed at extending the use of existing resources and managing them more efficiently in order to forestall serious shortages. If these measures prove inadequate, health care facilities may seek to augment existing resources by tapping stockpiles, invoking mutual aid agreements, and exercising other options.

The ultimate goal of these strategies is to preserve generally accepted standards of care. Specific examples of each type of strategy are included in our PICOTS framework discussed later in this chapter.

If these contingency measures are inadequate to meet demand, the institution may be forced to relax standards of care. The allocation or reallocation of resources under crisis conditions that may reduce the level of care delivered to individual patients is commonly referred to as “crisis standards of care.” Typically, these strategies are not employed until every effort to maximize available resources has been exhausted. Under crisis standards of care, institutions and providers may shift their approach to allocating resources from one designed to maximize the outcome of each patient to one that seeks to do the greatest good for the largest number of people. Aside from strictly utilitarian goals, crisis standards of care may also have other objectives, such as preserving the long-term functioning of society. During a prolonged MCE, the health care system may shift into and out of “crisis care” over time, as the event evolves and stocks of supplies, equipment, and personnel rise and fall. Thus, multiple strategies may be employed sequentially during an MCE depending on its magnitude and duration, rate of onset, available resources, and the capacity of the medical care system.

The resource allocation strategies deployed by policymakers and providers influence individual and population outcomes through both processes of care and health outcomes. Other outcomes, including the ethical and economic consequences of these strategies, may also be important to providers, policymakers, and the public.

The outcomes of each strategy shape the refinement or development of new strategies—indicated in Figure 2 by feedback loops (dashed lines). For example, outcomes of strategies, particularly adverse outcomes, might provoke strong reaction from the public. Providers or policymakers may then integrate the expressed preferences of the general public into new or updated strategies. Provider engagement activities might inform the strategies developed by policymakers, while, at the same time, the planning efforts of policymakers might also serve as a catalyst for providers to engage in efforts to develop strategies to respond to MCEs.

While this conceptual framework was developed to guide this review, key elements draw directly on the Letter Report published by the Institute of Medicine (IOM) Committee on Guidance for Establishing Standards of Care for Use in Disaster Situations.13

Analytic Framework

Given the heterogeneity in key aspects of study design across the four Key Questions, we elected to use the PICOTS framework as the analytic framework for the review. We present this framework separately for each Key Question below.

Search Strategy

Our search strategy leveraged existing reviews of the literature, including but not limited to those considered in the IOM Letter Report and Summary on Crisis Standards of Care13, 20 and the Community Planning Guide on Providing Mass Medical Care with Scarce Resources, developed by AHRQ and ASPR.16 These reviews helped identify relevant medical care resource management and allocation strategies in existence at the time these documents were published and summary information on the relevant outcomes of these strategies. Building on this work helped us focus our search.

Our literature search comprised four parts: (1) a formal search using multiple research databases, (2) a scan of the “grey” literature,a (3) a review of current State plans regarding the allocation of scarce resources, and (4) consultation with our TEP for any additional sources. In addition to using an expert, in-house research librarian with special skills in health information, we benefitted from the services of an expert librarian at the National Institutes of Health (NIH) who had previously conducted literature searches on this topic on behalf of ASPR.

Because of the cross-disciplinary nature of this topic, our formal literature search used research databases beyond those covering the biomedical literature. In consultation with our TEP, we selected seven academic databases: PubMed, Scopus, Embase, Cumulative Index to Nursing and Allied Health Literature (CINAHL), Global Health, Web of Science®, and the Cochrane Database of Systematic Reviews. We also searched online library catalogs, such as the National Library of Medicine’s LocatorPlus, to identify relevant books. Each search spanned the period from January 1990 through November 2011. We constructed search algorithms for each database (Appendix A), executed the search, downloaded the results into individual EndNote libraries, combined libraries from each search, and deleted duplicate references. Using the Web of Science® database, we also conducted “forward searches” to identify articles that cited key references.
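For illustration, the sketch below shows one way the pooled citation exports could be deduplicated by matching on a normalized title and publication year. It is a minimal Python example under assumed CSV column names and hypothetical file names; it does not reproduce the EndNote workflow actually used in the review.

    # Minimal sketch (not the actual EndNote workflow): remove duplicate
    # citations pooled from several database exports by matching on a
    # normalized title plus publication year.
    import csv
    import re

    def normalize_title(title: str) -> str:
        """Lowercase, strip punctuation, and collapse whitespace."""
        title = re.sub(r"[^a-z0-9 ]", " ", title.lower())
        return re.sub(r"\s+", " ", title).strip()

    def deduplicate(citation_files):
        """Pool citations from CSV exports (assumed columns: title, year)
        and keep the first occurrence of each title/year pair."""
        seen = set()
        unique = []
        for path in citation_files:
            with open(path, newline="", encoding="utf-8") as f:
                for row in csv.DictReader(f):
                    key = (normalize_title(row["title"]), row["year"])
                    if key not in seen:
                        seen.add(key)
                        unique.append(row)
        return unique

    # Example usage with hypothetical export files, one per database searched:
    # records = deduplicate(["pubmed.csv", "scopus.csv", "embase.csv"])
    # print(len(records), "unique citations")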

Our search of the grey literature was confined to the New York Academy of Medicine’s Grey Literature Report—one of the few existing databases that covers grey literature sources. We did not pursue additional searches of the grey literature (e.g., LexisNexis) out of concern that these sources might not provide the high-quality evidence needed to satisfy our inclusion and exclusion criteria.

Individual members of the TEP provided additional relevant studies, particularly those that were not published in the peer-reviewed literature. These studies included work funded by the Centers for Disease Control and Prevention and the Veterans Health Administration, professional society guidelines, and research produced by nongovernmental organizations such as Trust for America’s Health. We compiled a list of these sources and used scans of related Web sites to broaden our search.

An additional element of this project was a review of State plans for allocating scarce health care resources during MCEs. Officials at ASPR provided a sample of current State plans, representing 11 States and the territory of Guam, for analysis. Because there is no central national repository for this information, this list is unlikely to be exhaustive and may be regarded as a snapshot of current State-level efforts to define resource allocation principles and protocols.

Inclusion and Exclusion Criteria

Prior to designing our search strategy, we framed each of the four Key Questions along six dimensions that are commonly used in CERs: populations, interventions, comparators, outcomes, timing, and settings (PICOTS). This section describes these dimensions and the resulting inclusion and exclusion criteria for each of the Key Questions, as well as general inclusion and exclusion criteria.

General Criteria

  • Include articles found in the peer-reviewed and grey literatures, including but not limited to empirical studies, State and Federal government reports, State and Federal plans, peer-reviewed reports and papers by nongovernmental organizations, policy and procedure documents, and clinical care guidelines developed by specialty societies.
  • Include studies from both U.S. and international sources.
  • Include English- and non–English-language publications.
  • Include the following:
    • Randomized controlled trials.
    • Observational studies reporting data from real events, drills, exercises, or computer simulations.
    • Recommended strategies proposed by national provider groups and/or task forces or work groups convened by or comprising representatives of the Federal government.
    • Studies reporting the outcomes of systematic data collection efforts (e.g., focus groups) that document patients’ perspectives on resource allocation during MCEs.
    • Systematic reviews of strategies to allocate resources during an MCE.
  • Exclude studies published prior to 1990.
  • Exclude publications that present only conceptual frameworks.
  • Exclude nonsystematic reviews.
  • Exclude studies that do not consider these strategies in the context of an MCE.

Key Question 1. What Strategies Are Available to Policymakers To Optimize Allocation of Scarce Resources During MCEs?

PICOTS Framework for Key Question 1

Population

The target population includes policymakers charged with responsibility for developing and implementing strategies to optimize allocation of resources during MCEs. The affected population includes people who require medical treatment after an MCE. This group includes those who are physically injured and/or ill as a direct or indirect result of the MCE and those with unrelated, but urgent, medical needs (e.g., treatment for heart attacks, stroke, kidney failure, or cancer). We also address behavioral health needs in the setting of MCEs, including acute stress, grief, psychosis, and panic reactions.

Interventions

Strategies used by policymakers to maximize scarce resources. These include actions to manage or reduce less-urgent demand for health care services, optimize existing resources, or augment the supply of existing resources, and, when these actions are inadequate, to implement strategies consistent with crisis standards of care. Potential strategies included the following:

  • Strategies focused on single or multiple components of the health system, including emergency medical services and dispatch, public health, hospital-based care, renal dialysis, home care, primary care, palliative care, mental health, and provider payment policies.
  • Actions taken in advance to prepare for large-scale public health events that could trigger a huge surge in demand for medical and health care resources (e.g., stockpiling).
  • Adaptive strategies that ensure effective incident command, control, intelligence gathering, and communication systems, since these are often necessary channels to implement other strategies that optimally manage and allocate resources.
  • Actions taken to maximize resources to avoid the need to shift to crisis standards of care—for example, actions to substitute, conserve, adapt, and/or reuse critical resources, including reuse of otherwise disposable equipment and supplies, expanding scope of practice laws, and altered approaches that maximize delivery of care.13
  • Actions taken to reduce or manage less-urgent demand for health care services in order to avoid the need to adopt a crisis standard of care—for example, activating call centers or Web sites that provide information about when and where to seek treatment and how to adequately care for oneself or family members at home.
  • Strategies for making ethical allocation decisions when critical resources will otherwise be insufficient to meet the population’s needs (i.e., “crisis standards of care”).

Comparators

Where possible, we considered studies that compared an intervention with one or more alternative interventions. We also considered studies that compared an intervention with no intervention (i.e., no change in the approach to resource allocation or management). Studies that demonstrated the feasibility of a novel technique or technology without a comparison group were not included in the full CER, but were summarized in a separate section.

Outcomes

Included outcomes depended on the type of intervention and represented one or a combination of the following:

  • Process measures (e.g., number of patients treated, amount of resources obtained, ability to maintain conventional standards of care, avoidance of crisis standards of care)
  • Health outcomes
    • Favorable (e.g., decreased mortality, decreased physical and/or psychological morbidity)
    • Unfavorable (e.g., adverse events, such as preventable morbidity and/or mortality)
  • Other outcomes (e.g., ethical, legal, financial consequences; public perceptions of the intervention, public acceptance of or compliance with the intervention)

Timing

We confined our review to studies addressing preparedness and response to MCEs. We also considered strategies that address the triggers or timing for returning to normal operations. We only considered strategies specifically addressing long-term recovery from MCEs (e.g., community resilience) if these strategies were implemented during the course of an MCE, and not subsequent to an MCE.

Settings

All settings in which patient care might be directed/managed and delivered, including but not limited to prehospital triage locations (e.g., on-scene, in transport), emergency department triage and care, inpatient settings (e.g., operating room, intensive care unit, ward), community health centers, urgent care facilities, long-term care institutions, primary and specialty care practices, skilled nursing facilities, home care agencies, and alternate care facilities.

Inclusion and Exclusion Criteria

  • Include studies that describe the processes and/or outcomes of strategies used by policymakers or studies that result from the strategic direction provided by policymakers to maximize and allocate scarce resources during an MCE. (See the Definitions section for descriptions of policymakers, scarce resources, and MCEs.)
  • Include if the strategy has been prospectively tested in a real event or tested in the context of an exercise, drill, or computer simulation.
  • Include if the strategy arose from a documented after-action report of a real event as long as the study describes a specific, implementable strategy and systematically reports the outcomes of the strategy, whether or not a comparison group was used.
  • Include if the strategy has not been tested but rather proposed by a national provider organization or a task force convened by the Federal government. Studies must describe the method by which consensus was achieved by the committee, panel, or work group, which may include, but is not limited to, the Delphi process.
  • Exclude if the study does not describe a specific, implementable strategy.
  • Exclude if the strategy does not relate to scarce resources.
  • Exclude if the study does not report the outcomes of a strategy, including studies that report only “lessons learned” from a real event, drill, or exercise.
  • Exclude if the proposed strategy is not from a national provider organization or a task force convened by the Federal government or does not describe the consensus development process.

Key Question 2. What Strategies Are Available to Providers To Optimize Allocation of Scarce Resources During MCEs?

PICOTS Framework for Key Question 2

Population

The target population includes health care providers who hold responsibility for allocating scarce resources during MCEs. The affected population includes people who require medical treatment after an MCE. This group includes those who are physically injured and/or ill as a direct or indirect result of the MCE and those with unrelated, but urgent, medical needs (e.g., treatment for heart attacks, stroke, kidney failure, or cancer). We also address behavioral health needs in the setting of MCEs, including acute stress, grief, psychosis, and panic reactions.

Interventions

Strategies used by providers to maximize scarce resources. These include actions to manage or reduce less-urgent demand for health care services, optimize existing resources, or augment the supply of existing resources, and, when these actions are inadequate, to implement strategies consistent with crisis standards of care. Potential strategies included the following:

  • Strategies focused on single or multiple components of the health system, including emergency medical services and dispatch, public health, hospital-based care, renal dialysis, home care, primary care, palliative care, mental health, and provider reimbursement.
  • Actions taken in advance to prepare for large-scale public health events that could trigger a huge surge in demand for medical and health care resources (e.g., training staff, exercising plans, stockpiling critical supplies and equipment).
  • Adaptive strategies that ensure effective incident command and communication systems, since these are often necessary channels to implement other strategies that optimally manage and allocate resources.
  • Actions taken to maximize resources in order to avoid the need to adopt a crisis standard of care; for example, actions to substitute, conserve, adapt, and/or reuse critical resources, including reuse of otherwise disposable equipment or supplies, reallocation of staff from nonclinical to clinical functions (i.e., expanding scope of practice), and altered approaches to using staff to deliver care.
  • Actions taken to reduce or manage less-urgent demand for health care services in order to avoid the need to adopt a crisis standard of care; for example, activating call centers or Web sites that provide information about when and where to seek treatment and how to adequately care for oneself or family members at home.
  • Strategies for making allocation decisions when critical resources will otherwise be insufficient to meet the population’s needs (i.e., “crisis standards of care”).

Comparators

Where possible, we considered studies that compared an intervention with one or more alternative interventions. We also considered studies that compared an intervention with no intervention (i.e., no change in the approach to resource allocation or management). Studies that demonstrated the feasibility of a novel technique or technology without a comparison group were not included in the full CER, but were summarized in a separate section.

Outcomes

Included outcomes represented one or a combination of the following:

  • Process measures (e.g., number of patients treated, amount of resources obtained, ability to maintain conventional standards of care, avoidance of crisis standards of care)
  • Health outcomes
    • Favorable (e.g., decreased mortality, decreased physical and/or psychological morbidity)
    • Unfavorable (e.g., adverse events, such as preventable morbidity and/or mortality)
  • Other outcomes (e.g., ethical, legal, financial consequences, public perceptions of the intervention, public acceptance of or compliance with the intervention)

Timing

We confined the review to studies addressing preparedness and response to MCEs. We considered strategies that address the triggers or timing for returning to normal operations. We only considered strategies specifically addressing long-term recovery from MCEs (e.g., community resilience) if these strategies were implemented during the course of an MCE, and not subsequent to an MCE.

Settings

All settings in which patient care might be delivered, including but not limited to prehospital triage locations (e.g., on-scene, in transport), emergency department triage and care, inpatient settings (e.g., operating room, intensive care unit, ward), community health centers, urgent care facilities, long-term care institutions, primary and specialty care practices, skilled nursing facilities, home care agencies, and alternate care facilities.

Inclusion and Exclusion Criteria

  • Include studies that describe the processes and/or outcomes of strategies used by providers to maximize or allocate scarce resources during an MCE. (See the definitions section for detailed descriptions of providers, scarce resources, and MCEs.)
  • Include if the strategy has been prospectively tested in a real event or tested in the context of an exercise, drill, or computer simulation.
  • Include if the strategy arose from a documented after-action report of a real event as long as the study describes a specific, implementable strategy and systematically reports the outcomes of the strategy, whether or not a comparison group was used.
  • Include if the strategy has not been tested but rather proposed by a national provider organization or a task force convened by the Federal government. Studies must describe the method by which consensus was achieved by the committee, panel, or work group, which may include, but is not limited to, the Delphi process.
  • Exclude if the study does not describe a specific, implementable strategy.
  • Exclude if the strategy does not relate to scarce resources.
  • Exclude if the study does not report the outcomes of a strategy, including studies that report only “lessons learned” from a real event, drill, or exercise.
  • Exclude if the proposed strategy is not from a national provider organization or a task force convened by the Federal government or does not describe the consensus development process.
  • Exclude strategies that involve training providers to allocate resources if the study reports only participants’ perceptions of improvement and/or satisfaction with the training program.

Key Question 3. What Are the Public’s Concerns Regarding Resource Allocation Strategies?

PICOTS Framework for Key Question 3

Population

The general public, with special attention paid to members of at-risk populations, including, for example, children and elders, individuals in minority groups, and individuals with special medical needs.

Interventions

Not applicable. This Key Question focuses on public opinions, perceptions, values, and norms regarding the development and implementation of strategies to allocate and manage scarce medical resources during an MCE.

Comparators

Studies may compare outcomes from a single setting under conventional standards of care versus outcomes under constrained or crisis standards of care. In addition, studies may compare outcomes of the same resource allocation strategy among individuals or communities with different characteristics, or they may compare outcomes of distinct resource allocation strategies in communities with similar characteristics.

Outcomes

Public opinions and/or perceptions of key issues related to the allocation and management of scarce medical resources during MCEs, including but not limited to values, priorities, and ethics.

Timing

We confined our review to studies addressing preparedness and response to MCEs. We also considered strategies that addressed the triggers or timing for returning to normal operations. We only considered strategies specifically addressing long-term recovery from MCEs (e.g., community resilience) if these strategies were implemented during the course of an MCE, and not subsequent to an MCE.

Settings

No exclusions.

Inclusion and Exclusion Criteria

  • Include studies that use a systematic data collection method (e.g., surveys, focus groups) to describe public opinion regarding the implementation of strategies for allocating scarce resources during an MCE.
  • Studies can consider the general population or subpopulations of interest, such as minority groups and other at-risk populations.
  • Exclude studies that do not report public opinion directly, such as those reporting providers’ or experts’ perceptions of public opinion.

Key Question 4. What Methods Are Available To Engage Providers in Developing Strategies To Optimize Resource Allocation During MCEs?

PICOTS Framework for Key Question 4

Population

Health care providers, including executive and administrative personnel, chief medical officers, and other health care providers who lead or staff health care facilities or facilities that provide auxiliary services (such as laboratories or pharmacy departments), as well as professional associations, all regardless of race, gender, ethnicity, religion, sexual orientation, or disability.

Intervention

Strategies for engaging providers in discussions regarding the allocation and management of scarce resources. Strategies for engaging providers include a wide range of activities intended to accomplish the following:

  • Contact and connect with providers (e.g., face-to-face, electronically, through provider associations).
  • Elicit dialogue and discussion with and among providers (e.g., through workshops, discussion groups, or tabletop exercises to develop a plan or protocol related to decision making during “crisis care” situations).
  • Encourage provider participation in collaborative activities (e.g., voluntary cooperative planning).

Comparators

Where possible, we considered studies that compared an engagement strategy to one or more alternative strategies. We also considered studies that used baseline assessments as the comparator. For example, studies might compare outcomes (including knowledge, attitudes, and self-reported or observed performance) over time (e.g., before and one or more times after an intervention). Other studies might not have used a comparator but, rather, assessed the impact of provider engagement on collaborative efforts at the local/regional, State, and national levels.

Outcomes

We considered any of the following outcomes:

  • Process outcomes (e.g., number of providers reached, provider satisfaction with the process)
  • Provider outcomes (e.g., changes in knowledge, attitudes, and self-reported or observed behavior)
  • Local/regional, State, national outcomes (e.g., increased provider participation in Multi-Agency Coordination [MAC] groups)

Timing

We confined our review to studies addressing preparedness and response to MCEs. We considered strategies that addressed the triggers or timing for returning to normal operations. We only considered strategies specifically addressing long-term recovery from MCEs (e.g., community resilience) if these strategies were implemented during the course of an MCE, and not subsequent to an MCE.

Settings

No exclusions.

Inclusion and Exclusion Criteria

  • Include studies that describe processes and outcomes of strategies used to engage providers in the development of strategies to allocate scarce resources during MCEs; for example, planning efforts to develop crisis standards of care protocols and the use of tabletop exercises to simulate medical decision making during “crisis care” situations.
  • Include if the description of provider engagement reflects a replicable, systematic planning process that resulted in a concrete plan, protocol, strategy, or framework.
  • Include studies that describe engagement strategies for providers exclusively or that involve multiple stakeholders.
  • Include studies that describe engagement strategies locally (e.g., within a single medical center), as well as strategies for regional or nationwide engagement.
  • Exclude studies not related to provider engagement and surge capacity.
  • Exclude studies that involve educational interventions only and do not describe engagement in the development of educational programs.

Study Selection

After conducting the literature search, two researchers screened all titles to eliminate citations that were clearly unrelated to the topic. Next, abstracts of each study were independently reviewed by two researchers for inclusion or exclusion according to predetermined criteria. If no abstract was available, the full text was reviewed. Reasons for study exclusion at the abstract phase included the following: (1) failure to include a quantitative or qualitative analysis (e.g., studies reporting “lessons learned” only); (2) failure to address an MCE context (e.g., studies involving organ transplantation); and (3) failure to address a Key Question. In cases of disagreement between the reviewers, an independent reviewer was asked to review the abstract and reconcile the difference.

In the next stage, two researchers independently reviewed full-text articles and excluded those that: (1) failed to address a Key Question; (2) included consensus recommendations (for Key Questions 1, 2, and 4) that did not meet our evidence threshold; or (3) related to training exercises but did not report changes in actual performance outcomes. Disagreement between the reviewers about whether a study should be included was resolved by consensus. We maintained a list of studies that were excluded at the full-text review stage with the reason(s) for exclusion (Appendix D).
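The screening rule described above reduces to a simple decision procedure: a citation advances when both reviewers agree to include it, and disagreements go to a third reviewer (at the abstract stage) or to reviewer consensus (at the full-text stage). The short Python sketch below is illustrative only; the decision codes and the adjudicate callback are assumptions, not part of the review's actual tooling.

    # Illustrative sketch of the dual-review decision rule described above.
    # The "include"/"exclude" codes and the adjudicate() callback are hypothetical.

    def screen_citation(reviewer1: str, reviewer2: str, adjudicate) -> str:
        """Return the screening outcome for one citation.

        reviewer1, reviewer2: independent decisions, "include" or "exclude".
        adjudicate: callable invoked only on disagreement (a third reviewer
        at the abstract stage, or reviewer consensus at the full-text stage).
        """
        if reviewer1 == reviewer2:
            return reviewer1
        return adjudicate()

    # Example: the two reviewers disagree, so the third reviewer decides.
    decision = screen_citation("include", "exclude", adjudicate=lambda: "include")
    print(decision)  # -> include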

Data Extraction

We tailored our data extraction approach to each Key Question. Because of the large volume of studies describing tested strategies relevant to Key Question 1 and especially Key Question 2, we developed an electronic data collection form using DistillerSR (Appendix B) to capture the necessary data elements. For Key Question 3 and for our analysis of State plans, data were abstracted directly into spreadsheets because of the relatively small number of data elements required for each review. For Key Question 4, we used a paper-based data collection form (Appendix B). Although the number and type of data elements varied by Key Question, data elements generally included the following: study design, geographic location, type of MCE, description of the strategy, outcomes reported, and implementation facilitators and/or barriers. For Key Question 4, we also recorded the types of stakeholders participating in the engagement strategy.
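To make the shape of these data elements concrete, the sketch below models one extraction record in Python. The field names and the example values are illustrative assumptions only and do not mirror the actual DistillerSR form in Appendix B.

    # Illustrative data model for one extracted study; field names are
    # assumptions and do not reproduce the actual DistillerSR form (Appendix B).
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class ExtractionRecord:
        study_id: str                       # citation identifier
        key_question: int                   # 1-4
        study_design: str                   # e.g., "observational", "simulation"
        geographic_location: str
        mce_type: str                       # e.g., "pandemic influenza"
        strategy_description: str
        outcomes_reported: List[str] = field(default_factory=list)
        facilitators: List[str] = field(default_factory=list)
        barriers: List[str] = field(default_factory=list)
        stakeholders: Optional[List[str]] = None  # captured for Key Question 4 only

    # Hypothetical example record:
    record = ExtractionRecord(
        study_id="example-2009",
        key_question=2,
        study_design="observational",
        geographic_location="United States",
        mce_type="pandemic influenza",
        strategy_description="Reuse of otherwise disposable equipment",
        outcomes_reported=["number of patients treated"],
    )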

A total of nine reviewers, all of whom received formal orientation to the review process, performed data extraction. At least two reviewers abstracted each article that met one or more inclusion criteria. One reviewer took the lead in reviewing the article, and the second reviewer fact-checked the abstraction to ensure consistency and accuracy of coding. Differences were resolved by consultation and, when necessary, adjudication.

Abstracted data entered into DistillerSR and spreadsheets were then edited and formatted to generate evidence tables (Appendix C).

Quality (Risk of Bias) Assessment of Individual Studies

Given the relative rarity and unpredictability of MCEs, we anticipated that few, if any, relevant studies would use a randomized controlled study design, for which validated instruments to assess methodological quality exist and are widely used.21 Given the diversity in study designs and outcomes we expected to encounter, we determined that a more generic quality rating system would be more feasible and would allow greater comparability across studies. After conducting an environmental scan of existing rubrics and finding that no single scale seemed appropriate for our topic, we developed our own assessment scale. Our instrument combined two items drawn from the quality assessment scale of the Substance Abuse and Mental Health Services Administration’s National Registry of Evidence-based Programs and Practices with items from two other scales commonly used to appraise the quality of qualitative research.22-24 Appendix B contains all of our data collection instruments, including quality scales.

We used this composite scale to appraise the quality of studies addressing Key Questions 1, 2, and 4. The five individual items assessed whether or not (1) the level of detail used to describe the resource allocation strategy was adequate, (2) data collection was systematic (and if so, whether it was retrospective or prospective), (3) fidelity (defined as the degree to which the strategy was implemented consistently) was measured or could be inferred from the data provided, (4) generalizability of the findings was assessed, and (5) potential confounders to the strategy’s effectiveness were discussed. For Key Question 4, we excluded the item addressing confounders. For most items, reviewers could allocate up to two points. All quality scores are presented as the total number of points allocated in reference to the total number of points possible (e.g., “6 of 8 points”). Scoring each quality item may have entailed some degree of subjectivity; however, the pair of reviewers for each study reconciled any differences in scores for each item.
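A worked example of how the composite score resolves to a "points of points possible" rating follows, under the simplifying assumption that every item is worth up to two points (the report states this held for most, but not all, items) and that the confounders item is dropped for Key Question 4. The item keys are shorthand rather than the instrument's actual wording (Appendix B).

    # Minimal sketch of the composite quality score, assuming every item is
    # worth up to 2 points (the report says "most items") and that the
    # confounders item is omitted for Key Question 4. Item keys are shorthand.
    ITEMS = ["detail", "data_collection", "fidelity", "generalizability", "confounders"]
    MAX_POINTS_PER_ITEM = 2

    def quality_score(ratings: dict, key_question: int) -> str:
        """Return a score such as '6 of 10 points' for one study."""
        items = [i for i in ITEMS if not (key_question == 4 and i == "confounders")]
        earned = sum(ratings.get(i, 0) for i in items)
        possible = MAX_POINTS_PER_ITEM * len(items)
        return f"{earned} of {possible} points"

    # Hypothetical reconciled ratings for a Key Question 2 study:
    print(quality_score(
        {"detail": 2, "data_collection": 1, "fidelity": 1,
         "generalizability": 1, "confounders": 1},
        key_question=2,
    ))  # -> "6 of 10 points"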

For two study designs, computer simulations and systematic reviews, we deviated from this approach: for computer simulations, we believed more tailored quality items were appropriate, and for systematic reviews, a validated scale was already available. In our environmental scan, we identified one study25 that offered recommendations for modeling disaster responses in public health. We identified several key aspects of model quality from this study and adapted our quality instrument accordingly. Specifically, we eliminated the data collection and fidelity items and replaced them with two items that assessed the degree to which the authors justified their model assumptions and/or data inputs and the degree to which the authors performed robust sensitivity analyses (if at all). For systematic reviews, we used the AMSTAR instrument,26 an 11-item scale that measures such features as whether a comprehensive literature search was performed, whether duplicate study selection and data extraction were used, and whether the scientific quality of the included studies was assessed.

For Key Question 3, we elected to develop our own quality scale that reflected key differences in methodology across the small number of included studies. Using seven binary items, our scale assessed whether or not studies used a systematic data collection process, described the subject recruitment methodology in detail, recruited a representative sample, disclosed funding sources or sponsors, discussed limitations and generalizability, and permitted the results to be evaluated by an independent third party.

Data Synthesis

We could not quantitatively synthesize data abstracted from the set of included studies because individual studies rarely addressed similar resource allocation strategies. Moreover, strategies that were assessed in multiple studies typically differed widely in their context and outcomes. Accordingly, for Key Questions 1 and 2, we summarized the outcomes of each strategy qualitatively, using the four broad categories of adaptive strategies described in our conceptual framework to synthesize our findings. To the extent that clusters of related strategies emerged within these four broad categories, we reported our findings at the subcategory level. Wherever possible, we described the degree of consistency in the magnitude and direction of outcomes for the most relevant outcomes. We also highlighted differences in populations, context, and methodology that we considered important in interpreting each set of results. Most of the information we present in our synthesis addresses key dimensions of the subsequent strength of evidence ratings and assessment of applicability.

Because the included studies for Key Question 3 addressed a narrow range of topics, we synthesized the evidence from these studies as a single set. For Key Question 4, we described engagement strategies that were led by providers separately from those that were led or co-led by policymakers. However, as described below, we summarized the strength of evidence across both groups of studies because the nature of the strategies did not differ systematically between the two groups.

For the subset of studies that we included in the review that lacked comparison groups, we provide a brief summary of the individual strategies described by each. We include these summaries in a separate section from those studies that underwent our full review. Finally, we include a qualitative summary of proposed strategies that have been included in consensus guidelines. We highlight the key recommendations from each provider organization or task force and emphasize differences in recommendations where they exist.

Strength of the Evidence

We used the approach outlined in the Methods Guide for Effectiveness and Comparative Effectiveness Reviews to grade the strength of evidence addressing each Key Question.27 This approach requires assessment in four domains: risk of bias, consistency, directness, and precision. Risk of bias refers to the internal validity of each study and relies heavily on study design and the aggregate quality of the included studies; we scored risk of bias as high, medium, or low. Consistency is a measure of the extent to which effect sizes for the set of studies are similar in size and direction; we designated evidence in this category as consistent or inconsistent. Directness refers to the degree to which the strategies have an impact on health outcomes rather than intermediate outcomes; in this domain we rated evidence as direct or indirect. Finally, precision refers to the level of certainty surrounding the set of effect estimates; for this domain, we rated evidence as precise or imprecise. After making assessments in the four domains, we graded the strength of the evidence on a four-point scale (i.e., high, moderate, low, or insufficient). As defined by Owens et al., “high” strength of evidence indicates high confidence that the evidence reflects the true effect, whereas “insufficient” strength of evidence indicates that evidence either is unavailable or does not permit the formulation of conclusions.27
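These domain assessments can be pictured as a small structured record for each body of evidence, as in the Python sketch below. Because the overall grade was assigned by qualitative judgment rather than by formula, the collapse rule shown here is a purely hypothetical illustration, not the method used in this review.

    # Illustrative record of the four strength-of-evidence domains. The report
    # assigned overall grades (high, moderate, low, insufficient) by qualitative
    # judgment; the collapse rule below is a hypothetical illustration only.
    from dataclasses import dataclass

    @dataclass
    class EvidenceDomains:
        risk_of_bias: str   # "low", "medium", or "high"
        consistency: str    # "consistent" or "inconsistent"
        directness: str     # "direct" or "indirect"
        precision: str      # "precise" or "imprecise"

    def illustrative_grade(d: EvidenceDomains, any_evidence: bool = True) -> str:
        """Hypothetical mapping from domain ratings to an overall grade."""
        if not any_evidence:
            return "insufficient"
        favorable = sum([
            d.risk_of_bias == "low",
            d.consistency == "consistent",
            d.directness == "direct",
            d.precision == "precise",
        ])
        return {4: "high", 3: "moderate"}.get(favorable, "low")

    print(illustrative_grade(EvidenceDomains("medium", "consistent", "indirect", "imprecise")))
    # -> "low"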

For Key Questions 1 and 2 we rated the strength of evidence within categories (or subcategories) depending on the number of studies available. For Key Questions 3 and 4, we rated the strength of evidence across all studies. For Key Question 3, the paucity of studies precluded analysis by methodology (stakeholder forums, interviews, or surveys). For Key Question 4, the vast majority of studies assessed engagement methods that were designed to develop strategies in multiple categories, so category-specific ratings were less useful.

A single reviewer graded the strength of evidence for each domain, and a second reviewer then checked each grade. Differences were reconciled through discussion. We determined overall strength of evidence grades in an analogous manner, using a qualitative assessment of the scores for each domain. We summarize the strength of evidence grades in the Results section for each Key Question.

Applicability

In the course of our team’s work, we considered the applicability of the evidence presented by each article. In seeking to develop MCE resource allocation strategies, providers and policymakers will want to know the extent to which outcomes realized in the studies we reviewed are generalizable to the populations, practice settings, and disaster contexts that are most relevant to them. We conducted qualitative assessments28 of the applicability of evidence for each Key Question, both by applying the PICOTS framework for each Key Question (see Key Questions, above) and by abstracting individual items pertaining to various dimensions of applicability. For example, we noted whether strategies were applicable to specific scales of events (e.g., local or regional in scope), whether the effectiveness of the strategy appeared to depend on factors unique to the jurisdiction involved (in terms of leadership required, populations served, stakeholders included, or availability of resources), the degree to which outcomes were relevant to patients, and the extent to which the strategy was “ready for use.” For strategies tested outside the United States, we also assessed the degree to which the strategy was applicable in the United States. One reviewer assessed the applicability of the evidence, while a second reviewer verified the appropriateness of the assessments. Areas of disagreement were resolved through discussion and, if necessary, adjudication.

Peer Review and Public Commentary

Experts from relevant fields and individuals representing stakeholder and user communities were invited to provide external peer review of this systematic review. The AHRQ Effective Health Care Program SRC at Oregon Health & Science University oversaw the peer review process. Peer reviewers commented on the content, structure, and format of the evidence report and were encouraged to suggest any relevant studies that had been missed. AHRQ and SRC staff also reviewed the report.

The SRC placed the draft report on the AHRQ Web site (http://effectivehealthcare.ahrq.gov/) for public comment and compiled all comments.

Each member of our TEP was invited to provide written comments on the draft report. We compiled all comments and addressed each comment individually, making revisions as appropriate. All changes were documented in a “disposition of comments report” that will be made available three months after AHRQ posts the final review on its Web site.

Footnotes

a. The grey literature comprises evidence that "is produced on all levels of government, academics, business and industry in print and electronic formats, but which is not controlled by commercial publishers" (Grey Literature Network Service, 1999).21 Grey literature sources can include abstracts presented at conferences, unpublished data, government documents, or manufacturer information and can be difficult to locate because these sources are not systematically identified, stored, or indexed (Relevo and Balshem, 2011).22
