6 Establishing Metrics to Assess Risk and Capabilities

Metrics to identify areas at high risk for a mass casualty incident (MCI) in rural settings, transportation related or otherwise, could provide preparedness planners with insights available from no other source, whether those metrics are developed by public or private public health organizations. How such metrics are developed and by whom, as well as current sources of measurement information, are discussed below as potentially central components of an integrated health response. (A summary of suggested tools and resources is provided in Box 6-1.)


BOX 6-1

Suggested Resources That Could Be Leveraged for Assessment of Transportation-Related Road Risks and Response Capabilities

  • NASEMSO Model Inventory of Emergency Care Elements (MIECE): a method of expressing and comparing risk based on resources

ASSESSMENT MODELS

Dia Gainor, session chair and director of the Idaho State Emergency Medical Services (EMS), detailed two ongoing projects by the National Association of State EMS Officials (NASEMSO) designed to develop mechanisms to assess risk along rural roadways. Such data are currently lacking and are essential to planning activities, Gainor commented.

The Model Inventory of Emergency Care Elements (MIECE) seeks to answer the question, “If a mass casualty incident happened on this stretch of highway, what resources could be expected to be available?” (Martin, 2010). The goal is to develop a method of expressing and comparing risk based on resources. Though not yet adopted by states, MIECE plans to collect data regarding ground and helicopter EMS services, hospital locations, and trauma center designations, among other resources, for each segment of highway, allowing public health officials to anticipate and correct system weaknesses before an MCI occurs. The data collection necessary to populate MIECE will better identify metrics associated with MCI preparedness and response, as well as the channels by which and the personnel to whom those data are available. The primary users of this information would be the state EMS offices. Gainor noted that the data are not intended to generate automatic solutions (i.e., a low count of helicopters in an area does not mean that more helicopters are the answer); rather, the information is meant to give decision makers a better basis for developing the policies and plans needed to mitigate the associated risk.
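The idea of cataloging resources per highway segment and flagging weaknesses can be illustrated with a minimal sketch. The field names (segment IDs, distance thresholds) are hypothetical, not the actual MIECE schema:

```python
from dataclasses import dataclass

@dataclass
class SegmentResources:
    """Hypothetical resource inventory for one highway segment (illustrative only)."""
    segment_id: str
    ground_ems_units: int
    helicopter_bases_within_30min: int
    nearest_trauma_center_miles: float

def flag_weak_segments(segments, max_trauma_distance=50.0):
    """Return segments whose nearest trauma center exceeds a planning threshold."""
    return [s.segment_id for s in segments
            if s.nearest_trauma_center_miles > max_trauma_distance]

inventory = [
    SegmentResources("I-84 MP 100-110", 2, 0, 65.0),
    SegmentResources("I-84 MP 110-120", 3, 1, 20.0),
]
print(flag_weak_segments(inventory))  # → ['I-84 MP 100-110']
```

As Gainor cautioned, a flag like this identifies a weakness for decision makers to examine; it does not by itself dictate the remedy.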

A related NASEMSO project is the Event Response Readiness Assessment (ERRA), which allows localities, regions, and states to self-assess their emergency response capacity against specific benchmark indicators, including, but not limited to, the ability to implement incident command systems (ICS) in accordance with national guidelines; the extent of regional and state, public and private, integration in a broad response plan; and multidisciplinary participation in mass casualty exercises (Martin, 2010).

The commonality of ERRA's metrics allows counties to compare themselves with their neighbors, with the intent of fostering greater system integration and stakeholder collaboration. The self-assessment nature of the tool lends universality to its implementation, while its online interactive workshops can provide greater guidance and context for local strategies aimed at improving MCI response.

DATA SOURCES FOR DEVELOPING METRICS TO ASSESS RISKS AND CAPABILITIES

What Departments of Transportation Cannot, and Can, Measure

In 2009 there were about 34,000 highway fatalities, said Kelly Hardy, safety program manager with the American Association of State Highway and Transportation Officials (AASHTO). One of the goals of AASHTO is for all state departments of transportation (DOTs) to work to reduce highway fatalities by half over the next 20 years. This cannot be accomplished simply by making roadway improvements, she said.

Data collection on fatal accidents is a challenge. Approximately 60 percent of those fatalities occurred on rural roads, and around half of these on roads that are not owned or maintained by the state DOT. The cities, counties, townships, and other jurisdictions that maintain these roads collect their own crash data and know what the roads look like and what kinds of improvements might be needed.

One issue for data collection is location-specific coding of crashes; where exactly are these crashes occurring and what does that road look like where that crash happened? An issue specifically related to MCIs is traffic volume. Another question is how often specific types of vehicles travel on that road (e.g., buses, cars).

Hardy referred participants to the Highway Safety Manual, recently published by AASHTO following 10 years of research by the Transportation Research Board of the National Academies and the Federal Highway Administration (AASHTO, 2010). The manual provides models for predicting the impact of infrastructure changes; for example, how would the installation of rumble strips, or of a traffic signal at an unsignalized intersection, affect crashes in that area?
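The manual's predictive method combines a safety performance function (SPF), driven by traffic volume and segment length, with crash modification factors (CMFs) for specific countermeasures. The sketch below uses the commonly cited base SPF coefficient for rural two-lane segments and an illustrative CMF value; both should be verified against the manual and calibrated locally before any real use:

```python
import math

def predicted_crashes(aadt, length_mi, cmfs=(), calibration=1.0):
    """Sketch of the HSM predictive method:
    N = SPF(AADT, L) x CMF1 x CMF2 x ... x C.
    The SPF form and -0.312 coefficient follow the published base model for
    rural two-lane segments; the calibration factor C adapts it to local data."""
    n = aadt * length_mi * 365 * 1e-6 * math.exp(-0.312)
    for cmf in cmfs:
        n *= cmf
    return n * calibration

base = predicted_crashes(aadt=5000, length_mi=2.0)          # no treatments
with_rumble = predicted_crashes(5000, 2.0, cmfs=(0.87,))    # illustrative CMF
```

A CMF below 1.0 models a countermeasure expected to reduce crashes, so `with_rumble` comes out lower than `base`.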

The AAA Foundation for Traffic Safety's U.S. Road Assessment Program, a national program that is not part of the federal DOT, compiles data on crashes, fatalities, and serious injuries to develop risk maps of roadways. A participant cited the program to bolster the suggestion that partners for metrics modeling should also be sought outside the public sector.

The Knowledge-Skills-Abilities Triad

By definition, an MCI is an event that exceeds existing resource capacity. Baseline resources are helpful to know, but what matters is the resources that can be brought in when an individual system is stretched beyond its capacity. The issue is how surge capacity can be managed through data systems. Greg Mears suggested that the “knowledge, skills, and abilities” approach to assessing a job candidate can be applied to surge capacity: knowledge is what responders know about the scenario and how they can bring a resource to bear; skills are what they have been trained to do, which may be technically oriented; and abilities are often based on resources and equipment.

Mears, medical director for the North Carolina office of EMS, explained that in North Carolina systems are pieced together, such as by matching individuals who know something relevant but have no equipment with which to apply that skill, with equipment and resources from somewhere else, to create functional units. In a rural area, the ability to piece together components into a whole in a timely fashion is just as important as identifying a whole component. This is something that data systems can help with. For a regionalized approach, it is important to identify where those resources are. Again, communication is key.

Mears pointed out that the dispatch center is one location where a significant amount of information can exist. Through event logs and location information, an understanding of the service area can emerge and may provide some indication of where events are more likely to occur. Mass casualty events are not necessarily unusual, he said. They are often common events on a very large scale. Mears recommended looking for areas where vehicle crashes commonly happen and anticipating how an incident could grow out of proportion if the right conditions came together. EMS data systems and healthcare systems are helpful sources of data from an evaluation perspective, but there is not much information they can provide that is relevant during the active response to the incident.

The SMARTT1 online system in North Carolina is one approach to managing the “knowledge, skills, and abilities” available across the system, Mears said. Through text messaging and other communications, resources can be very quickly located and then coordinated through the system.

Mears recommended the development of regional or state dispatch data systems. The information technology exists for a dispatch registry to operate in real time, he said. Such a system would provide baseline activity information across fire, EMS, and law enforcement, as well as some predictive measures of where there is a risk of events.
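The kind of baseline-activity analysis Mears describes, mining dispatch event logs to find road segments where crashes cluster, can be sketched simply. The log schema and event codes here are hypothetical, not those of any actual dispatch system:

```python
from collections import Counter

def crash_hotspots(event_log, top_n=3):
    """Count motor-vehicle-crash dispatches per road segment and return
    the busiest segments. event_log: iterable of (segment_id, event_type)
    tuples (hypothetical schema)."""
    counts = Counter(seg for seg, etype in event_log if etype == "MVC")
    return counts.most_common(top_n)

log = [
    ("US-20 MP 45", "MVC"), ("US-20 MP 45", "MVC"),
    ("SR-55 MP 12", "MVC"), ("US-20 MP 45", "medical"),
]
print(crash_hotspots(log, top_n=1))  # → [('US-20 MP 45', 2)]
```

Segments that surface repeatedly in routine logs are exactly the places to ask, per Mears, how an ordinary crash could grow out of proportion if the right conditions came together.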

Developing Rural-Centric Metrics

Sally Phillips, former director of the Public Health Emergency Preparedness Program at the Agency for Healthcare Research and Quality (AHRQ),2 reviewed some of the unique aspects of incidents in rural states and communities that need to be factored into the development of metrics: weather; distance; frequency of incidents; communications gaps; and capabilities and capacities such as transportation, personnel, and facilities. She emphasized that these aspects of MCI response, and their impact on patient outcomes, differ so significantly between rural and urban areas that the most effective rural MCI planning must be based on metrics measured in rural settings.

When developing metrics, the first step is to determine the requirements and components of what is to be measured. With regard to mass casualty care, for example, a key component is EMS providers and their relative competency, capabilities, and headcount. Measurable competencies and capabilities could include education, training, and exercise frequency; field skill expansion; medical supervision on site or through telemedicine; after-action debriefing and quality improvement; and safety and security issues for providers (e.g., provisions to ensure physical safety, mental health).

Other components of mass casualty care that could be measured include access to trauma care, whether onsite, after transport, or via telemedicine; triage and treatment protocols; alternative treatment facilities for triage and stabilizing those awaiting transport; and capability and capacity for treating children and special needs populations.

Planning and concept of operations (CONOPS) revisions and improvements is another area that could be measured when considering risk. Measurable components could include the adequacy of transportation assets (e.g., quantity, status, safety); effective use of strike teams and citizen volunteers; ICS knowledge and implementation; and communication knowledge and skills.

Community resiliency is also an important component. The victims on site, who may have to wait 45 minutes until the first ambulance arrives, are really the first responders and often render some of the initial care. What are the capabilities and capacities of victims? Can the public be better prepared through education?

In addition to laying out some of the potential metrics for MCI capabilities and response, Phillips offered some low-cost rural solutions to enhance survivability. The first was ensuring the communications capabilities and driver qualifications for transport vehicles. Storing blankets under each bus seat, along with additional survival supplies on board (e.g., food, water, flashlights), could also prove very useful in an emergency. Similarly, the location of basic medical supplies should be made known to all passengers. On the provider side, efforts similar to those discussed in Chapters 3 through 5 were suggested: skills workshops, just-in-time training, trauma sabbaticals, and staff exchanges to ensure that medical staff are prepared to deal with emergencies.

Phillips directed participants to several databases that, while admittedly focused on the outcomes of EMS interventions, could nonetheless prove useful in developing metrics:

  • The Healthcare Cost and Utilization Project (HCUP) database (http://hcupnet.ahrq.gov)
  • Nationwide Inpatient Sample (NIS)
  • Nationwide Emergency Department Sample (NEDS)
  • State Inpatient Databases (SID)
  • State Emergency Department Databases (SEDD)
  • State Ambulatory Surgery Databases (SASD)3

Several AHRQ publications were also mentioned by participants as useful to the challenge of measurement development, including Recommendations for a National Mass Patient and Evacuee Movement, Regulating, and Tracking System (2009a); Mass Evacuation Transportation Model (2008); Hospital Available Beds for Emergencies and Disasters: A Sustainable Bed Availability Reporting System (2009b); and the Cantrill et al. publication Disaster Alternate Care Facilities: Report and Interactive Tools (2009).

TRAUMA SYSTEM DATA AND METRICS

Charles Mains, trauma medical director for Centura Health Trauma System and medical director at Saint Anthony Central Hospital in Denver, Colorado, began with a brief review of the consequences of shock, reminding participants that multiorgan failure and death may occur several weeks after the initial traumatic insult. Optimal patient outcomes depend on an integrated system of care from prehospital to rehabilitation, he said.

In an effort to develop a fully integrated statewide trauma system, Centura operates 19 facilities, 14 of which are designated trauma centers; all are not-for-profit. Combined, they log 300,000 emergency room (ER) visits and 8,000 trauma admissions per year. Centura also provides medical direction to 130 prehospital agencies and averages 4,000 medical flight missions per year, which are dispatched through a centralized flight operations center. When flights are grounded, there are four critical care ground units that can travel to the scene. There are four different modes of communication between the trauma centers and the affiliated facilities. Centura also maintains a centralized trauma registry that currently holds data from about 32,000 patients over the past 4 years. Mains said that this makes the Centura system ideally suited to study the metrics of trauma care across a broad region.

Mains explained that the fully integrated system of care incorporates quality processes, best practices, and national benchmarks, as well as an extensive outreach and education program. The system has destination guidelines, patient tracking through a unified medical record system, and coordination with the state trauma system. There is radiology interconnectivity via the Internet so that in-house trauma radiologists can read films in any of the rural facilities and decide which hospital in the system is most appropriate for patients' triage. Using the one-call system, patients are directed to the facility with the resources that best meet their medical needs (which is not always the top-ranked care facility).

For trauma system metrics, the Centura system is benchmarking against national trauma data. They assess individual facility and system risk-adjusted mortality versus injury severity score (ISS) versus probability of survival. They also study preventable death, inappropriate double transport, and transport time to definitive care. Flattening of the second and third peaks in the trimodal death curve (i.e., deaths occurring hours to weeks after the initial trauma) is a sign of the maturity of a trauma system, Mains said. If both the EMS and the initial hospital are effective at resuscitation, there will be fewer respiratory distress and multiorgan failure deaths in the intensive care units (ICUs). Great field capabilities are wasted if the hospital is not prepared to perform critical care at the level needed. Centura's quality initiatives focus on efforts that can make a significant impact on the care of trauma patients, such as resuscitation guidelines, severe head injury guidelines, and geriatric protocols.
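Risk-adjusted benchmarking of the kind Mains describes commonly rests on a logistic probability-of-survival model such as TRISS, which combines the Revised Trauma Score (RTS), ISS, and an age indicator. The sketch below uses the widely cited 1987 blunt-trauma coefficients purely for illustration; a real registry comparison would use current, registry-specific coefficients:

```python
import math

def triss_survival_probability(rts, iss, age_55_or_over):
    """Sketch of the TRISS probability-of-survival model (blunt trauma).
    Ps = 1 / (1 + e^-b), where b is a linear combination of RTS, ISS,
    and an age term. Coefficients are the commonly cited 1987 values."""
    b = (-0.4499
         + 0.8085 * rts
         - 0.0835 * iss
         - 1.7430 * (1 if age_55_or_over else 0))
    return 1.0 / (1.0 + math.exp(-b))

# Illustrative patient: normal physiology (RTS 7.84), moderate-severe injury
ps = triss_survival_probability(rts=7.84, iss=25, age_55_or_over=False)
```

Comparing observed mortality against the model's expected survival is what lets a system distinguish preventable deaths from those predicted by injury severity alone.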

Coordinated care, adequate transportation, and planning of facility scope of practice all contribute to improving the outcome of trauma patients in rural areas, Mains concluded. A list of discussed metrics can be found in Box 6-2.


BOX 6-2

Suggested Metrics

  • Rural and frontier-specific patient care and outcomes data (most current data are based on urban and suburban transport times and facility capabilities that do not necessarily translate to a rural setting)
  • Frequency of incidents

SYSTEMATIC APPROACHES TO METRICS DEVELOPMENT

The CDC and a Civilian Model

Craig Thomas, chief of the Outcome Monitoring and Evaluation Branch in the Division of State and Local Readiness at the CDC, provided an overview of metrics development at the CDC. The Division of State and Local Readiness administers the Public Health Emergency Preparedness (PHEP) Cooperative Agreement, which since 2001 has awarded over $8 billion to 62 state, territorial, and local grantees. At approximately $1 billion per year, this is one of the largest federal investments at the CDC, Thomas noted, and the CDC must develop metrics for assessing the degree to which the program is achieving its goals.

In a mass casualty response, the capabilities that the CDC sees as critical include incident management, crisis and emergency risk communication, countermeasures and mitigation (e.g., mass care, fatality management, responder safety and health), and surge management (e.g., medical surge, medical supply management, volunteer management).

Thomas highlighted several challenges to developing meaningful measures, especially for rural settings, starting with the fact that the integration of public health into EMS is relatively recent. In general, public health focuses more on continuous events (e.g., infectious disease outbreaks) than on discrete or acute emergency events (e.g., building collapse). In addition, measurement is hampered by the fact that roles and responsibilities are not always defined, especially for cross-jurisdictional incidents. And while not every service meets the necessary capabilities in the same way (nor is the CDC prescribing a specific method for conducting a particular capability), some performance parameters need to be defined. An understaffed workforce is a pervasive issue in public health, more so recently with the economic downturn, and there is variation in core competencies for public health workers. Scarce resources have resulted in insufficient systems, equipment, and supplies, and maintaining and updating existing systems and equipment can be a challenge. Taken together, Thomas posited, these factors add up to a limited ability to operate in emergencies.

Steps in Developing Public Health Emergency Preparedness (PHEP) Measures

For its systematic approach to developing PHEP measures, the CDC first defines and describes the program, then applies evaluation tools and methods (e.g., process mapping, logic models) to generate activities that could be measured. As there is no solid evidence base, measures must be based on expert knowledge, experience, and published literature. The next step is to develop data reporting and analysis plans, including how data will be collected, submitted, managed, analyzed, and reported. Thomas said that the CDC builds evaluation capacity into the grant awards as it must be able to report back to Congress to justify continued funding of the program. As a result of the Pandemic and All-Hazards Preparedness Act, there are legislative mandates that require benchmarks be met. Failure to meet any benchmarks will result in funding cuts, a challenge of primary concern to awardees, Thomas said.

Defining Measure Parameters

In selecting points to measure, the CDC must consider what is core to public health versus what is under the control of EMS or healthcare delivery. There is also need for measures between and among systems, to gain a better understanding of where they fit and how they work together. Measures must be scalable. The system is designed to collect data on routine events that, Thomas explained, serve as proxies for how the public health system might function in public health emergency response. Finally, potential bottlenecks that affect timely delivery of services are identified.

Many of the measures are time based, assuming that time is a proxy for quality of response. One could measure, for example, time to notify preidentified staff to fill public health agency incident management roles, or time to complete a draft of an after-action report/improvement plan. Other parameters, such as quality of the response and whether the right decisions were made, are more difficult to measure. There are not much data to guide such measures, but the CDC is addressing this as the program moves forward, Thomas said.
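Time-based measures of the kind Thomas describes reduce to computing elapsed time between logged events. A minimal sketch, with hypothetical timestamps standing in for real incident logs:

```python
from datetime import datetime

def minutes_elapsed(start_iso, end_iso):
    """Elapsed minutes between two ISO-8601 timestamps, e.g. between an
    incident declaration and completion of staff notification
    (an illustrative time-based measure, not an official PHEP definition)."""
    start = datetime.fromisoformat(start_iso)
    end = datetime.fromisoformat(end_iso)
    return (end - start).total_seconds() / 60.0

# Hypothetical log: incident declared 08:00, notification complete 08:42
t = minutes_elapsed("2010-06-01T08:00:00", "2010-06-01T08:42:00")
print(t)  # → 42.0
```

Such measures are easy to compute precisely because timestamps are routinely captured; as Thomas noted, quality-of-response measures lack comparably clean data.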

The Department of Homeland Security and the Joint Combat Casualty Care System

The core science and technology mission of the Department of Homeland Security (DHS) is to strengthen America's security and resiliency by providing innovative science and technology solutions for the Homeland Security enterprise, explained James Grove, regional director of the Interagency and First Responder Programs Division in the DHS Office of Science and Technology. One of the approaches to achieving this mission is a First Responder Capstone Integrated Product Team (IPT) established in 2009. After identifying needs-based input from first responders, the program makes investments in technologies and solutions that could potentially fill the gaps identified.

Grove highlighted several methods of evaluating concepts and metrics. Usually, in an emergency management community, he explained, the process of fixing problems starts with developing concepts. Then standard operating procedures are written, and a tabletop exercise might be conducted before moving the concept into a field environment to see if it works. However, by the time a concept reaches the field, so much has already been invested that most of the energy is spent making sure it works and training to ensure it succeeds; there is no room for experimentation. Grove suggested that there is an opportunity to leverage the United States Joint Forces Command's Joint Concept Technology Demonstrations (JCTD). This approach could potentially be used to address development and metrics for rural EMS response.

One JCTD evaluated the Joint Combat Casualty Care System, and Grove pointed out several parallels between the desired capabilities of the combat care system and rural EMS: efficient management of low-density, high-demand field medical personnel and evacuation assets; application of medical care to the most critical casualties while monitoring and remotely caring for others; and facilitated critical medical care to forces in denied or remote areas unreachable by evacuation assets in the short term. Metrics are also being developed for the war fighter paramedic. The JCTD approach to development and metrics is worth looking at, Grove suggested, and some of the technologies that have come out of it may be useful for EMS in a rural environment, such as a handheld Motorola device that responds to voice commands.

A participant suggested that not only military models but also the Data-Driven Approaches to Crime and Traffic Safety program, funded through the National Highway Traffic Safety Administration (NHTSA) and the Department of Justice, might prove an apt comparison. The program studies crime and traffic crash data to determine the most effective deployment of law enforcement resources.

THEMES IDENTIFIED BY WORKSHOP PARTICIPANTS

A primary challenge for assessment of preparedness capabilities and risk is the lack of an identifiable evidence base upon which to develop measures and establish metrics. Where data do exist, they are based on urban and suburban conditions that do not necessarily translate to rural and frontier settings. Participants discussed a variety of existing tools and research projects that could be leveraged for assessment of transportation-related road risks, for example, roadway risk maps; models for predicting the impact of infrastructure changes; a method of expressing and comparing risk based on resources; and approaches to rapidly assess new concepts and technologies. Examples of data sources that might be useful in developing metrics were suggested. A variety of questions will need to be addressed, such as whether there are physical resources that could be measured as proxies for the response capacity of an emergency care system, and how exercises, planning, integration, collaboration with nontraditional partners, and other activities should factor into assessments.

Footnotes

1

State Medical Asset Resource Tracking Tool, described by Alson in Chapter 6.

2

Sally Phillips currently serves as the deputy director of the Health Threats Resilience Division in the office of Health Affairs, DHS.

3

HCUP: http://hcupnet.ahrq.gov. NIS: http://www.hcup-us.ahrq.gov/db/nation/nis/nisdbdocumentation.jsp. NEDS: http://www.hcup-us.ahrq.gov/nedsoverview.jsp. SID: http://www.hcup-us.ahrq.gov/sidoverview.jsp. SEDD: http://www.hcup-us.ahrq.gov/seddoverview.jsp. SASD: http://www.hcup-us.ahrq.gov/sasdoverview.jsp.