NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Gliklich RE, Dreyer NA, Leavy MB, editors. Registries for Evaluating Patient Outcomes: A User's Guide [Internet]. 3rd edition. Rockville (MD): Agency for Healthcare Research and Quality (US); 2014 Apr.


Chapter 22. Quality Improvement Registries

1. Introduction

Quality assessment/improvement registries (QI registries) seek to use systematic data collection and other tools to improve quality of care. While much of the information contained in the other chapters of this document applies to QI registries, these types of registries face unique challenges in the planning, design, and operation phases. The purpose of this chapter is to describe the unique considerations related to QI registries. Case Examples 53, 54, 55, 56, and 57 offer some descriptions of quality improvement registries.

While QI registries may have many purposes, at least one purpose is quality improvement. These registries generally fall into two categories: registries of patients exposed to particular health services (e.g., procedure registries, hospitalization registries) over a relatively short period of time (i.e., an event), and registries of patients with a disease or condition tracked over time through multiple provider encounters and/or multiple health services. An important commonality is that one exposure of interest is to health care providers or health care systems. These registries exist at the local, regional, national, and international levels.

QI registries are further distinguished from other types of registries by the tools that are used in conjunction with the systematic collection of data to improve quality at the population and individual patient levels. QI registries leverage the data about the individual patient or population to improve care in a variety of ways. Examples of tools that facilitate data use for care improvement include patient lists, decision support tools (typically based on clinical practice guidelines), automated notifications, communication tools (e.g., patient educational materials), and patient- and population-level reporting systems. For example, a diabetes registry managed by a single institution might provide a listing of all patients in a provider's practice who have diabetes and who are due for a clinical examination or other assessments. Decision support tools exist that assess the structured patient data provided to the registry and display recommendations for care based on evidence-based guidelines. This is a well-reported feature of the American Heart Association's Get With The Guidelines® registries.1 Certain registry tools will automatically notify a provider if the patient is due for a test, exam, or other milestone. Some tools will even send notifications directly to patients indicating that they are due for a treatment such as a flu vaccination. Reports are a key part of quality improvement. These range from reports on individual patients, such as a longitudinal report tracking a key patient outcome, to reports on the population under care by a provider or group of providers, either alone or in comparison to others (at the local, regional, or national level). 
Examples of the latter reports include those that measure processes of care (e.g., whether specific care was delivered to appropriate patients at the appropriate time) and those that measure outcomes of care (e.g., average Oswestry score results for patients undergoing particular spine procedures, compared with similar providers).
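The patient-list and reminder tools described above can be sketched in a few lines. This is an illustrative sketch only: the record fields, the condition, and the 180-day interval are hypothetical examples, not taken from any actual registry system.

```python
from datetime import date

# Hypothetical minimal patient records; field names are illustrative only.
patients = [
    {"id": "p1", "condition": "diabetes", "last_hba1c": date(2013, 1, 10)},
    {"id": "p2", "condition": "diabetes", "last_hba1c": date(2013, 11, 2)},
    {"id": "p3", "condition": "asthma",   "last_hba1c": None},
]

def due_for_assessment(patients, condition, field, interval_days, today):
    """Return IDs of patients with the condition whose assessment is overdue or missing."""
    due = []
    for p in patients:
        if p["condition"] != condition:
            continue
        last = p[field]
        if last is None or (today - last).days > interval_days:
            due.append(p["id"])
    return due

# Patients with diabetes who have gone more than ~180 days without an HbA1c test.
print(due_for_assessment(patients, "diabetes", "last_hba1c", 180, date(2013, 12, 1)))
```

The same filter, run over a provider's whole panel, yields the kind of "patients due for assessment" list a registry or EHR can surface to clinicians.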

QI registries can further support improved quality of care by giving providers and their patients more detailed information based on the aggregate experience of other patients in the registry. This can include both general information on the natural history of the disease process from the accumulated experience of other patients in the registry and more individual-patient-level information on specific risk calculators that might help guide treatment decisions. Registries that produce patient-specific predictors of short- and long-term outcomes (which can inform patients about themselves) as well as provider-specific outcomes benchmarked against national data (which can inform patients about the experience and outcomes of their providers) can be the basis of both transparent and shared decisionmaking between patients and their providers.

In addition to these examples, there are tools that are neither electronic nor necessarily provided through the registry systems. Non-electronic examples range from internal rounds to review registry results and make action plans, to quality-focused national or regional meetings that review treatment gaps identified from the registry data and teach solutions, to printed posters and cards or other reminders that display the key evidence-based recommendations that are measured in the registry. Further, even electronic tools need not be delivered through the registry systems themselves. While in many cases the registries do provide the functionality described above, the same purpose is served when an electronic health record (EHR) provides access to decision support relevant to the goals of the patient registry. In other words, what characterizes QI registries is not the embedding of the tools in the registry but the use of the tools by the providers who participate in the registry to improve the care they provide, and the use of the registry to measure that improvement.

2. Planning

As described in Chapter 2, developing a registry starts with thoughtful planning and goal setting. Planning for a QI registry follows most of the steps outlined in Chapter 2, with some noteworthy differences and additions. A first step in planning is identifying key stakeholders. Similar to other types of registries, regional and national QI registries benefit from broad stakeholder representation, which is necessary but not sufficient for success. In QI registries, the provider needs to be engaged and active, as the program is not simply supporting a surveillance function or providing a descriptive or analytic function, but is often focused on patient and/or provider behavior change. In many QI registries, these active providers are termed “champions” and are vital for success, particularly early in development.2 At the local level, the champions are typically the ones asking for the registry and almost by definition are engaged. Selecting stakeholders locally is generally focused on involving individuals with direct impact on care or those who can support the registry with information, systems, or labor. Yet, the common theme for both local and national QI registries is that the local champions must be successful in actively engaging their colleagues in order for the program to go beyond an “early adopter” stage and be sustainable within any local organization. Once a registry matures, other incentives may drive participation (e.g., recognition, competition, financial rewards, regulatory requirements), but the role of the champion in the early phases cannot be overstated.

A second major difference between planning a QI registry and planning other types of registries is the funding model. QI registries use a wide variety of funding models. For example, a regional or national registry may be funded entirely by fees paid by participating providers or hospitals. Alternately, the registry may supplement participation fees with funding from professional associations, specialty societies, industry, foundations, or government agencies. Some QI registries may not charge a participation fee and may receive all of their funding from other organizations. Local QI registries that operate within a single institution may receive all of their funding from the institution or from research grants. The funding model used by a QI registry largely depends on the goals of the registry and the stakeholders in the specific disease area.

Next, in order for a QI registry to meet its goal of improving care, it must provide actionable information that providers and/or participants can use to modify their behaviors, processes, or systems of care. Actionable information can be provided in the form of patient outcome measures (e.g., mortality, functional outcomes post discharge) or process of care or quality measures (e.g., compliance with clinical guidelines). While the ultimate goal of a QI registry is to improve patient outcomes by improving quality of care, it is not always possible for a QI registry to focus on patient outcome measures. In some cases, outcome measures may not exist in the disease area of interest, or the measures may require data collection over a longer period than is feasible in the registry. As a result, QI registries have often focused on process of care measures or quality measures. While this focus has been criticized as less important than measuring patient outcomes, it should be noted that quality measures are generally developed from evidence-based guidelines, emphasize interventions that have been shown to improve long-term outcomes, are increasingly recognized through standardized endorsement processes (e.g., the National Quality Forum), and are inherently actionable. Patient outcome measures, on the other hand, do not yet have consensus definitions across many conditions, may be influenced by systematic loss to followup, and may be expensive and difficult to collect. Furthermore, long-term outcomes are generally not readily available for rapid-cycle initiatives and may be too distant from the time when the care is delivered to support effective behavior change. Nonetheless, there has been an increasing focus in recent years on including outcome measures instead of or in addition to process of care measures in QI registries.
This shift is driven in part by research documenting the lack of correlation between process measures and patient outcomes3-5 and by arguments that health care value is best defined by patient outcomes, not processes of care.6

Selecting measures for QI registries typically requires balancing the intent to be relevant and actionable with the desire to meet other needs for providers, for example by reporting quality measures to different parties (e.g., accreditation organizations, payers). Frequently, this is further complicated by the lack of harmonization between those measure requirements even in the same patient populations.7 Even when there is agreement on the type of intervention to be measured and how the intervention is defined, there still may be variability in how the cases that populate the denominator are selected (e.g., by clinical diagnosis, by ICD-9 classification, by CPT codes). In the planning stages of a QI registry, it is useful to consider key parameters for selecting measures. The National Quality Forum offers the following four criteria for measure endorsement, which also apply to measure selection:

  1. Important to measure and report, to keep our focus on priority areas, where the evidence is highest that measurement can have a positive impact on health care quality.
  2. Scientifically acceptable, so that the measure when implemented will produce consistent (reliable) and credible (valid) results about the quality of care.
  3. Useable and relevant, to ensure that intended users—consumers, purchasers, providers, and policymakers—can understand the results of the measure and are likely to find them useful for quality improvement and decisionmaking.
  4. Feasible to collect with data that can be readily available for measurement and retrievable without undue burden.8

The National Priorities Partnership9 and the Measure Applications Partnership,10 both of which grew out of the National Quality Forum and provide support to the U.S. Department of Health and Human Services on issues related to quality initiatives and performance measurement, also offer useful criteria to consider when selecting measures.

One approach to consider in selecting measures is to perform a cross-sectional assessment using the proposed panel of measures to identify the largest gaps between what is recommended in evidence-based guidelines or expected from the literature and what is actually done (“treatment gaps”). The early phase of the registry can then focus on those measures with the most significant gaps and for which there is a clear agreement among practicing physicians that the measure reflects appropriate care. The planning and development process should move from selecting measures to determining which data elements are needed to produce those measures (see Section 4 below). Measures should ideally be introduced with idealized populations of patients in the denominator for whom there is no debate about the appropriateness of the intervention. This may help reduce barriers to implementation that are due to physician resistance based on concerns about appropriateness for individual patients.
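The cross-sectional "treatment gap" assessment described above amounts to comparing, for each candidate measure, how many eligible patients there were against how many actually received the recommended care, then ranking the measures by the size of the gap. A minimal sketch, with entirely hypothetical measure names and counts:

```python
# Hypothetical cross-sectional audit results: for each candidate measure,
# the number of eligible patients and the number who received the care.
audit = {
    "aspirin_at_discharge":     {"eligible": 420, "received": 399},
    "smoking_cessation_advice": {"eligible": 180, "received": 95},
    "ldl_measured":             {"eligible": 300, "received": 210},
}

def treatment_gaps(audit):
    """Rank measures by the gap between recommended and delivered care."""
    gaps = []
    for measure, counts in audit.items():
        adherence = counts["received"] / counts["eligible"]
        gaps.append((measure, round(1 - adherence, 3)))
    # Largest gap first: these are the candidates for early registry focus.
    return sorted(gaps, key=lambda item: item[1], reverse=True)

for measure, gap in treatment_gaps(audit):
    print(f"{measure}: gap {gap:.1%}")
```

In this invented example, smoking cessation advice shows the widest gap and would be the natural early focus, provided practicing physicians agree the measure reflects appropriate care.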

Once the measures and related data elements have been selected, pilot testing may be useful to assess the feasibility and burden of participation. Pilot testing may identify issues with the availability of some data elements, inconsistency in definition of data elements across sites, or barriers to participation, such as burden of collecting the data or disagreement about how exclusion criteria are constructed when put into practice. In order for the registry to be successful, participants must find the information provided by the registry useful for measuring and then modifying their behaviors, processes, or systems of care. Pilot testing may enable the registry to improve the content or delivery of reports or other tools prior to the large-scale launch of the program. If pilot testing is included in the plans for a QI registry, the timeline should allow for subsequent revisions to the registry based on the results of the pilot testing.

Change management is also an important consideration in planning a QI registry. QI registries need to be nimble in order to adapt to two continual sources of change. First, new evidence comes forward that changes the way care should be managed, and it is incumbent on the registry owner to make changes so that the registry is both current and relevant. In many registries, such as the American Heart Association's Get With The Guidelines Stroke program and the American Society of Clinical Oncology's QOPI registry, this process occurs more than once a year. Second, providers participating in registries manage what they measure, and over time, measures can be rotated in or out of the panel so that attention is focused where it is most critical to overcome a continuing treatment gap or performance deficiency. This requires that the registry have a standing governance body to make changes over time, a system of data collection and reporting flexible enough to rapidly incorporate changes with minimal or no disruption to participants, and sufficient resources to communicate with and train participants on the changes. The governance structure should include experts in the area of measurement science as well as in the scientific content. The registry system also needs to continuously respond to additional (not necessarily harmonized) demands for transmitting quality measures to other parties (e.g., Physician Quality Reporting System, Meaningful Use reporting, Bridges to Excellence, State department of public health requirements). From a planning standpoint, QI registries should expect ongoing changes to the registry and plan for the resources required to support the changes. While this complicates the use of registry data for research purposes, it is vital that the registry always be perceived first as a tool for improving outcomes.
Therefore, whenever changes are made to definitions, elements, or measures, these need to be carefully tracked so that analyses or external reporting of adherence may take these into account if they span time periods in which changes occurred.

3. Legal and Institutional Review Board Issues

As discussed in Chapters 7, 8, and 9, registries must navigate a complex set of legal and regulatory requirements that depend on the status of the developer, the purpose of the registry, whether or not identifiable information is collected, the geographic locations in which the data are collected, and the geographic locations in which the data are stored (State laws, international laws, etc.). QI registries face unique challenges in that many institutions' legal departments and institutional review boards (IRBs) may have less familiarity with registries for quality improvement, and even for experts, the distinction between a quality improvement activity and research may be unclear.11-14 Some research has shown that IRBs differ widely in how they differentiate research and quality improvement activities.15 What is clear is that IRB review and, in particular, informed consent requirements may not only add burden to the registry but may create biased enrollment that may in turn affect the validity of the measures being reported.16 Potential limitations of the IRB process have been identified in other reports, including potential problems for comparative effectiveness research. These issues will not be reviewed here.

For QI registries, which generally fit under the HIPAA (Health Insurance Portability and Accountability Act) definition of health care operations, the issues that lead to complexity include whether or not the registry includes research as a primary purpose or any purpose, whether the institutions or practices fall under the Common Rule, and whether informed consent is needed. The Common Rule is discussed in Chapter 7, and informed consent and quality improvement activities are discussed in Chapter 8. To assist in determining whether a quality improvement activity qualifies as research, the Office for Human Research Protections (OHRP) provides information in the form of a “Frequently Asked Questions” Web page.17 OHRP notes that most quality improvement activities are not considered research and therefore are not subject to the regulations for the protection of human subjects. However, some quality improvement activities are considered research, and the regulations do apply in those cases. To help determine if a quality improvement activity constitutes research, OHRP suggests addressing the following four questions, in order:

  1. Does the activity involve research? (45 CFR 46.102(d))
  2. Does the research activity involve human subjects? (45 CFR 46.102(f))
  3. Does the human subjects research qualify for an exemption? (45 CFR 46.101(b))

  4. Is the nonexempt human subjects research conducted or supported by HHS or otherwise covered by an applicable FWA [Federalwide Assurance] approved by OHRP?18

In addressing these questions, it is important to note the definition of “research” under 45 CFR 46.102(d). “Research” is defined as “…a systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge….” OHRP may not view quality improvement activities as “research” under this definition, and provides some examples of the types of activities that are not considered research.19 It is also important to note the definition of “human subjects” under 45 CFR 46.102(f). “Human subject” is defined as “a living individual about whom an investigator (whether professional or student) conducting research obtains (1) Data through intervention or interaction with the individual, or (2) Identifiable private information.” Again, OHRP may not view quality improvement activities as human subjects research if the data are not considered identifiable private information or were not collected through interaction or intervention with the individual patient (e.g., if the data were abstracted from a medical record).20

These questions provide some information helpful in determining whether a quality improvement registry is subject to the protection of human subjects regulations, but some researchers and IRBs have still reported difficulty in this area.11, 12 Remaining questions include, for example, if the registry includes multiple sites, is separate IRB approval from every institution required? If the registry is considered human subjects research, in what circumstances is informed consent required?

There have been several recent calls to refine and streamline the IRB process for QI registries,11 and some of this work is advancing. Recently, OHRP has proposed revisions to the Common Rule that would address some of these issues; the proposed changes were posted for a public comment period which closed in October 2011.21 Without some changes and greater clarity around existing regulations as they relate to QI registries, it will be difficult for some registries to be successful.

4. Design

Designing a QI registry presents several challenges, particularly when multiple stakeholders are involved. Staying focused on the registry's key purposes, limiting respondent burden, and being able to make use of all of the data collected are practical considerations in developing programs. First, the type of QI registry needs to be determined. Is the goal to improve the quality of care for patients with a disease, or for patients presenting for a singular event in the course of their disease? For example, a QI registry in cardiovascular disease will be different (i.e., with respect to sampling, endpoints, and measures) if it focuses on patients with coronary artery disease versus patients with a hospitalization for acute coronary syndrome. In the first example, the registry may need to track patients over time and across different providers; reminder tools may be needed to prompt followup visits or laboratory tests. In the second example, the registry may need to collect detailed data at a single point in time on a large volume of patients.

Second, QI registries that collect data within a single institution differ from those that collect data at multiple institutions regionally or nationally. Single-institution registries, for example, may be designed to fit within specific workflows at the institution or to integrate with one EHR system. They may reflect the specific needs of that institution in terms of addressing treatment gaps, and they may be able to obtain participant buy-in for reporting plans (e.g., for unblinded reporting). Regional or national level registries, on the other hand, must be developed to fit seamlessly into multiple different workflows. These registries must address common treatment gaps that will be relevant to many institutions, and they must develop approaches to reporting that are acceptable to all participants.

The appropriate level of analysis and reporting is an important consideration for designers of QI registries. Reports may provide data at the individual patient, provider, or institution level, or they may provide aggregate data on groups of patients, providers, and institutions. The aggregate groups may be based on similar characteristics (e.g., disease state, hospitals of a similar size), geography, or other factors. The registry may also provide reports to the registry participants, to patients, or to the public. Reports may be unblinded (e.g., the provider is identifiable) or blinded, and they may be provided through the registry or through other means. In designing the registry, consideration should be given to what types of reports will be most relevant for achieving the registry's goals, what types of reports will be acceptable to participants, and how those reports should be presented and delivered. Reporting considerations are discussed further in Section 9.

As described above, there are many challenges in selecting existing measures or designing and testing new measures. Once measures have been selected, the “core data” can be determined. Since QI registries are part of health care operations, it is critical that they do not overly interfere with the efficiency of those operations, and therefore the data collection must be limited to those data elements that are essential for achieving the registry's purpose. One approach to establishing the core data set is to first identify the outcomes or measures of interest and then work backwards to the minimal data set, adding those elements required for risk adjustment or relevant subgroup analyses. For example, the inclusion and exclusion criteria for a measure, as well as information used to group patients into numerator and denominator groups, can be translated into data elements for the registry. Case Example 53 describes this process for the Get With The Guidelines Stroke program. Depending on the goals of the registry, the core data set may also need to align with data collection requirements for other quality reporting programs.
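Working backwards from a measure to its data elements can be made concrete: each inclusion, exclusion, and numerator rule maps to one or more data elements the registry must capture. The sketch below is purely illustrative; the diagnosis, contraindication, and intervention fields are hypothetical and not drawn from any actual measure specification.

```python
# Illustrative records: each field exists only because some measure rule needs it.
records = [
    {"id": "a", "dx": "ischemic_stroke",    "contraindication": False, "rx_given": True},
    {"id": "b", "dx": "ischemic_stroke",    "contraindication": True,  "rx_given": False},
    {"id": "c", "dx": "ischemic_stroke",    "contraindication": False, "rx_given": False},
    {"id": "d", "dx": "hemorrhagic_stroke", "contraindication": False, "rx_given": False},
]

def in_denominator(r):
    # Inclusion criterion: qualifying diagnosis.
    # Exclusion criterion: documented contraindication.
    return r["dx"] == "ischemic_stroke" and not r["contraindication"]

def in_numerator(r):
    # Numerator: eligible patients who actually received the measured intervention.
    return in_denominator(r) and r["rx_given"]

denominator = [r for r in records if in_denominator(r)]
numerator = [r for r in records if in_numerator(r)]
print(f"adherence: {len(numerator)}/{len(denominator)}")
```

Reading the two rule functions back out gives the minimal core data set for this one measure: diagnosis, contraindication status, and whether the intervention was delivered. Risk-adjustment or subgroup variables would then be layered on top.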

Many QI registries have gone further by establishing a core data set and an enhanced data set for participating groups that are ready to extend the range of their measurements. This tiered model can be very effective in appealing to a broad range of practices or institutions. Examples include the Get With The Guidelines program, which allows hospitals to select performance measures or both performance and quality measures, and the American College of Surgeons' National Surgical Quality Improvement Program, which has a core data set and the ability to add targeted procedure modules.

QI registries also may need to develop sampling strategies during the design phase. The goal of sampling in QI registries is to provide representativeness (i.e., to ensure that the registry is reflective of the patients treated by the physician or practice) and precision (i.e., to enroll a sufficient sample size to provide reasonable intervals around the metrics generated from each practitioner/practice to be useful in before/after or benchmarking comparisons). Sampling frames need to balance simplicity with sustainability. For example, an all-comers model is easy to implement but can be difficult to sustain, particularly if the registry uses longitudinal followup. An orthopedic registry maintained by a major U.S. center, for instance, sought to enroll all patients presenting for hip and knee procedures. Since the center performed several thousand procedures each year, within a few years the number of followups being performed climbed into the tens of thousands. This was both expensive and likely unsustainable. On the other hand, a sampling frame can be difficult to administer and confusing to participants. While a sampling frame can be readily applied in a retrospective chart review, it is much more difficult to apply in a prospective registry. Some approaches to this issue have included selecting specific days or weeks in a month for patient enrollment. But, if these frames are known to the practitioners, they can be “gamed,” and auditing may be necessary to determine if there are sampling inconsistencies. Pilot testing can be useful for assessing the pace of patient enrollment and the feasibility of the sampling frame. Ongoing assessments may also be needed to ensure that the sampling frame is yielding a representative population.
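The precision side of the sampling tradeoff can be illustrated with a standard confidence interval for an adherence proportion: the same measured adherence rate carries a much wider interval when the sample is small. This is a generic statistical sketch (normal approximation), not a method prescribed by any particular registry.

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Approximate 95% confidence interval for an adherence proportion
    (normal approximation; illustrates precision vs. sample size)."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half_width), min(1.0, p + half_width)

# The same 80% adherence rate measured on different sample sizes:
# larger samples narrow the interval around a practice's metric.
for n in (25, 100, 400):
    lo, hi = proportion_ci(int(0.8 * n), n)
    print(f"n={n}: adherence 80%, 95% CI ({lo:.2f}, {hi:.2f})")
```

With only 25 sampled cases the interval spans roughly 30 percentage points, too wide to detect a modest before/after improvement or to benchmark one practice against another; this is the arithmetic behind choosing a sampling fraction.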

An additional consideration in implementing a sampling strategy is that, for QI registries in which concurrent case ascertainment and intervention are involved, only those patients who are sampled may benefit from real-time QI intervention and decision support. In these circumstances, patients who are not sampled are also less likely to receive the best care. This disparity may only increase as EHR-enabled decision support becomes increasingly sophisticated and commonplace.

5. Operational Considerations

As with most registries, the major cost for participants in a QI registry is data collection and entry rather than the cost of the data entry platform or participation fees. Because QI registries are designed to fit within existing health care operations, many of the data elements collected in these registries are already being collected for other purposes (e.g., claims, medical records, other quality reporting programs). QI registries are often managed by clinical staff who are less familiar with clinical research and who must fit registry data collection into their daily routines. Both of these factors make integration with existing health information technology systems or other data collection programs attractive options for some QI registries. Integration may take many forms. For example, data from billing systems may be extracted to assist with identifying patients or to pull in basic information about them. EHRs may contain a large amount of the data needed for the registry, and integration with the EHR system could substantially reduce the data collection burden on sites. However, integration with EHRs can be complex, particularly for registries at the regional or national level that need to extract data from multiple systems. A critical challenge is that the attribution of clinical diagnoses in the context of routine patient care is often not consistent with the strict coding criteria for registries, making integration with EHR systems more complex. Chapter 15 discusses integration of registries with EHR systems. Another alternative for some disease areas is to integrate data collection for the registry with data collection for other quality initiatives (e.g., Joint Commission, Centers for Medicare & Medicaid Services). Typically, these types of integration can only provide some of the necessary data; participants must collect and enter additional data to complete the case report forms (CRFs).

The burden of data collection is an important factor in participant recruitment and retention. Much of the recruitment and retention discussion in Chapter 10 applies to QI registries. However, one area in which QI registries differ from other types of registries is in the motivations for participation. Sites may participate in other registries because of interest in the research question or as part of mandated participation for State or Federal payment or regulatory requirements. When participation is for research purposes, they may hope to connect with other providers treating similar patients or contribute to knowledge in this area. In contrast to registries designed for other purposes, participants in QI registries expect to use the registry data and tools to effect change within their organizations. Participation in a QI registry and related improvement activities can require significant time and resources, and incentives for participation must be tailored to the needs of the participants. For example, recognition programs, support for QI activities, QI tools, and benchmarking reports may all be attractive incentives for participants. In addition, tiered programs, as noted above, can be an effective approach to encouraging participation from a wide variety of practice or institution types. Understanding the clinical background of the stakeholders (e.g., nurses, physicians, allied health practitioners, and quality improvement professionals) and their interest in the program is critical to designing appropriate incentives for participation.

6. Quality Improvement Tools

As described above, QI tools are a unique and central component of QI registries. Generally, QI tools are designed to meet one of two goals: care delivery and coordination or population measurement. Care delivery and coordination tools aim to improve care at the individual patient level, while population measurement tools track activity at the population level, with the goal of assessing overall quality improvement and identifying areas for future improvement activities. For example, a report may be used to track an institution's performance on key measures over time and to compare it with other similar institutions. These types of reports can be used to demonstrate both initial and sustained improvements. Table 22–1 summarizes some common types of QI tools in these two categories and describes their uses.

Table 22–1. Common quality improvement tools.


QI registries may incorporate various tools, depending on the needs of their participants and the goals of the registry. Table 22–2 below describes the types of functionalities that have been implemented in three different registries—two at the national level and one at the regional level.

Table 22–2. Quality improvement tools implemented in three registries.


7. Quality Assurance

In addition to developing data elements and QI tools, QI registries must pay careful attention to quality assurance issues. Quality assurance, which is covered in Chapter 11, Section 3, is important for any registry to ensure that appropriate patients are being enrolled and that the data being collected are accurate. Data quality issues in registries may result from inadequate training, incomplete case identification or sampling, misunderstanding or misapplication of inclusion/exclusion criteria, or misinterpretation of data elements. Quality assurance activities can help to identify these types of issues and improve the overall quality of the registry data. QI registries can use quality assurance activities to address these common issues, but they must also be alert to data quality issues that are unique to QI registries. Unlike other registries, many QI registries are linked to economic incentives, such as licensure or access to patients, incentive payments, or recognition or certification. These are strong motivators for participation in the registry, but they may also lead to issues with data quality. In particular, “cherry picking,” which refers to the nonrandom selection of patients so that those patients with the best outcomes are enrolled in the registry, is a concern for QI registries. In addition, whenever data are being abstracted from source documents by hand and then entered manually into electronic data entry systems, there is a risk of typographical errors and of errors in unit conversions (e.g., 12-hour to military time, milligrams to grams). Automated systems for error checking can reduce the risk of errors being entered into the registry when range checks and valid data formats are built into the data capture platform.
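The range and format checks described above can be sketched in a few lines. This is an illustrative example only; the field names and plausible ranges below are assumptions, not part of any actual registry specification.

```python
# Hypothetical range checks for a data capture platform. The field names
# and plausible value ranges are illustrative assumptions.
RANGE_CHECKS = {
    "systolic_bp": (50, 300),   # mmHg
    "weight_kg": (1, 500),
    "age_years": (0, 120),
}

def validate_record(record):
    """Return a list of error messages for out-of-range or malformed values."""
    errors = []
    for field, (low, high) in RANGE_CHECKS.items():
        value = record.get(field)
        if value is None:
            continue  # missing values would be flagged by a separate completeness check
        try:
            value = float(value)
        except (TypeError, ValueError):
            errors.append(f"{field}: not a number ({value!r})")
            continue
        if not (low <= value <= high):
            errors.append(f"{field}: {value} outside plausible range {low}-{high}")
    return errors
```

A platform built this way can reject or flag a record at the moment of entry, before a typographical error (such as a systolic blood pressure of 420) reaches the registry database.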

Auditing is one approach to quality assurance for QI registries. Auditing may involve onsite audits, in which a trained individual reviews registry data against source documents, or remote audits, in which the source documents are sent to a central location for review against the registry data. Because auditing all sites and all patients is cost-prohibitive, registries may audit a percentage of sites and/or a percentage of patients. QI registries should determine whether they will audit data, and, if so, how they will conduct the audits. A risk-based approach may be useful for developing an auditing plan. In a risk-based approach, the registry assesses the risk for intentional error in data entry or patient selection. Registries that may have an increased risk of intentional error include mandatory registries, registries with public reporting, or registries linked to economic incentives. Registries with an increased risk may decide to pursue more rigorous auditing programs than registries with a lower risk. For example, a voluntary registry with confidential reporting may elect to do a remote audit of a small percentage of sites and patients each year. A registry with public reporting linked to patient access, on the other hand, may audit a larger number of sites and patients each year, with a particular focus on key outcomes included in the publicly reported measures.
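The risk-based sampling idea can be made concrete with a small sketch. The risk tiers and sampling fractions below are hypothetical; a real audit plan would set them based on the registry's own risk assessment.

```python
import random

# Illustrative risk-based audit sampling: sites judged higher risk (e.g., those
# subject to public reporting or payment incentives) are sampled at a higher rate.
# The tiers and fractions are assumptions for illustration only.

def select_audit_sample(sites, fractions, seed=0):
    """sites: dict mapping site id -> risk tier ('high' or 'low').
    fractions: dict mapping tier -> fraction of that tier's sites to audit."""
    rng = random.Random(seed)  # fixed seed so the audit plan is reproducible
    selected = []
    for tier, frac in fractions.items():
        tier_sites = sorted(s for s, t in sites.items() if t == tier)
        n = max(1, round(frac * len(tier_sites))) if tier_sites else 0
        selected.extend(rng.sample(tier_sites, n))
    return sorted(selected)
```

With, say, 50 percent of high-risk sites and 10 percent of low-risk sites sampled, the audit budget concentrates on the sites where intentional error is most plausible while still providing some coverage everywhere.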

Questions to consider when developing a quality assurance plan involving auditing include—

  • What percentage of sites should be audited each year?
  • What percentage of data should be audited (all data elements for a sample of patients or only key data elements for performance measures)?
  • How should sites be selected for auditing (random, targeted, etc.)?
  • Should audits be conducted on site or remotely?
  • What constitutes passing an audit?

Depending on the purpose of the registry, quality assurance plans may also address issues with missing data, for example—

  • What percentage of missing data is expected?
  • Are data missing at random?
  • What lost-to-followup rate is anticipated?
  • Are certain subgroups of patients more likely to be lost to followup?

Lastly, quality assurance plans must consider how to address data quality issues. Audits and other quality assurance activities may identify problem areas in the registry data set. In some cases, such as when the problem is isolated to one or two sites, additional training may resolve the issue. In other cases, such as when the issue is occurring at multiple sites, data elements, documentation, or study procedures may need to be modified. In rare instances, quality assurance activities may identify significant performance issues at an individual site. The issues could be intentional (e.g., cherry picking) or unintentional (e.g., data entry errors). The registry should have a plan in place for addressing these types of issues.

8. Analytical Considerations

While registries are powerful tools for understanding and improving quality of care, several analytical issues need to be considered. In general, the observational design of registries requires careful consideration of potential sources of bias and confounding that exist due to the nonrandomization of treatments or other sources. These sources of bias and confounding can threaten the validity of findings. Fortunately, the problems associated with observational study designs are well known, and a number of analytical strategies are available for producing robust analyses. Despite the many tools to handle analytical problems, limitations due to observational design, structure of data, measured and unmeasured confounding, and missing data should be readily acknowledged. Below are brief descriptions of several problems to consider when analyzing QI registry data, along with indications of how investigators commonly address these problems.

Observational designs used in registries offer the ability to study large cohorts of patients, and allow for careful description of patterns of care or variations in practice compared with what is considered appropriate or best care. While not an explicit intention, registries are often used to evaluate the effect of a treatment or intervention. The lack of randomization in registries, which limits causal inferences, is an important consideration. For example, in a randomized trial, a treatment or intervention can be evaluated for efficacy because different treatment options have an equal chance of being assigned. Observational studies may also lack an even chance that any given patient actually receives a treatment. In a randomized trial, subjects who meet the inclusion criteria have an equal chance of receiving a given treatment; a registry, however, likely includes some patients with no chance of receiving a treatment. As a result, some inferences cannot be generalized across all patients in the registry.

An inherent but commonly ignored issue is the structure of health or registry data. Namely, physicians manage patients with routine processes, and physicians practice within hospitals or other settings that also share, directly or indirectly, common approaches. These clustered or “hierarchical” relationships within the data may influence results if ignored. For example, within a given hospital, a type of procedure may be preferred because its surgeons share similar training experiences. Common processes or patient selections are also more likely within one hospital than across hospitals. These observations form a cluster and cannot be assumed to be independent. Without accounting for the clustering of care, incorrect conclusions could be drawn. Models that deal with these types of clustered data, often referred to as hierarchical models, can address this problem. These models may also be described as multilevel, mixed, or random-effects models. The exact approach depends on the main goal of an analysis, but typically includes fixed effects, which have a limited number of possible values, and random effects, which represent a sample of elements drawn from a larger population of effects. Thus, a multilevel analysis allows incorporation of variables measured at different levels of the hierarchy, and accounts for the fact that outcomes of different patients under the care of a single physician or within the same hospital are correlated.
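One way to see how strongly observations cluster is the one-way intraclass correlation coefficient (ICC), which compares between-hospital and within-hospital variance. The sketch below is a minimal, assumption-laden illustration with made-up data, not a substitute for fitting a full hierarchical model.

```python
# Minimal one-way ANOVA intraclass correlation (ICC) for balanced clusters.
# An ICC near 1 means hospital membership explains most outcome variance;
# near 0, patients are effectively independent. Data here are invented.

def icc_oneway(groups):
    """groups: list of equal-sized lists of outcomes, one inner list per hospital."""
    k = len(groups)            # number of hospitals
    n = len(groups[0])         # patients per hospital
    grand = sum(sum(g) for g in groups) / (k * n)
    # between-hospital mean square
    msb = n * sum((sum(g) / n - grand) ** 2 for g in groups) / (k - 1)
    # within-hospital mean square
    msw = sum((x - sum(g) / n) ** 2 for g in groups for x in g) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)
```

A substantial ICC signals that a naive analysis treating all patients as independent would understate standard errors, which is exactly the situation multilevel models are designed to handle.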

Adequate sample size for research questions is also an important consideration. In general, registries allow large cohorts of patients to be enrolled but, depending on the question, sample sizes may be highly restricted (e.g., in the case of extremely rare exposures or outcomes). For example, a comparative effectiveness research question may address anticoagulation in patients with atrial fibrillation. As the analysis population is defined based on eligibility criteria, including whether patients are naïve to the therapy of interest, the number of patients with the exposure may become extremely small. Likewise, an outcome such as angioedema may be extremely rare, and if it is being evaluated for a new therapeutic, the sample may be too small to fully evaluate the association between the exposure and the outcome. Thus, careful attention to the likely exposure population after establishing eligibility criteria, as well as the likely number of events or outcomes of interest, is extremely important. In cases where sample sizes become small, it is important to determine whether adequate power exists to reject the null hypothesis.
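A quick power check along these lines can be done with the normal approximation for comparing two proportions. This is a rough sketch under illustrative assumptions (invented event rates and group sizes); real studies would use validated sample-size software.

```python
import math

# Approximate power of a two-sided, two-proportion z-test.
# Event rates and sample sizes used with it are illustrative assumptions.

def norm_cdf(z):
    """Standard normal cumulative distribution function via math.erf."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def power_two_proportions(p1, p2, n_per_group, alpha=0.05):
    p_bar = (p1 + p2) / 2
    se0 = math.sqrt(2 * p_bar * (1 - p_bar) / n_per_group)              # SE under H0
    se1 = math.sqrt(p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group)
    z_alpha = 1.959964  # two-sided 5% critical value
    return norm_cdf((abs(p1 - p2) - z_alpha * se0) / se1)
```

For instance, detecting a drop in an event rate from 10 percent to 5 percent with 500 patients per group yields power in the neighborhood of 0.85, while rarer outcomes or smaller exposure groups quickly push power below conventional thresholds.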

Confounding is a frequent challenge for observational studies, and a variety of analytical techniques can be employed to account for this problem. When a characteristic correlates with both the exposure of interest and the outcome of interest, it is important to account for the relationship. For example, age is often related to mortality and may also be related to use of a given process. In a sufficiently large clinical trial, age generally is balanced between those with and without the exposure or intervention. However, in an observational study, the confounding factor of age needs to be addressed through risk adjustment. Most studies will use regression models to account for observed confounders and adjust outcome comparisons. Others may use matching or stratification techniques to adjust for imbalances in important characteristics associated with the outcome. Finally, another increasingly common approach is the use of propensity scores, which reduce a set of confounders into a single balancing score that can be used to compare outcomes across groups.
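To illustrate the matching step, the sketch below performs greedy nearest-neighbor matching on precomputed propensity scores. In practice the scores would come from a model of treatment on confounders (commonly logistic regression); here they are hypothetical inputs, and the caliper and greedy ordering are conventional but arbitrary choices.

```python
# Illustrative greedy nearest-neighbor matching on precomputed propensity
# scores. Scores, ids, and the caliper are hypothetical.

def match_on_propensity(treated, controls, caliper=0.05):
    """treated/controls: dicts of patient id -> propensity score.
    Returns (treated_id, control_id) pairs whose scores differ by <= caliper."""
    available = dict(controls)
    pairs = []
    # match highest-score treated patients first (a common greedy heuristic,
    # since they are hardest to match)
    for t_id, t_score in sorted(treated.items(), key=lambda kv: -kv[1]):
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        if abs(available[c_id] - t_score) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]  # match without replacement
    return pairs
```

Outcomes are then compared within the matched pairs, with treated patients who find no control inside the caliper excluded from the comparison.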

As QI registries have evolved, defining eligibility for a process measure has become an important attribute. The denominator of patients eligible for a process measure should be carefully defined based on clinical criteria, with patients who have a contraindication to the process excluded. The definition of eligibility for a process measure is critical for accurate profiling of hospitals and health care providers. Without such careful, clear definitions, it would be challenging to benchmark sites by performance.
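The numerator/denominator logic can be stated compactly. The field names in this sketch (`eligible_dx`, `contraindicated`, `received_process`) are hypothetical placeholders for a registry's actual clinical criteria.

```python
# Minimal sketch of a process measure with clinically defined exclusions.
# Field names are hypothetical.

def process_measure_rate(patients):
    """Denominator: eligible patients without a contraindication.
    Numerator: those in the denominator who received the process."""
    denominator = [p for p in patients
                   if p["eligible_dx"] and not p["contraindicated"]]
    numerator = [p for p in denominator if p["received_process"]]
    return len(numerator), len(denominator)
```

Excluding contraindicated patients from the denominator is what keeps the measure fair: a site is not penalized for correctly withholding a process from a patient who should not receive it.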

With any registry or research study, data completeness needs to be considered when assessing the quality of the study. Reasons for missing data vary depending on the study or data collection efforts. For many registries, data completeness depends on what is routinely available in the medical record. Missing data may be considered ignorable if the characteristics associated with the missingness are already observable and therefore included in the analysis. Other missing data may not be ignorable, either because of their importance or because the missingness cannot be explained by other characteristics. In these cases, methods for addressing the missingness need to be considered. Options for handling missing data include discarding incomplete records, using only the data conveniently available, or imputing values with either simple methods (e.g., the mean) or multiple imputation.
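The two simplest options mentioned above, complete-case analysis and single mean imputation, can be sketched as follows. This is illustrative only; mean imputation understates variability, and multiple imputation (not shown) is generally preferred when missingness is nontrivial.

```python
import statistics

# Illustrative handling of a single numeric variable with missing values,
# where None marks a missing value.

def complete_case(values):
    """Discard missing observations (complete-case analysis)."""
    return [v for v in values if v is not None]

def mean_impute(values):
    """Replace each missing value with the mean of the observed values."""
    observed = complete_case(values)
    fill = statistics.mean(observed)
    return [fill if v is None else v for v in values]
```

Complete-case analysis shrinks the sample and can bias results when missingness is related to patient characteristics; mean imputation preserves the sample size but artificially reduces variance, which is one reason multiple imputation methods were developed.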

9. Reporting to Providers and the Public

An important component of quality improvement registries is the reporting of information to participants, and, in some cases, to the public. The relatively recent origin of clinical data registries was directly related to early public reporting initiatives by the Federal Government. Shortly after the 1986 publication of unadjusted mortality rates by the Health Care Financing Administration (HCFA), the predecessor of the Centers for Medicare & Medicaid Services, a number of states (e.g., the New York Cardiac Surgery Reporting System),22, 23 regions (e.g., Northern New England Cardiovascular Disease Study Group, or NNE),24, 25 government agencies (e.g., the Veterans Administration),26-28 and professional organizations (e.g., Society of Thoracic Surgeons)29-31 developed clinical data registries. Many of these focused on cardiac surgery. Cardiac surgery's index procedure, coronary artery bypass grafting (CABG), is the most frequently performed of all major operations; it is expensive; and it has well-defined adverse endpoints.

Registry developers recognized that the HCFA initiative had ushered in a new era of health care transparency and accountability. However, its methodology did not accurately characterize provider performance because it used claims data and failed to adjust for preoperative patient severity.32 Clinical registries, and the risk-adjusted analyses derived from them, were designed to address these deficiencies. States such as New York, Pennsylvania, New Jersey, California, and Massachusetts developed public report cards for consumers, while professional organizations and regional collaborations used registry data to confidentially feed results back to providers and to develop evidence-based best practice initiatives.33, 34

The impact of public reporting on health care quality remains uncertain. One randomized trial demonstrated that heart attack survival improved with public reporting,35 and there is evidence that low-performing hospitals are more likely to initiate quality improvement initiatives in a public reporting environment.36 However, a comprehensive review37 found generally weak evidence for the association between public reporting and quality improvement, with the possible exception of cardiac surgery, where results improved significantly after the initial publication of report cards in New York two decades ago.23, 38, 39 Some studies have questioned whether this improvement was the direct result of public reporting, as contiguous areas without public reporting also experienced declining mortality rates.40 Similar improvements have been achieved with completely confidential feedback or regional collaboration in northern New England41 and in Ontario.42 Thus, there appear to be many effective ways to improve health care quality—public reporting, confidential provider feedback, professional collaborations, state regulatory oversight—but the common denominator among them is a formal system for collecting and analyzing accurate, credible data,43 such as registries provide.

Public reporting should theoretically affect consumer choice of providers and redirect market share to higher performers. However, empirical data failed to demonstrate this following the HCFA hospital mortality rate publications,44 and CABG report cards had no substantial effect on referral patterns or market share of high- and low-performing hospitals in New York45, 46 or Pennsylvania.47, 48 Studies suggest numerous explanations for these findings, including lack of consumer awareness of and access to report cards; the multiplicity of report cards; difficulty in interpreting performance reports; credibility concerns; small differences among providers; lack of “newsworthiness”; the difficulty of using report cards for urgent or emergent situations; and the finite ability of highly ranked providers to accept increased demand.49-51 Professor Judith Hibbard and colleagues have suggested report card formats that enhance the ability of consumers to interpret report cards accurately, including visual aids (e.g., star ratings) that synthesize complex information into easily understandable signals.52, 53 A recent Kaiser Family Foundation survey54 suggests that, particularly among more educated patients, the use of objective ratings to choose providers has steadily increased over the past decade, and health reform is likely to accelerate this trend.

The potential benefits of public reporting must be weighed against the unintended negative consequences, such as “gaming” of the reporting system.55, 56 The most concerning negative consequence is risk aversion, the reluctance of physicians and surgeons to accept high-risk patients because of their anticipated negative effect on their report card ratings. Because these highest risk patients may derive the greatest benefit from aggressive intervention, risk aversion may produce a net decrement in public health and a net increase in long-term costs because the best treatments were not initially used.57-59 Risk aversion unquestionably exists, but its extent and overall population impact are difficult to quantify. CABG risk aversion may have occurred in New York60, 61 and Pennsylvania,48 but studies in California62 and England63 have not demonstrated similar findings. Numerous studies document probable risk aversion in percutaneous coronary interventions.64-66 Possible approaches to mitigate risk aversion include demonstrating to providers the adequacy of risk adjustment and modifying those models when appropriate; excluding highest risk patients from reporting; separate reporting of highest risk patients; and careful clinical review of patients turned down for interventions.

Irrespective of its end results, many believe that public reporting is a fundamental ethical obligation of physicians.67, 68 It addresses the patient's right of autonomy or self-determination in decisionmaking. Whether or not they choose to exercise this right, patients making a choice about treatments should be fully informed, which arguably includes their right to know the comparative performance of potential providers.

When a decision has been made to publicly report outcomes, such measures must meet strict criteria. Professional organizations have emphasized the need to use high-quality, audited clinical data whenever possible, and to employ the most appropriate statistical methodologies.69, 70 Professional society guidelines provide recommendations of varying strength and levels of evidence, whereas performance measures should be a select subset of these guidelines that have the highest level of evidence and strongest class of recommendation (e.g., ACC/AHA [American College of Cardiology/American Heart Association] class 1 [recommended] or 3 [not indicated or harmful], level A evidence). National Quality Forum (NQF) requirements for performance measure endorsement have recently been updated. In addition to its four basic requirements of Importance, Scientific Acceptability, Usability, and Feasibility, NQF emphasizes the need for robust, systematic evaluation of the evidence base and comprehensive testing of reliability and validity.8, 71, 72

The unit of analysis in public reporting may be controversial. Many states report results for some procedures at the physician or surgeon level, but in many health care areas sample sizes and the small amount of variation attributable to the physician make it difficult to reliably discriminate performance.73-75 Compiling data from a variety of process and outcome endpoints may help to mitigate sample size issues, as may aggregation of results over multiple years.

Report cards at the individual physician level may be more likely to cause risk aversion compared with group- or hospital-level reports. Changes in health care delivery models must also be considered. As patient care is increasingly provided by teams of providers that may even cross traditional specialty boundaries, individual physician reporting may become less relevant and less feasible. Reimbursement will increasingly be based on the overall care provided to a patient or population, and leaders will have a direct financial incentive to assess the performance of individual physicians in such care groups (e.g., Accountable Care Organizations or ACOs), whether or not such results are publicly reported.

10. Use of QI Registry Data for Research Studies

An emerging trend is the use of data from QI registries to support additional studies. QI registries may collect large volumes of clinical data that can be used to support research studies. Studies using data from QI registries generally are developed in one of two ways.

First, the registry may be modified to collect additional data for a substudy. For example, a registry may collect in-hospital data on patients admitted to the hospital for a specific procedure. To study long-term outcomes of the procedure, the registry protocol may be modified to collect followup data for a subset of patients. An example of this approach was the OPTIMIZE-HF registry, which collected in-hospital data on patients admitted with heart failure. A subset of patients provided consent to be contacted after 6 months to collect additional data.3 QI registries can also be modified to support other types of studies, such as studies where a subset of participating sites are randomized (cluster randomization) or a subset of patients are randomized (experimental trial). When modifying the registry protocol to support a substudy, the impact on the primary purpose of the registry must be considered, as well as any additional ethical or regulatory requirements introduced by the new data collection effort.

A second approach to using QI registries to support additional studies is to use the registry data, either alone or linked to another data set. For example, a registry that collects in-hospital data may be linked to a claims database to obtain information on long-term outcomes or to examine other questions.76 In these cases, the technical, legal, and ethical considerations related to linking registry data sets, discussed in Chapter 16, apply. Regardless of which approach is used, researchers using data from a QI registry for additional research studies must understand how the data are collected and how patients are enrolled in the primary registry in order to draw appropriate conclusions from the new study.

11. Limitations of Current QI Registries

To summarize some of the key points above, the ideal QI registry collects uniform data on risk factors, treatments, and outcomes at key points for a particular disease or treatment. It obtains the data from multiple sources and across care settings, leveraging existing health information technology systems through interoperability and other data sets (from registries, claims, national indices, etc.) through linkage. Such a registry uses standardized methods to ensure that the patients sampled are representative, that the data are of high quality, and that the data are comparable across providers. Such registries provide feedback at the patient and population levels, and, in addition to facilitating quality improvement, they perform quality reporting to third parties. Importantly, they maintain high levels of participation by providers and patients and have a long-term, sustainable business model.

Clearly, most QI registries do not achieve this ideal. The term “QI registries” is currently used to refer to a broad spectrum of registries, from local or regional registries aimed at improving care for a specific patient population to large, national registries with sophisticated benchmarking data. Many current QI registries focus on isolated conditions or procedures (e.g., the ACC NCDR Cath/PCI Registry77; the STS Adult Cardiac Surgery Database78). Health reform will require the acquisition of data about the overall, comprehensive care of conditions such as coronary artery disease, or of populations.6 This may be facilitated by linkages among related data registries, which might include outpatient preventive care, inpatient acute care and procedures, rehabilitation, and chronic disease management.

Current QI registries also have temporal limitations. They characteristically collect data only in-hospital or for 30 days after admission or a procedure. However, patients, payers, and regulators are also interested in longer term, longitudinal outcomes such as survival, readmission, reintervention, and cumulative resource use. Such information is useful for shared decisionmaking and for comparative effectiveness research. By linking together robust clinical data registries and administrative databases such as MEDPAR or the Social Security Death Master File79, 80 that provide long-term data, many of these current limitations of clinical registries would be mitigated.

In order for such linkages to be implemented, a number of challenges would need to be overcome. These include a lack of standardized data sets; difficulties collecting data across care settings; inability to leverage existing health information technology systems to reduce duplication of clinician effort; inability to link to other data sources that might reduce data collection burden or enrich outcomes; significant variation in the quality of methods used to collect and report data; and quite different levels of participation and business models. Even registries in related conditions may not be fully compatible.

Potential solutions to such issues have been identified.81 These include, for example, condition-specific and cross-condition efforts to standardize common or core data element specifications, data quality and audit standards, and methodological considerations such as risk adjustment. Collecting data across care settings will be improved by solving the patient identity management issues (discussed in Chapter 17), which will require clarification and perhaps revision of HIPAA and Common Rule regulations. Overcoming interoperability issues through the promulgation of open standards (e.g., Healthcare Information Technology Standards Panel TP-50) (as described in Chapter 15) could have dramatic impact if adopted widely by EHR systems and registries.

Significant hospital data collection costs are additional limitations of clinical registries. Some data elements such as laboratory values may be automatically extracted from EHRs, but detailed clinical data may still require manual extraction. Existing national registries must develop sustainable business models, and there must be incentives and assistance for the development of new registries where none currently exist.

12. Summary

QI registries have documented success at improving quality of care at the local, regional, and national levels. While QI registries differ in their area of focus, choice of measures, and level of reporting, their consistent features are the use of systematic data collection and other tools to improve quality of care. QI registries also differ from other types of registries in many ways, such as in their use of provider “champions,” the inclusion of actionable measures, the frequency of major changes to the registry data collection, the motivations for participation, and the use of blinded or unblinded quality reports to providers and, in some cases, to the public. Because of these differences, QI registries must address unique challenges, particularly in the planning, design, and operations phases.

Case Examples for Chapter 22

Case Example 53Using recognition measures to develop a data set

DescriptionGet With The Guidelines® is the flagship program for in-hospital quality improvement of the American Heart Association and American Stroke Association. The Get With The Guidelines—Stroke program supports point-of-care data collection and real-time reports aligned with the latest evidence-based guidelines. The reports include achievement, quality, reporting, and descriptive measures that allow hospitals to trend their performance related to clinical and process outcomes.
SponsorAmerican Heart Association/American Stroke Association
Year Started2003
Year EndedOngoing
No. of Sites1,664
No. of Patients2,063,439

Challenge

The primary purpose of the Get With The Guidelines—Stroke program is to improve the quality of in-hospital care for stroke patients. The program uses the PDSA (plan, do, study, act) quality improvement cycle, in which hospitals plan quality improvement initiatives, implement them, study the results, and then make adjustments to the initiatives. To help hospitals implement this cycle, the program uses a registry to collect data on stroke patients and generate real-time reports showing compliance with a set of standardized stroke recognition and quality measures. The reports also include benchmarking capabilities, enabling hospitals to compare themselves with other hospitals at a national and regional level, as well as with similar hospitals based on size or type of institution.

In developing the registry, the team faced the challenge of creating a data set that would be comprehensive enough to satisfy evidence-based medicine but manageable by hospitals participating in the program. The program does not provide reimbursements to hospitals entering data, so it needed to keep the data set as small as possible while still maintaining the ability to measure quality improvement.

Proposed Solution

The team began developing the data set by working backward from the recognition measures. Recognition measures, based on the sponsor's guidelines for stroke care, contain detailed inclusion and exclusion criteria to determine the measure population, and they group patients into denominator and numerator groups. Using these criteria, the team developed a data set that framed the questions necessary to determine compliance with each of the guidelines. The team then added questions to gather information on the patient population characteristics. Since the inception of the program, data elements and measure reports have been added or updated to maintain alignment with the current stroke guidelines. Over time, certain measures have also been promoted to or demoted from the higher tiers of recognition measures, depending on current science and changes in quality improvement focus.

Results

By using this approach, the registry team was able to create the necessary data set for measuring compliance with stroke guidelines. The program was launched in 2003 and now has 1,664 hospitals and 2,063,439 stroke patient records. The data from the program have been used in several abstracts and have resulted in 38 manuscripts since 2007.

Key Point

Registry teams should focus on the outcomes or endpoints of interest when selecting data elements. In cases where compliance with guidelines or quality measures is the outcome of interest, teams can work backward from the guidelines or measures to develop the minimum necessary data set for their registry.

For More Information

http://www.heart.org

Schwamm L, Fonarow G, Reeves M, et al. Get With the Guidelines—Stroke is associated with sustained improvement in care for patients hospitalized with acute stroke or transient ischemic attack. Circulation. 2009;119:107–11. [PubMed: 19075103].

Schwamm LH, LaBresh KA, Albright D, et al. Does Get With The Guidelines improve secondary prevention in patients hospitalized with ischemic stroke or TIA? Stroke. 2005;36(2):416–P84.

LaBresh KA, Schwamm LH, Pan W, et al. Healthcare disparities in acute intervention for patients hospitalized with ischemic stroke or TIA in Get With The Guidelines—Stroke. Stroke. 2005;36(2):416–P275.

Case Example 54Managing care and quality improvement for chronic diseases

DescriptionThe Tri State Child Health Services Web-based asthma registry is part of an asthma improvement collaborative aimed at improving evidence-based care and outcomes while strengthening improvement capacity of primary care practices.
SponsorTri State Child Health Services, Inc., a pediatric physician-hospital organization (PHO) affiliated with Cincinnati Children's Hospital Medical Center
Year Started2003
Year EndedOngoing
No. of Sites39 community-based pediatric practices
No. of Patients12,365 children with asthma

Challenge

Asthma, a highly prevalent chronic disease managed in the primary care setting, has proven to be amenable to quality improvement initiatives.

This collaborative effort between the PHO and Cincinnati Children's Hospital Medical Center was initiated in 2003 with goals of improving evidence-based care, reducing adverse outcomes such as asthma-related emergency room visits and missed schooldays, and strengthening the quality of knowledge and capacity within primary care practices. As the asthma initiative spans 39 primary care practices and encompasses approximately 35 percent of the region's pediatric asthma population, the PHO needed to implement strategies for improving network-level, population-based process and outcome measures.

Proposed Solution

To address the project's focus on improving process and outcome measures across a large network, the asthma collaborative decided to implement a centralized Web-based asthma registry. Key measures of effective control and management of asthma (based on the National Heart, Lung, and Blood Institute's guidelines) are captured via a self-reported clinical assessment form and decision support tool completed by parents and physicians at the point of care. The questions address missed schooldays and workdays, parent's confidence in managing asthma, health resource utilization (e.g., emergency room visits), parent and physician rating of disease control, and other topics. In addition, the clinical assessment form facilitates interactive dialogue between the physician and family during office visits.

The Web-based registry allows real-time reporting at the patient, practice, and network level. Reporting is transparent, with comparative practice data that support the identification of best practices and shared learning. In addition, reporting functionalities support tracking of longitudinal data and the identification of high-risk patients. The Web-based registry also provides access to real-time utilization reports with emergency room visit and admission dates. All reports are available to participating practices and physicians at any time.

Results

The registry provides essential data for identifying best practices and tracking improvement. The network has documented improvement against standard process and outcome measures.

Key Point

Registries can be useful tools for quality improvement initiatives in chronic disease areas. By collecting standardized data and sharing the data in patient-, practice-, and network-level reports, registries can track adherence to guidelines and evidence-based practices, and provide information to support ongoing quality improvement.

For More Information

Mandel KE, Kotagal UR. Pay for performance alone cannot drive quality. Arch Pediatr Adolesc Med. 2007;161(7):650–5. [PubMed: 17606827].

Case Example 55Use of reporting tools to promote quality improvement

DescriptionThe Quality Oncology Practice Initiative (QOPI®) is a quality assessment and improvement program for oncology practices.
SponsorAmerican Society of Clinical Oncology (ASCO)
Year StartedPilot program started in 2002; registry launched for full ASCO membership in 2006
Year EndedOngoing
No. of Sites801 registered practices
No. of PatientsApproximately 50,000 patient charts per year

Challenge

The 1999 Institute of Medicine report “Ensuring Quality Cancer Care” identified the opportunity for quality improvement initiatives in oncology. The report identified a clear path to nationwide impact, beginning with individual practices, and set forth recommendations, including a call to “measure and monitor the quality of care using a core set of quality measures.” To act on this recommendation, a methodology and a registry were needed.

Proposed Solution

In 2002, ASCO, in conjunction with a community of oncologists, developed QOPI, a voluntary pilot program to allow participants to assess and improve cancer care within their own practices. The oncologist-led program created quality measures, developed methodology for data collection and analysis, and tested the feasibility of the pilot program before offering access to the registry to all Society members in 2006. The registry provides comparison data on more than 100 quality metrics, which participants can use to benchmark their performance against that of their peers at both the practice and practitioner levels. A team of oncologists, researchers, and staff select, adapt, and develop metrics based on clinical guidelines and expert consensus opinion. Practices and institutions register and manually submit abstracted patient chart data through a Web-based interface during twice-yearly data collection periods. Once a data collection period closes, the data are analyzed, and practices can view reports showing their performance and scores on the quality measures for that round.

Results

Approximately 600 practices, representing nearly 15 percent of U.S. practitioners, have now contributed data to the registry. Changes in performance rates have been compared across metrics in the following domains: core, end of life, symptom management, breast cancer, colorectal cancer, and non-Hodgkin lymphoma. For example, in a 2010 analysis of registry data, practices that had completed multiple data collection cycles performed better on pain care at the end of life (63%) than practices participating in the registry for the first time (47%). Practices that participated in multiple data collection cycles also documented discussions of hospice and palliative care at higher rates, and had higher rates of hospice enrollment, than those completing just a single cycle.

Key Point

Access to performance reports can inform physician behavior or be used to demonstrate the need for process improvements within a practice. A registry can provide a systematic approach to data collection to support the ongoing use of self-assessment and benchmark performance reports to facilitate quality improvement.

For More Information

http://qopi.asco.org/

Blayney DW, Severson J, Martin CJ, et al. Michigan oncology practices showed varying adherence rates to practice guidelines, but quality interventions improved care. Health Aff (Millwood). 2012 Apr;31(4):718–27. [PubMed: 22492888].

Campion FX, Larson LR, Kadlubek PJ, et al. Advancing performance measurement in oncology: Quality Oncology Practice Initiative participation and quality outcomes. J Oncol Pract. 2011 May;7(3 Suppl):31s–35s. [PMC free article: PMC3092462] [PubMed: 21886517].

Jacobson JO, Neuss MN, McNiff KK, et al. Improvement in oncology practice performance through voluntary participation in the Quality Oncology Practice Initiative. J Clin Oncol. 2008;26:1893–8. [PubMed: 18398155].

Neuss MN, Gilmore TR, Kadlubek PJ. Tools for measuring and improving the quality of oncology care: The Quality Oncology Practice Initiative (QOPI®) and the QOPI Certification Program. Oncology (Williston Park). 2011 Sep;25(10):880, 883, 886–7. [PubMed: 22010382].

Case Example 56Using registries to drive quality improvement in chronic conditions

DescriptionThe National Parkinson Foundation Quality Improvement Initiative is a registry-based quality care program that captures longitudinal data on clinical interventions and patient-reported outcomes to identify, implement, and disseminate best practices for the treatment and management of Parkinson's disease.
SponsorNational Parkinson Foundation
Year Started2009
Year EndedOngoing
No. of Sites20 centers across United States, Canada, and internationally
No. of Patients5,000 patients as of May 2012; 20,000 targeted enrollment

Challenge

Parkinson's disease (PD), an incurable, progressive neurodegenerative disorder associated with a high burden of disease, presents unique challenges for quality improvement initiatives. Treatments for PD generally focus on reducing patients' symptoms and improving quality of life. Unlike other chronic conditions where improvement can be measured in terms of well-defined outcomes such as survival or cardiovascular events, quality improvement in PD can best be measured using patient-based outcomes. However, identifying appropriate patient-based outcomes for this disease can be a challenge. In addition, variability exists in the clinical diagnosis, management, and treatment of PD. Studies have shown that PD patients treated by a neurologist experience better outcomes, such as a decrease in hip fractures or nursing home placement. However, the specific management and treatment strategies used by these specialists have not been studied or well-described. The lack of evidence-based treatment standards warranted a data-driven approach to identify and understand best practices that improve the quality of care and quality of life for PD patients.

Proposed Solution

In 2009, the National Parkinson Foundation launched an initiative to improve the quality of care in PD. To support an evidence-based approach, the foundation initiated a PD registry to capture clinical interventions and patient-reported outcomes over time from multiple centers across the United States, Canada, and internationally. The initiative, led by a steering committee of movement disorders neurologists, is a unique effort in PD research because of its ability to collect long-term, longitudinal data from multiple centers and its focus on patient-based outcomes data, rather than process of care measures. The aims of the registry are to accelerate clinical discovery, promote collaborative science, and drive advancements in clinical practice toward patient-centered care.

Results

As of May 2012, the registry included more than 5,000 patients from 20 centers; second- and third-year data were available for 3,000 and 500 patients, respectively. Patients' encounter-based data, including demographics, comorbidities, hospitalizations, falls, medications, treatments, and outcomes, are collected annually on brief data collection forms. The registry database includes a diverse population of PD patients, and analyses have confirmed variation in practice patterns across centers. The registry data have yielded important findings, including enhanced understanding of factors and predictors of patients' quality of life and caregiver burden. Additional cross-sectional and longitudinal analyses are planned using physician care and patient outcome data to describe practice patterns across the registry, identify and improve understanding of best practices, and support the development of guidelines.

Many neurologists were initially doubtful about the value of a registry in this disease area. For the most part, their past experience was with mortality-based registries built around interventions or fatal illnesses; these failed to model a disease with complex, heterogeneous symptomatology, whose pathology could not be directly measured. Increasingly, providers have recognized the value of the statistical power and nuanced insight that can be drawn from this large and detailed registry of expert care.

Key Point

Registry-based quality improvement programs can be useful in many clinical settings, from in-hospital care (e.g., heart failure) to chronic progressive diseases (e.g., PD). The design of the registry and the quality improvement initiative must reflect the nature of the disease and the state of existing evidence. For chronic, progressive diseases, registries can be useful tools for identifying, developing, and disseminating guidelines for best practices to improve quality of care.

For More Information

http://www.parkinson.org/Improving-Care/Research/Quality-Improvement-Initiative

Okun MS, Siderowf A, Nutt JG, et al. Piloting the NPF data-driven quality improvement initiative. Parkinsonism and Related Disorders. 2010;16:517–21. [PubMed: 20609611].

Case Example 57Clarifying the Federal regulatory requirements for quality improvement registries

DescriptionThe National Neurosurgery Quality and Outcomes Database (N2QOD) is a prospective, longitudinal registry designed to measure and improve neurosurgical and spine surgical care as it exists in the real-world health care setting.
SponsorAmerican Association of Neurological Surgeons (AANS)
Year Started2011
Year EndedOngoing
No. of Sites30 U.S. neurosurgical practice groups expected in the first year
No. of Patients7,000 patients expected in the first year

Challenge

N2QOD was formed with the aim of measuring the quality of real-world neurosurgical and spine surgery care, and the registry defined that “quality” as safety and effectiveness. Given this definition, a patient outcome-centered approach to data collection was necessary. This patient-centeredness is aligned with the priorities of groups such as the Patient-Centered Outcomes Research Institute and the Agency for Healthcare Research and Quality, and reflects a wider trend in quality improvement (QI) science, moving away from processes and process-based measures to patient outcomes and outcome measures.

This move towards patient outcomes necessitates a shift in the way QI registries interact with patients. It also presents challenges for institutional review boards (IRBs) reviewing these projects. IRBs can determine that these projects are either “health care operations” or “human subjects research,” as defined by the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and the Common Rule. If an IRB determines that a registry constitutes “health care operations” (i.e., data collection used for issues such as clinical care, administrative use, or quality assessment), then neither IRB approval nor informed consent is required. If an IRB determines that a registry constitutes “human subjects research,” the registry falls under IRB purview, and the IRB may determine that informed consent is required of registry participants, or it may grant a waiver of informed consent.

Whether an IRB determines a registry to be “health care operations” or “human subjects research” can have profound operational and analytic impacts on the registry. In particular, QI registries designated by an IRB as research and required to collect informed consent from participants can experience a reduction in enrollment numbers, and are exposed to the risk of selection bias being introduced into the registry population.

The registry was introduced to neurosurgical practice sites in January 2011 and was initially reviewed by 11 IRBs over a 4-month period. Six of those evaluations classified the registry as quality improvement; the remaining five IRBs classified the same project description as human subjects research and required full IRB oversight and informed consent.

Proposed Solution

Given this mixed interpretation of Federal regulations by local IRBs, the AANS approached the Department of Health and Human Services' (HHS) Office for Human Research Protections (OHRP) in May 2011 to request a formal review of the registry. AANS and OHRP engaged in regular communication over the course of several months and convened a multistakeholder meeting at the White House that included representatives from OHRP, the Office of the President, the Centers for Medicare and Medicaid Services, the U.S. Food and Drug Administration, the Department of Veterans Affairs, the HHS Office for Civil Rights, and three clinical specialty societies, including neurosurgery.

Results

In August 2011, OHRP clarified that, based on these communications and an examination of the registry, the sites participating in N2QOD were not engaged in human subjects research, and therefore the regulations requiring IRB oversight did not apply. This communication from OHRP is now provided to sites enrolling in N2QOD to support their IRB review process.

At the time of this writing, 28 IRBs have formally reviewed or re-reviewed the registry. Of these, 27 have classified it as health care operations and waived the requirement for IRB review; the remaining IRB has classified the same project description as research and has issued a waiver of consent for the project. Approximately 30 additional sites are still in various stages of institutional review. In summary, the OHRP opinion strongly influenced local IRB analyses of the registry.

In July 2011, OHRP released an Advance Notice of Proposed Rulemaking (ANPRM) for revisions to the Common Rule. The proposed revisions are intended to strengthen protections for human research subjects while also reducing burdens, delays, and ambiguity for investigators and research subjects.

Key Point

QI registries that are focused on patient outcomes should be aware of the complexities around varied interpretation by multiple IRBs and should plan sufficient time and resources to address these complexities.

For More Information

http://www.neuropoint.org

Department of Health and Human Services; Office for Human Research Protections. ANPRM for Revision to the Common Rule. [June 20, 2012]. http://www.hhs.gov/ohrp/humansubjects/anprm2011page.html.

Neuropoint Alliance, Inc. The National Neurosurgery Quality and Outcomes Database (N2QOD): A Prospective Registry for Quality Reporting. Background Project Description, Application of Relevant Federal Regulations and Project Implementation. http://www.neuropoint.org/pdf/N2QOD%20Project%20Description%20V5%20(25APR2012).pdf.

References for Chapter 22

1.
LaBresh KA, Gliklich R, Liljestrand J, et al. Using “get with the guidelines” to improve cardiovascular secondary prevention. Jt Comm J Qual Saf. 2003 Oct;29(10):539–50. [PubMed: 14567263]
2.
Raval MV, Bentrem DJ, Eskandari MK, et al. The role of Surgical Champions in the American College of Surgeons National Surgical Quality Improvement Program—a national survey. J Surg Res. 2011 Mar;166(1):e15–25. [PubMed: 21176914]
3.
Fonarow GC, Abraham WT, Albert NM, et al. Association between performance measures and clinical outcomes for patients hospitalized with heart failure. JAMA. 2007 Jan 3;297(1):61–70. [PubMed: 17200476]
4.
Lee JS, Primack BA, Mor MK, et al. Processes of care and outcomes for community-acquired pneumonia. Am J Med. 2011 Dec;124(12):1175 e9–17. [PMC free article: PMC3578284] [PubMed: 22000624]
5.
Morse RB, Hall M, Fieldston ES, et al. Hospital-level compliance with asthma care quality measures at children's hospitals and subsequent asthma-related outcomes. JAMA. 2011 Oct 5;306(13):1454–60. [PubMed: 21972307]
6.
Porter ME. What is value in health care? N Engl J Med. 2010 Dec 23;363(26):2477–81. [PubMed: 21142528]
7.
Institute of Medicine. Performance Measurement: Accelerating Improvement. Committee on Redesigning Health Insurance Performance Measures, Payment, and Performance Improvement Programs; [August 20, 2012]. http://iom.edu/Reports/2005/Performance-Measurement-Accelerating-Improvement.aspx.
8.
National Quality Forum. What NQF Endorsement Means. [August 20, 2012]. http://www.qualityforum.org/Measuring_Performance/ABCs/What_NQF_Endorsement_Means.aspx.
9.
10.
Measure Applications Partnership. National Quality Forum; [August 20, 2012]. http://www.qualityforum.org/Setting_Priorities/Partnership/Measure_Applications_Partnership.aspx.
11.
Casarett D, Karlawish JH, Sugarman J. Determining when quality improvement initiatives should be considered research: proposed criteria and potential implications. JAMA. 2000 May 3;283(17):2275–80. [PubMed: 10807388]
12.
Dokholyan RS, Muhlbaier LH, Falletta JM, et al. Regulatory and ethical considerations for linking clinical and administrative databases. Am Heart J. 2009 Jun;157(6):971–82. [PubMed: 19464406]
13.
Lynn J, Baily MA, Bottrell M, et al. The ethics of using quality improvement methods in health care. Ann Intern Med. 2007 May 1;146(9):666–73. [PubMed: 17438310]
14.
Nerenz DR. Ethical issues in using data from quality management programs. Eur Spine J. 2009 Aug;18 Suppl 3:321–30. [PMC free article: PMC2899322] [PubMed: 19365642]
15.
Johnson N, Vermeulen L, Smith KM. A survey of academic medical centers to distinguish between quality improvement and research activities. Qual Manag Health Care. 2006 Oct-Dec;15(4):215–20. [PubMed: 17047495]
16.
Tu JV, Willison DJ, Silver FL, et al. Impracticability of informed consent in the Registry of the Canadian Stroke Network. N Engl J Med. 2004 Apr 1;350(14):1414–21. [PubMed: 15070791]
17.
U.S. Department of Health and Human Services; Office for Human Research Protections. Quality Improvement Activities - FAQs. [August 15, 2012]. http://answers.hhs.gov/ohrp/categories/1569.
18.
U.S. Department of Health and Human Services; Office for Human Research Protections. How does HHS view quality improvement activities in relation to the regulations for human research subject protections? Quality Improvement Activities - FAQs. [December 20, 2013]. http://answers.hhs.gov/ohrp/questions/7281.
19.
U.S. Department of Health and Human Services; Office for Human Research Protections. Do the HHS regulations for the protection of human subjects in research (45 CFR part 46) apply to quality improvement activities conducted by one or more institutions whose purposes are limited to: (a) implementing a practice to improve the quality of patient care, and (b) collecting patient or provider data regarding the implementation of the practice for clinical, practical, or administrative purposes? Quality Improvement Activities - FAQs. [December 20, 2013]. http://answers.hhs.gov/ohrp/questions/7282.
20.
U.S. Department of Health and Human Services; Office for Human Research Protections. Can I analyze data that are not individually identifiable, such as medication databases stripped of individual patient identifiers, for research purposes without having to apply the HHS protection of human subjects regulations? Quality Improvement Activities – FAQs. [December 20, 2013]. http://answers.hhs.gov/ohrp/questions/7284.
21.
Emanuel EJ, Menikoff J. Reforming the regulations governing research with human subjects. N Engl J Med. 2011 Sep 22;365(12):1145–50. [PubMed: 21787202]
22.
Hannan EL, Kilburn H Jr., O'Donnell JF, et al. Adult open heart surgery in New York State. An analysis of risk factors and hospital mortality rates. JAMA. 1990 Dec 5;264(21):2768–74. [PubMed: 2232064]
23.
Hannan EL, Kumar D, Racz M, et al. New York State's Cardiac Surgery Reporting System: four years later. Ann Thorac Surg. 1994 Dec;58(6):1852–7. [PubMed: 7979781]
24.
O'Connor GT, Plume SK, Olmstead EM, et al. A regional prospective study of in-hospital mortality associated with coronary artery bypass grafting. The Northern New England Cardiovascular Disease Study Group. JAMA. 1991 Aug 14;266(6):803–9. [PubMed: 1907669]
25.
O'Connor GT, Plume SK, Olmstead EM, et al. Multivariate prediction of in-hospital mortality associated with coronary artery bypass graft surgery. Northern New England Cardiovascular Disease Study Group. Circulation. 1992 Jun;85(6):2110–8. [PubMed: 1591830]
26.
Grover FL, Hammermeister KE, Shroyer AL. Quality initiatives and the power of the database: what they are and how they run. Ann Thorac Surg. 1995 Nov;60(5):1514–21. [PubMed: 8526678]
27.
Grover FL, Johnson RR, Marshall G, et al. Factors predictive of operative mortality among coronary artery bypass subsets. Ann Thorac Surg. 1993 Dec;56(6):1296–306. discussion 306-7. [PubMed: 8267428]
28.
Grover FL, Johnson RR, Shroyer AL, et al. The Veterans Affairs Continuous Improvement in Cardiac Surgery Study. Ann Thorac Surg. 1994 Dec;58(6):1845–51. [PubMed: 7979780]
29.
Edwards FH, Grover FL, Shroyer AL, et al. The Society of Thoracic Surgeons National Cardiac Surgery Database: current risk assessment. Ann Thorac Surg. 1997 Mar;63(3):903–8. [PubMed: 9066436]
30.
Edwards FH, Clark RE, Schwartz M. Coronary artery bypass grafting: the Society of Thoracic Surgeons National Database experience. Ann Thorac Surg. 1994 Jan;57(1):12–9. [PubMed: 8279877]
31.
Grover FL, Shroyer AL, Hammermeister K, et al. A decade's experience with quality improvement in cardiac surgery using the Veterans Affairs and Society of Thoracic Surgeons national databases. Ann Surg. 2001 Oct;234(4):464–72. discussion 72-4. [PMC free article: PMC1422070] [PubMed: 11573040]
32.
Blumberg MS. Comments on HCFA hospital death rate statistical outliers. Health Care Financing Administration. Health Serv Res. 1987 Feb;21(6):715–39. [PMC free article: PMC1068986] [PubMed: 3106265]
33.
O'Connor GT, Plume SK, Olmstead EM, et al. A regional intervention to improve the hospital mortality associated with coronary artery bypass graft surgery. The Northern New England Cardiovascular Disease Study Group. JAMA. 1996 Mar 20;275(11):841–6. [PubMed: 8596221]
34.
Ferguson TB Jr., Peterson ED, Coombs LP, et al. Use of continuous quality improvement to increase use of process measures in patients undergoing coronary artery bypass graft surgery: a randomized controlled trial. JAMA. 2003 Jul 2;290(1):49–56. [PubMed: 12837711]
35.
Tu JV, Donovan LR, Lee DS, et al. Effectiveness of public report cards for improving the quality of cardiac care: the EFFECT study: a randomized trial. JAMA. 2009 Dec 2;302(21):2330–7. [PubMed: 19923205]
36.
Hibbard JH, Stockard J, Tusler M. Does publicizing hospital performance stimulate quality improvement efforts? Health Aff (Millwood). 2003 Mar-Apr;22(2):84–94. [PubMed: 12674410]
37.
Fung CH, Lim YW, Mattke S, et al. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med. 2008 Jan 15;148(2):111–23. [PubMed: 18195336]
38.
Hannan EL, Siu AL, Kumar D, et al. The decline in coronary artery bypass graft surgery mortality in New York State. The role of surgeon volume. JAMA. 1995 Jan 18;273(3):209–13. [PubMed: 7807659]
39.
Hannan EL, Kilburn H Jr., Racz M, et al. Improving the outcomes of coronary artery bypass surgery in New York State. JAMA. 1994 Mar 9;271(10):761–6. [PubMed: 8114213]
40.
Ghali WA, Ash AS, Hall RE, et al. Statewide quality improvement initiatives and mortality after cardiac surgery. JAMA. 1997 Feb 5;277(5):379–82. [PubMed: 9010169]
41.
Peterson ED, DeLong ER, Jollis JG, et al. The effects of New York's bypass surgery provider profiling on access to care and patient outcomes in the elderly. J Am Coll Cardiol. 1998 Oct;32(4):993–9. [PubMed: 9768723]
42.
Guru V, Fremes SE, Naylor CD, et al. Public versus private institutional performance reporting: what is mandatory for quality improvement? Am Heart J. 2006 Sep;152(3):573–8. [PubMed: 16923433]
43.
Hannan EL, Sarrazin MS, Doran DR, et al. Provider profiling and quality improvement efforts in coronary artery bypass graft surgery: the effect on short-term mortality among Medicare beneficiaries. Med Care. 2003 Oct;41(10):1164–72. [PubMed: 14515112]
44.
Vladeck BC, Goodwin EJ, Myers LP, et al. Consumers and hospital use: the HCFA “death list” Health Aff (Millwood). 1988 Spring;7(1):122–5. [PubMed: 3360387]
45.
Hannan EL, Stone CC, Biddle TL, et al. Public release of cardiac surgery outcomes data in New York: what do New York state cardiologists think of it? Am Heart J. 1997 Jul;134(1):55–61. [PubMed: 9266783]
46.
Chassin MR. Achieving and sustaining improved quality: lessons from New York State and cardiac surgery. Health Aff (Millwood). 2002 Jul-Aug;21(4):40–51. [PubMed: 12117152]
47.
Schneider EC, Epstein AM. Use of public performance reports: a survey of patients undergoing cardiac surgery. JAMA. 1998 May 27;279(20):1638–42. [PubMed: 9613914]
48.
Schneider EC, Epstein AM. Influence of cardiac-surgery performance reports on referral practices and access to care. A survey of cardiovascular specialists. N Engl J Med. 1996 Jul 25;335(4):251–6. [PubMed: 8657242]
49.
Mukamel DB, Weimer DL, Mushlin AI. Interpreting market share changes as evidence for effectiveness of quality report cards. Med Care. 2007 Dec;45(12):1227–32. [PubMed: 18007175]
50.
Mukamel DB, Mushlin AI. The impact of quality report cards on choice of physicians, hospitals, and HMOs: a midcourse evaluation. Jt Comm J Qual Improv. 2001 Jan;27(1):20–7. [PubMed: 11147237]
51.
Romano PS, Zhou H. Do well-publicized risk-adjusted outcomes reports affect hospital volume? Med Care. 2004 Apr;42(4):367–77. [PubMed: 15076814]
52.
Hibbard JH, Peters E. Supporting informed consumer health care decisions: data presentation approaches that facilitate the use of information in choice. Annu Rev Public Health. 2003;24:413–33. [PubMed: 12428034]
53.
Hibbard JH, Peters E, Slovic P, et al. Making health care quality reports easier to use. Jt Comm J Qual Improv. 2001 Nov;27(11):591–604. [PubMed: 11708039]
54.
Henry J. Kaiser Family Foundation. 2008 Update on Consumers' Views of Patient Safety and Quality Information. Kaiser Family Foundation; 2008. [August 20, 2012]. http://search.kff.org/gsaresults/search?site=KFForgnopdfs&filter=0&output=xml_no_dtd&client=kff&sp=kff&getfields=*&q=7819&no_pdf=1.
55.
Shahian DM, Normand SL, Torchiana DF, et al. Cardiac surgery report cards: comprehensive review and statistical critique. Ann Thorac Surg. 2001 Dec;72(6):2155–68. [PubMed: 11789828]
56.
Green J, Wintfeld N. Report cards on cardiac surgeons. Assessing New York State's approach. N Engl J Med. 1995 May 4;332(18):1229–32. [PubMed: 7700321]
57.
Jones RH. In search of the optimal surgical mortality. Circulation. 1989 Jun;79(6 Pt 2):I132–6. [PubMed: 2785874]
58.
Lee TH, Torchiana DF, Lock JE. Is zero the ideal death rate? N Engl J Med. 2007 Jul 12;357(2):111–3. [PubMed: 17625122]
59. Dranove D, Kessler DA, McClellan M, et al. Is more information better? The effects of “report cards” on health care providers. Journal of Political Economy. 2003;111:555–88.
60. Omoigui NA, Miller DP, Brown KJ, et al. Outmigration for coronary bypass surgery in an era of public dissemination of clinical outcomes. Circulation. 1996 Jan 1;93(1):27–33. [PubMed: 8616936]
61. Burack JH, Impellizzeri P, Homel P, et al. Public reporting of surgical mortality: a survey of New York State cardiothoracic surgeons. Ann Thorac Surg. 1999 Oct;68(4):1195–200. discussion 201-2. [PubMed: 10543479]
62. Li Z, Carlisle DM, Marcin JP, et al. Impact of public reporting on access to coronary artery bypass surgery: the California Outcomes Reporting Program. Ann Thorac Surg. 2010 Apr;89(4):1131–8. [PubMed: 20338320]
63. Bridgewater B, Grayson AD, Brooks N, et al. Has the publication of cardiac surgery outcome data been associated with changes in practice in northwest England: an analysis of 25,730 patients undergoing CABG surgery under 30 surgeons over eight years. Heart. 2007 Jun;93(6):744–8. [PMC free article: PMC1955202] [PubMed: 17237128]
64. Moscucci M, Eagle KA, Share D, et al. Public reporting and case selection for percutaneous coronary interventions: an analysis from two large multicenter percutaneous coronary intervention databases. J Am Coll Cardiol. 2005 Jun 7;45(11):1759–65. [PubMed: 15936602]
65. Apolito RA, Greenberg MA, Menegus MA, et al. Impact of the New York State Cardiac Surgery and Percutaneous Coronary Intervention Reporting System on the management of patients with acute myocardial infarction complicated by cardiogenic shock. Am Heart J. 2008 Feb;155(2):267–73. [PubMed: 18215596]
66. Resnic FS, Welt FG. The public health hazards of risk avoidance associated with public reporting of risk-adjusted outcomes in coronary intervention. J Am Coll Cardiol. 2009 Mar 10;53(10):825–30. [PMC free article: PMC2673987] [PubMed: 19264236]
67. Clarke S, Oakley J. Informed consent and surgeons' performance. J Med Philos. 2004 Feb;29(1):11–35. [PubMed: 15449811]
68. Clarke S, Oakley J. Informed consent and clinician accountability: the ethics of report cards on surgeon performance. Cambridge, UK: Cambridge University Press; 2007.
69. Drozda JP Jr., Hagan EP, Mirro MJ, et al. ACCF 2008 health policy statement on principles for public reporting of physician performance data: A Report of the American College of Cardiology Foundation Writing Committee to develop principles for public reporting of physician performance data. J Am Coll Cardiol. 2008 May 20;51(20):1993–2001. [PubMed: 18482675]
70. Krumholz HM, Brindis RG, Brush JE, et al. Standards for statistical models used for public reporting of health outcomes: an American Heart Association Scientific Statement from the Quality of Care and Outcomes Research Interdisciplinary Writing Group: cosponsored by the Council on Epidemiology and Prevention and the Stroke Council. Endorsed by the American College of Cardiology Foundation. Circulation. 2006 Jan 24;113(3):456–62. [PubMed: 16365198]
71. National Quality Forum. Evidence Task Force Final Report. [August 20, 2012]. Available at: http://www.qualityforum.org/Measuring_Performance/Improving_NQF_Process/Evidence_Task_Force.aspx.
72. National Quality Forum. Guidance for Measure Testing and Evaluating Scientific Acceptability of Measure Properties, Final Report. Jan, 2011. [August 20, 2012]. Available at: http://www.qualityforum.org/Measuring_Performance/Improving_NQF_Process/Measure_Testing_Task_Force_Final_Report.aspx.
73. Dimick JB, Welch HG, Birkmeyer JD. Surgical mortality as an indicator of hospital quality: the problem with small sample size. JAMA. 2004 Aug 18;292(7):847–51. [PubMed: 15315999]
74. Hofer TP, Hayward RA, Greenfield S, et al. The unreliability of individual physician “report cards” for assessing the costs and quality of care of a chronic disease. JAMA. 1999 Jun 9;281(22):2098–105. [PubMed: 10367820]
75. Fung V, Schmittdiel JA, Fireman B, et al. Meaningful variation in performance: a systematic literature review. Med Care. 2010 Feb;48(2):140–8. [PubMed: 20057334]
76. Li Q, Glynn RJ, Dreyer NA, et al. Validity of claims-based definitions of left ventricular systolic dysfunction in Medicare patients. Pharmacoepidemiol Drug Saf. 2011 Jul;20(7):700–8. [PubMed: 21608070]
77. National Cardiovascular Data Registry. [August 7, 2013]. Available at: https://www.ncdr.com/webncdr/
78. Society for Thoracic Surgeons National Database. [August 15, 2012]. Available at: http://www.sts.org/national-database.
79. Jacobs JP, Edwards FH, Shahian DM, et al. Successful linking of the Society of Thoracic Surgeons database to social security data to examine survival after cardiac operations. Ann Thorac Surg. 2011 Jul;92(1):32–7. discussion 8-9. [PubMed: 21718828]
80. Jacobs JP, Edwards FH, Shahian DM, et al. Successful linking of the Society of Thoracic Surgeons adult cardiac surgery database to Centers for Medicare and Medicaid Services Medicare data. Ann Thorac Surg. 2010 Oct;90(4):1150–6. discussion 6-7. [PubMed: 20868806]
81. Bufalino VJ, Masoudi FA, Stranne SK, et al. The American Heart Association's recommendations for expanding the applications of existing and future clinical registries: a policy statement from the American Heart Association. Circulation. 2011 May 17;123(19):2167–79. [PubMed: 21482960]
