Totten AM, Miake-Lye IM, Vaiana ME, et al. Public Presentation of Health System or Facility Data about Quality and Safety: A Systematic Review [Internet]. Washington (DC): Department of Veterans Affairs (US); 2011 Oct.

RESULTS

LITERATURE FLOW

We reviewed 370 titles and abstracts from the electronic search, one additional reference from reference mining, and 7 others from content experts, for a total of 378. After eliminating clearly irrelevant titles and abstracts, we had 117 references. We retrieved the full text of these articles for further review and subsequently excluded 97 additional references. We identified a total of 18 references for inclusion in the current review, adding to the 37 previously identified in the review by Fung and colleagues. We then grouped the studies by key question. Figure 1 details the exclusion criteria and the number of references related to each of the key questions.

Figure 1. Literature Flow.

From the Google search of “public reporting of quality information healthcare” (accessed on September 27, 2011) we took the top 30 hits. These were categorized as websites of organizations that do public reporting, scholarly reports of public reporting (potentially eligible for this review), or other miscellaneous public reporting–related sites. Table 1 lists the 30 hits and their classification.

Table 1. Results from the Google Search.

Prior Reviews

This report is the third in a series of systematic reviews with a similar focus on the effects of public reporting on performance. The 2008 systematic review by Fung and colleagues served as a foundation for the current report's search strategies and evidence base.5 However, their scope was slightly different: they were examining how publishing performance data improves quality of care—in particular, they included individual provider data, which we do not include here.

Fung et al. identified 45 articles evaluating the impact of public reporting on quality: 10 studies focused on public reporting of health plan data, 27 on hospital data, and 11 on individual provider data. These categories were not mutually exclusive, but we include only those articles examining public reporting of health plans or hospitals in the present report. They categorized their data in two steps. First, articles were categorized by the level of data: health plan, hospital, or individual provider. Then they were categorized by outcome: whether the public reporting targeted the selection pathway for improving performance, influenced quality improvement activity, affected clinical outcomes, or had unintended consequences (see Figure 2).

Figure 2. Two pathways for improving performance through release of publicly reported performance data.

Fung et al. found an overall scarcity of data. However, the existing data suggested that public reporting stimulates quality improvement activity at the hospital level. In the other contexts examined (health plans, individual providers) and for other outcomes, its effects could not be stated with certainty.

An earlier review was published by Marshall and colleagues.8 They identified a total of 21 peer-reviewed publications, which reported studies of seven public reporting systems. They sought to answer two key questions: (1) who uses public reports (consumers, purchasers, physicians, hospitals, and other provider organizations)? and (2) what is the impact of public reporting on quality of care outcomes and costs?

In response to the first question, Marshall et al. found that hospitals and other provider organizations appear to be the most responsive to publicly reported data, leading them to conclude that this pathway may be the most productive area for future research. They reported that consumers, purchasers, and physicians did not understand or trust performance data; these groups made only modest use of the data and public reporting had only slight effects on them. The limited number of studies they found addressing the second question supported an association between public reporting and improvements in health outcomes.

Reporting Systems That Have Been the Subject of Published Evaluations

Like the Marshall and Fung reviews, we found that a relatively small number of reporting systems have been evaluated (see Figure 3). Of the 47 articles we identified that evaluated a particular public reporting system, 15 concerned the New York State Cardiac Surgery Reporting System (CSRS), 7 concerned the Consumer Assessment of Health Plans (CAHPS), and 5 concerned the Cleveland Health Quality Choice program (CHQC). Thus these three public reporting systems account for more than half of the published evaluations. Yet a recent environmental scan of public reporting systems performed by Mathematica for the National Quality Forum identified 70 public reporting programs in the US.9 Consequently, we conclude that most US public reporting systems are not the subject of evaluation or research described in the peer-reviewed literature.

Figure 3. Reporting Systems Represented. NYS CSRS=New York State Cardiac Surgery Reporting System; CAHPS=Consumer Assessment of Health Plans; CHQC=Cleveland Health Quality Choice program; HEDIS=Healthcare Effectiveness Data and Information Set; CA=Public reporting …

KEY QUESTIONS #1 AND #2. What is known about the most effective way of displaying quality and safety information (comparative data about health system structure, services, and performance) so that it is understandable? How do patients prefer to receive or access this information?

We identified two major sources of information about effective ways to report comparative quality information to health care consumers. The first, Best Practices in Public Reporting,10-12 is a recent series of reports that directly addresses the issue of how to present information to consumers. The series is part of the Learning Network tool set developed by the Agency for Healthcare Research and Quality.13 The tool set is intended to provide practical approaches to designing public reports that make health care performance information clear, meaningful, and usable by consumers, who may have limited time or motivation to access such information. While these reports are not systematic reviews per se, they were commissioned by AHRQ, written by leading authorities in the field, and intended to present both empirical and experiential evidence specific to these two key questions.

The audiences for the reports include Chartered Value Exchanges and other community collaboratives. The reports, which provide general guidelines for presenting information, are intended for use by States, health plans, and purchasers involved in producing, packaging, promoting, and disseminating comparative health care quality and cost information for consumers, patients, and the general public.

The second major source is the Aligning Forces for Quality (AF4Q) initiative—the signature effort of the Robert Wood Johnson Foundation to improve the quality of health care in targeted communities.14, 15 AF4Q operates in 17 regions nationwide, with the goal of bringing together everyone who gets, gives, and pays for health care to improve the quality of care provided locally and to provide models for national reform.16 The AF4Q documents present general guidelines for reporting information in user friendly ways. In addition, they provide more focused guidelines for reporting specific kinds of information or reporting to specific audiences—for example, Language to Use in Public Reporting About Hospital Care,17 How to Describe the Health and Community Context for Comparative Performance Reports,18 and Communicating with Physicians about Performance Measurement.19

In addition to these two major sources, six other studies were identified in the search used to update the Fung et al. review. Four are discussed below as they relate to the findings of Best Practices in Public Reporting and Aligning Forces for Quality.20-23 The final two are from the Andalusian Health Service24 and the German national hospital system;25 we did not include them in our synthesis because we restricted the evidence for these two questions to US data, since the findings are particularly sensitive to context.

How to Effectively Present Health Care Performance Data to Consumers

In this report we use the terms “consumers” and “patients” largely interchangeably, although “patients” may be construed as Veterans, while “consumers” is a term commonly used in discussions of public reporting and includes patients as well as others, such as family members, who make health care decisions.

Giving consumers comparative performance information is part of an overall strategy to improve health care. Performance reporting has two basic underlying assumptions:

  • Consumers will use performance information to choose high-quality health care for themselves and their family members.
  • Consumer choices will collectively stimulate quality improvement among providers seeking to protect or improve their market share, or to protect or enhance their public reputations.

Designers of performance “report cards” face four major challenges:10, 11

  1. Consumers are not interested in report cards: they believe care is high quality and uniform across providers.
  2. Consumers and clinical experts define quality differently. Performance reports include both technical quality of care measures and patient experience measures. Consumers identify the latter, but not the former, as critical components of quality care.
  3. Quality measures are often hard to understand or are not meaningful to consumers. For example, hospital performance reports may use length of stay as an indicator of poor performance. But consumers may think longer length of stay indicates high quality—e.g., patients can stay as long as they need to get well. Other measures don't make sense to consumers—e.g., administration of beta blockers and angiotensin-converting enzyme (ACE) inhibitors. What consumers don't understand they ignore.
  4. Using quality information to inform choices is hard. Using a performance report to choose a provider requires consumers to process a great deal of information, identify the factors that they care about and weight them accordingly, and integrate all the factors into a choice. This process requires comparatively high-level analytical skills and places a substantial cognitive burden on the users of public reports. Most people lack the relevant skills and experience (a hypothetical sketch of this weighting task follows this list).
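
To make the choice task in point 4 concrete, the following is a minimal, hypothetical sketch (in Python; it is not from the report or its sources) of the weighted multi-factor calculation a consumer would implicitly have to perform. All provider names, factors, weights, and scores are invented for illustration.

    # Hypothetical illustration of the multi-factor choice task described above.
    # All provider names, factor weights, and scores are invented.
    providers = {
        "Hospital A": {"patient_experience": 4, "infection_score": 2, "mortality_score": 5},
        "Hospital B": {"patient_experience": 3, "infection_score": 5, "mortality_score": 3},
        "Hospital C": {"patient_experience": 5, "infection_score": 3, "mortality_score": 2},
    }

    # Each consumer must decide how much each factor matters to them (weights sum to 1).
    weights = {"patient_experience": 0.5, "infection_score": 0.3, "mortality_score": 0.2}

    def overall_score(scores):
        # Integrate all factors into a single weighted score (higher is better).
        return sum(weights[factor] * value for factor, value in scores.items())

    # Rank the providers by their combined scores, best first.
    for name, scores in sorted(providers.items(), key=lambda kv: overall_score(kv[1]), reverse=True):
        print(f"{name}: {overall_score(scores):.2f}")

Even this toy version requires identifying the relevant factors, weighting them, and integrating them into a ranking, which is precisely the analytical work most report users are unprepared to do.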

Two articles identified in our searches support these findings.20, 22 Mazor and colleagues report that patients choosing a hospital are more likely to rely on factors such as their own prior experience (95%), the reputation of the hospital (93%), physician recommendation (92%), and insurance coverage (91%) than they are to use safe practice scores (82%), infection rates (82%), or mortality rates (76%).20 Another study found that participants were most interested in having cardiac report cards provide information about the experiences of other cardiac patients.22

Practical Solutions for Designing Reports

Hibbard and Sofaer (2010)10 suggest multiple strategies for designing performance reports that consumers can, and will, actually use.

  1. Make the information in the report relevant to what consumers already understand.
    An overall definition of quality, couched in everyday language—for example, “care that does not cause harm”—can help consumers develop a broader view of quality. Using the components of the definition as the reporting categories can help consumers link the ratings to things they care about. Consumers also know that their own personal experiences with care vary. The hope is that “pairing information on the technical aspects of quality with patient experience data” will alert consumers to the importance of understanding what quality of care means.
    As in other areas, consumers prefer to have information from a trusted source. This means that reports should include information about who sponsored the report, how the information was gathered, and where additional details can be found.
  2. Make it easier for consumers to understand and use the information.
    Key techniques are summarizing and interpreting the data and highlighting meaning, for example, by labeling performance as “excellent” or “poor” or by rank ordering providers by performance. Cognitive signposts such as “best value” can help consumers to digest evaluations of multiple factors. Since about one-half of the population finds it difficult to interpret numbers, using symbols can be helpful, especially if the symbol conveys the meaning directly and helps consumers to identify a pattern. An example would be combining the word “below” with a downward-pointing triangle (a hypothetical sketch of these labeling and ordering strategies appears after this list).
    Such strategies help report users to bring diverse information together in a choice. The capabilities of the Web can be exploited to help consumers filter and customize information. Hibbard and Sofaer provide examples of these strategies, noting that strategies most helpful to consumers are often ones that providers resist—e.g., ordering providers by some specific, or summary, dimension of performance.
  3. Test the report with consumers to learn what does and doesn't work.
    Key techniques include asking individuals to explain in their own words what a label or symbol means. Giving users “assignments” such as finding the top three or bottom three performers reveals whether the information in the report is presented in a way that supports a choice. A recent experiment identified report features that consumers found most helpful:23 ordering by level of performance rather than alphabetical order, using meaningful symbols instead of numbers, providing an overall summary measure, and including fewer reporting categories.

Table 2 summarizes the practical design suggestions offered by Hibbard and Sofaer.

Table 2. Summary of Design Solutions for Performance Reports.

Mazor and colleagues found no statistical difference in consumers' ability to interpret the content of reports when key elements of the presentation were varied: consistency of hospital performance across indicators, presentation type, or presence of confidence intervals.20 Across these variations, consumers were able to correctly interpret the data, with a “vast majority” of respondents correctly identifying hospitals with the best safety or infection scores. In another study by Mazor and colleagues, based on 59 qualitative interviews about consumer views of public reports on health care-associated infections, participants preferred actual numerical scores over symbols and print reports over online reports.21

The AF4Q reports address the same display challenges but couch them as display goals, along with display strategies to achieve them.14, 15 Table 3 summarizes the goals and strategies.

Table 3. Goals of a Good Display of Comparative Information.

The AF4Q reports provide guidance about how to implement each of these strategies and give “before” and “after” illustrations to demonstrate how the strategy may be applied.14, 15

Cost and Efficiency

Increasingly, cost data are being included in public performance reports. These data are often misinterpreted, especially since Americans tend to think that higher cost always translates to higher quality. Showing quality within cost strata or cost within quality strata may demonstrate that high quality care isn't necessarily the most expensive care.

Consumers are not accustomed to thinking about the efficiency of health care, and they may equate efficiency with cutting corners or saving money for their employer. Hibbard and Sofaer suggest some terminology that might help to clarify the concept of efficiency—e.g., “Uses health care dollars wisely”—but suggest additional testing is needed to determine what works best for consumers.10

Maximizing Consumer Understanding of Public Comparative Quality Reports: Effective Use of Explanatory Information

Having a set of provider performance measures and ratings does not make an effective public report. Sofaer and Hibbard (2010)11 identify explanatory information needed to accurately communicate quality ratings to consumers and motivate them to use the information to inform their health care decisions.

The report offers nine evidence-based recommendations and related examples:

  1. Engage and motivate consumers to explore and use reports.
    The first page of a report, whether in hard copy or online, should include key messages to motivate the user. For example, “A poor choice of provider can have serious consequences for your health and finances.”
  2. Deepen consumers' understanding of health care quality and quality measures.
    Provide a broad framework that defines different aspects of quality and helps consumers link what they care about to the more sophisticated quality measures presented in the report. Clearly state the purpose and value of the report.
  3. Legitimize the report's sponsor and the report's credibility.
    Consumers want to know who is issuing the report and why, whether the report's ratings are fair, and how the performance scores were generated. Technical details should be accessible but most consumers won't consult them.
  4. Provide information about the importance, meaning, and interpretation of specific measures.
    Measures should be described and interpreted in everyday language; different types of measures—e.g., patient experience measures versus outcome measures such as patient safety or mortality—will need to be explained. Consumers may need guidance about what to look for in a graph.
  5. Help consumers understand the implications of resource use information.
    The term “resource use” has not been tested with consumers, so it is not clear how they interpret it. Two general beliefs are barriers to appropriate interpretation: the belief that more care is better, and the belief that cost reflects quality.
  6. Help consumers avoid common pitfalls that lead to misinterpretation of quality data.
    Consumers need to understand that providers should not be compared on certain measures (e.g., very rare events), and that a provider's overall performance can't be assessed from a limited set of measures that reflect only part of the provider's services.
  7. Provide consumers guidance and support in using the information.
    Approaches to providing decision support include giving consumers a list of what they should think about in choosing a health care provider—for example, does the provider speak a language other than English, how easy is it to make an appointment, is the provider's office conveniently located for the consumer. A label or symbol can help consumers summarize scores—for example, “Best Value.” Key differences in performance can be highlighted. Stories and testimonials can demonstrate how health information can be used. Reports should also inform consumers what they can do to protect themselves from poor quality care since some report users will not have a choice of providers.
  8. Provide access to more detailed information.
    Web-based reports make it easier to balance ease of use with access to details since consumers can drill down for more information on topics of special interest.
  9. Test the report with consumers before going live.
    Cognitive interviews are the gold standard for testing surveys and can help guide development and revision of the report.

How to Maximize Public Awareness and Use of Comparative Quality Reports through Effective Promotion and Dissemination Strategies

If consumers do not know about publicly available performance reports, they cannot use them. As a result, report sponsors will have no return on what is often a substantial investment in creating the report. Unfortunately, few sponsors have been completely successful in disseminating information about their reports, whether web-based or print, and little research has been conducted about how to effectively promote and disseminate performance information.

Drawing on insights from social marketing and web marketing, Sofaer and Hibbard (2010)12 suggest 10 ways in which report sponsors can promote public awareness and use of comparative quality reports.

  1. Plan from the outset of the project to promote and disseminate the report. Dissemination should not be an afterthought.
  2. Identify the main audience as early as possible since the nature of the audience drives many other choices. An important secondary audience comprises those who are being rated. They should receive the report before it goes public.
  3. Engage those who can provide information about the nature of the audience and how best to reach them. Consumer and patient advocacy groups can play key roles.
  4. Use the insights of social marketing. These include paying careful attention to developing the key messages for promoting the report. In general, people respond better to messages telling them how to protect themselves than they do to messages about how to find the “best” provider.
  5. Be strategic about timing the report's release. Few people will be making a provider choice at the time the report appears, so audiences need to be reminded frequently that the report exists and how to access it.
  6. Be strategic about positioning. Identify the places that the key audience(s) go to find health information and the kinds of sites or locations that they are likely to access and trust.
  7. Work actively with the media to promote the report. Relationships with the media should be built early in the project. Guidelines for interacting with the media will help promote a consistent message.
  8. Use advertising to promote the report. Advertising can reach both broad and specific populations.
  9. Use outreach to promote the report and facilitate its use. Work with organizations who have an ongoing relationship with your audience(s) to give the report visibility. Public libraries also offer possibilities for promoting and disseminating the report.
  10. Gather and analyze feedback on the report and its dissemination. Web surveys and focus groups are just two ways of gathering feedback, which can help inform future reporting efforts.

KEY QUESTION #3. What is the evidence that patients or their families use publicly reported quality and safety information to make informed health care decisions?

The evidence in this section comes from three sources: the review by Fung and colleagues,5 a newer review by Faber and colleagues specific to consumers' use of quality of care information,26 and studies not included in either review (see Table 4) that were identified in our search. Articles already summarized in the prior reviews are not necessarily individually discussed.

Table 4. Key Question #3 Article Overlap.

Evidence from a Systematic Review by Fung and colleagues

The systematic review by Fung and colleagues addresses key question three in its discussion of selection of health plans and hospitals.5 This review scored 10/11 using the AMSTAR grading criteria for systematic reviews (see Appendix F). Within a conceptual framework for quality improvement developed by Berwick and colleagues, selection is one of two pathways by which public reporting can improve performance (see Figure 2).3 As opposed to the change pathway, in which providers are both the subjects and consumers of the public reporting, the selection pathway is focused on how patients and their intermediaries use publicly reported data in their decision-making process. Because the scope of the current ESP review excludes individual providers, the most applicable findings from this review are those that address the selection of health plans and hospitals.

Fung and colleagues found eight studies, all published after 1999, that addressed the effects of public reporting on selection of health plans. Two randomized controlled trials using CAHPS survey data in Medicaid beneficiaries' plan selection found no effect on overall selection.37, 38 However, the analysis did detect an effect in a subgroup who chose an HMO with dominant market share:37 participants who read the report selected higher-scoring plans compared with the control group. Another two studies using hypothetical performance ratings found that consumers were willing to accept access restrictions or less generous coverage if the included providers had higher quality or higher ratings.39, 49

The other four studies in this section used longitudinal observational data and econometric models. Two found that higher-scoring plans were chosen more often by federal employees,30, 45 though employees overall did not switch plans.30 Employees of Harvard University were more likely to switch plans if they were enrolled with low scorers than were those in higher-scoring plans.35 Finally, employees of General Motors were most affected by negative ratings, avoiding below-average plans but showing less discrimination with regard to superior ratings.43 Taken as a whole, the conclusions of these eight studies are mixed, but they suggest that public reporting may have a modest impact by encouraging people to avoid lower-ranked plans or to weigh the benefits of more restricted, higher quality plans.

Nine studies indicated that, in general, selection of hospitals was not affected by publicly reported performance data. Two articles pre-dating 2000 reported on public reporting systems of the Health Care Financing Administration, now the Centers for Medicare and Medicaid Services. These studies found that the public release of hospital mortality rates had a small but statistically significant impact on utilization,52 but no statistically significant effect when comparing occupancy at high- and low-mortality hospitals.55 Another four studies examined the New York State Cardiac Surgery Reporting System (CSRS). Three of these found that the NYS CSRS had little to no impact on market share.29, 36, 54 In contrast, the fourth study, by Mukamel and Mushlin, found higher market share growth rates for providers with better outcomes compared to those with worse outcomes.51 The final three studies on hospital selection contributed to the evidence suggesting that public reporting has, at best, selective and short-term effects,33 or otherwise little to no effect at all.4, 34

Evidence from a Systematic Review by Faber and Colleagues

In a 2009 systematic review that was specific to consumers' use of quality of care information, Faber and colleagues found 14 eligible studies.26 Of these, 10 assessed “laboratory experiments,” meaning studies of potential consumers making choices about hypothetical situations. The remaining four studies assessed actual “real world” public reports, all of them about CAHPS. Two of these studies were also included in the review by Fung and colleagues, as were two other “laboratory experiment” studies (see Table 4).37-39, 49 This review also scored 10/11 based on the AMSTAR criteria (see Appendix F). Overall, Faber et al. found that “patients often are unaware of the availability of the quality information.”26 Even when the data are identified, consumers “have difficulties in understanding the information,” do not view it as useful, and do not use it in their decision-making process. Studies examining consumer attitudes towards publicly reported data found that consumers were very interested in quality of care information. However, this interest does not translate into actual use: the percentage of consumers who were actually influenced by quality information was extremely low.

Evidence Not Included in Prior Reviews

We identified six studies in our literature search that examined consumer use of public reporting. Three of the studies relate to what factors influence patient use of publicly reported data; these have been discussed in the section for key questions one and two.20-22

Dixon and colleagues compared employees enrolled in one of three health plan options: a high-deductible consumer-directed health plan (CDHP), a lower deductible CDHP, and a preferred provider organization (PPO).27 The information-seeking behavior of enrollees in the three plans varied at the outset, with lower-deductible CDHP enrollees being the most active before enrollment and the high-deductible CDHP enrollees using cost information more than those in the PPO. However, over the course of the study, the variation in information seeking between plans decreased. Given this shift towards uniformity, Dixon et al. note that other factors, including enrollee characteristics, may be better indicators of information use.

In a cross-sectional time series study examining the New York State Cardiac Surgery Reporting System, Cutler and colleagues found that hospitals that had been flagged as high-mortality experienced a decline in coronary artery bypass graft (CABG) surgery cases, with a statistically significant decline across all patients in the first year.32 In both the first and second years, there was a statistically significant decrease in low-severity patients, which suggests that hospitals were not simply declining high-severity cases to lower their mortality rates. Hospitals with a low mortality ranking did not see statistically significant changes in their number of cases, which supports the notion that lower quality hospitals are more significantly affected by public reporting than higher quality hospitals. The authors note that the observed changes could be attributable to multiple factors, and that patient decision making is only one such factor. Other demand-side factors, such as referral patterns, or supply-side factors, such as poorly rated surgeons exiting the market, may also contribute to these findings.

In a complex economic analysis of survey data collected at the time of choice, Harris and colleagues found that some attributes of a report card and the survey can be related to actual plan choice.40 The authors conclude that “we find evidence that consumers perceive quality and cost differences across health systems,” including such factors as distance to the closest provider, cost of the premium, access to specialists, and waiting times.

Summary of Findings

Conclusions from the studies of public reporting are mixed, but most studies found the use of publicly available data to be modest at best. Although consumers may show interest in public reports, in most cases interest does not seem to translate into actual use. The studies that do show use suggest that consumers may avoid low performers, but higher performers may not reap comparable benefits from public reporting.

KEY QUESTION #4. What is the evidence that public reporting of quality and safety information leads to improved quality or safety?

Result of Identified Studies

Fung and colleagues identified two groups of studies relevant to the question of whether public reporting leads to improved quality or safety. The first group addressed the question indirectly by examining the impact of public reporting on quality improvement activities; in the second group the outcomes related to public reporting are clinical changes or unintended consequences that are directly associated with quality and safety.

Impact on Quality Improvement Activity

In our update, we identified two new studies that measured whether public reporting affected the quantity of quality improvement activity at hospitals or other health care organizations.56, 57 Information about the 11 studies identified in the review by Fung and colleagues in which the quantity of quality improvement activity was the outcome is reproduced in the evidence tables (see Appendix E).

Wang and colleagues,57 in a National Bureau of Economic Research working paper, assessed the effect that a “bad” report card (negative rating) on CABG surgery has on surgical volume for hospitals and surgeons. Only the hospital results are discussed here. No statistically significant overall effect was observed. However, one year after a hospital was identified as high-mortality, there was a statistically significant drop in quarterly volume of 15 CABG procedures. This drop was primarily due to a decrease in low-severity CABG cases.

All 11 studies from the Fung and colleagues review where the reported outcome was quality improvement activities were studies of hospitals; none were identified for health plans. The studies examined public reports of different health care quality data in several geographic areas.

Two studies of the QualityCounts program by Hibbard and colleagues4, 58 compared hospitals that experienced public reporting to those that received confidential feedback (available only to the hospitals, not to the public) and others that received no data. They concluded that quality improvement increased in the areas associated with the indicators in the public reports and that hospitals with more quality improvement activities had higher performance scores. Three studies focused on public reporting of CABG surgery mortality in New York or Pennsylvania.36, 59, 60 These studies used case series, case studies, and surveys to document that hospitals responded to the public reporting of mortality data by improving programs,36 changing practice patterns,59 and monitoring performance.60 Other studies documented implementation of quality improvement in Canadian hospitals following public reporting about care for acute MI;61 the responses of Cleveland hospitals to a regional reporting effort;62 and improvements following the release of Missouri's Consumer Obstetrics Report Card.63

However, not all the identified studies found increases in quality improvement activities. Mannion64 identified cases in England where public reports discouraged improvement even though they were used by hospitals to tailor programs to national targets. Additionally, two studies of the California Hospital Outcomes Project (CHOP) documented limited impact.65, 66 In response to a survey, only three of 17 California public hospitals reported adding quality improvement activities due to CHOP.65 Hospital leaders who were surveyed reported that CHOP did not lead to changes in care for acute myocardial infarction, though some respondents did say they used CHOP to identify potential areas for improvement.

A more recent assessment of CHOP examined the impact of reporting on health plans and medical groups.56 This evaluation is available on the California Office of the Patient Advocate's website. The study documents increasing use through the last year of data collection, 2004, with 28,000 visitors to the website and 100,000 booklets distributed in that year. Most users were interested in comparing HMO performance in the “plan of service” domain, which includes items such as how quickly the plan handles complaints, getting patients needed care, and overall rating of service. Comparative information on prevention indicators was used less. Compared with data from 1988 through 1990, the 2005 assessment found that 47% of medical groups and 13% of health plans were undertaking quality improvement activities in response to CHOP.

Impact on Clinical Outcomes

The second group of studies examines how public reporting affects clinical outcomes, including any unintended consequences. In our update we identified five relevant studies in addition to those included in the prior review. In the text below, we first describe the newly identified studies in some detail, then summarize the articles included in the Fung review.

The newly identified articles include three about hospitals,32, 67, 68 one about health plans69 and one about ambulance services.70 All document that public reporting had a positive impact on the outcomes of interest.

The Consumer Assessment of Healthcare Providers and Systems (CAHPS) project is a US government-funded effort to collect and publicly report standardized survey data on patient experiences. Elliot and colleagues68 assessed changes in responses to the hospital version of CAHPS between 2008 and 2009, the first two years the data were publicly available (including 61% and 84% of US hospitals, respectively). They found small improvements (from 0.3 to 0.9 percentage points) in the mean percentage of patients selecting the most positive responses on 8 of 9 domains. The largest improvement was in “responsiveness of hospital staff,” while no improvement was found in “doctor communication.” Though small, the improvements were statistically significant and were sufficient to change a hospital's rank. The authors conclude that the results suggest improvement in these domains is possible and may be furthered by public reporting; however, ongoing analyses will be required to see whether improvements continue over multiple years.

Cutler and colleagues32 added to a large literature on the New York State Cardiac Surgery Reporting System (CSRS) by conducting a time series analysis of mortality data from all New York hospitals performing bypass surgery. Their analyses examined changes in each hospital's mortality one year after the mortality rates were made public. They found that identification as a high-mortality hospital was associated with improved future performance: risk-adjusted mortality was a statistically significant 1.2 percentage points lower over the 12 months following public identification as a high-mortality hospital, and the improvement persisted for an additional 12 months. No significant improvement was found for hospitals that had low mortality rates at the time of the first report.

Kim and colleagues67 evaluated the impact of public reporting on caesarean rates at hospitals in South Korea, comparing rates before and after the public release of rates in 2000. Overall, caesareans accounted for 43.0% of all deliveries in 1999, 38.6% in 2000, and 39.6% in 2001. Hospitals that had higher caesarean rates in 1999 or performed more deliveries were more likely to reduce their rates; other organizational factors, such as ownership and market share, were not associated with decreases in caesarean rates for these years.

Hendriks69 and her coauthors report on the performance of Dutch health plans over four years (2005-2008) on consumer experience measures from a Dutch survey based on the CAHPS survey used in the US. Overall, health plans improved in four of seven domains: “general rating,” “conduct of employees,” “health plan information,” and “transparency on payment requirements.” In an analysis stratified by 2005 performance, plans scoring below average had larger improvements in 2008 scores than did plans scoring average or above average in 2005, across all seven domains. These changes were statistically significant in all domains except “getting the needed help from the call center.” The public reports included the data comparing plans as well as press releases that identified specific areas for improvement; however, improvement was not greater in areas publicly identified as needing attention.

Bevan and Hamblin70 assessed the impact of public reporting on the performance of ambulance services in Great Britain. All the countries in Great Britain had the same targets for ambulance response times, but only in England was the performance of each service included in published ‘star ratings’ showing whether services met the targets. The frequency with which services met the time targets for different types of calls was tracked from 2000 through 2005. In England, where performance was publicly reported, the percent of calls meeting the target increased. But the percent meeting the target remained low over the same period in Wales and Scotland; indeed, their performance would have been scored as failing if the English reporting system had been applied to it.

The authors conducted analyses to determine if the improvement in England could be attributed to “gaming” or poor data collection. However, even with adjustments for these factors, the improvement in the English services remained significantly better compared with the countries where performance was not publicly reported.

These five additional studies supplement the evidence identified and summarized by Fung et al. from 14 studies of hospitals and 2 of health plans. The majority of these studies (14 of 16) concern two public reporting systems.

Ten of these studies examined the impact of New York State public reporting of mortality rates for cardiac surgery and percutaneous coronary interventions (PCI). Four studies found that mortality rates decreased after public reporting: in a case study of one hospital (6.6% declined to 1.8%);59 in all New York hospitals after risk adjustment (4.17% declined to 2.34%);71 in New York hospitals treating elderly patients, where mortality declined at a rate faster than the national trend;72 and across hospitals that had the highest, middle, and lowest rates before the public reporting program, whose mortality rates no longer differed after public reporting.54

Two studies did not find a link between public reporting and improvement. Ghali compared New York rates to those in Massachusetts, a state without public reporting, and found that the decrease in mortality was similar.73 A comparison of New York and Michigan found lower unadjusted rates for New York, but the difference was no longer significant when the rates were risk adjusted.74

Other studies of the New York public reporting system sought to determine if public reporting had unintended consequences on practice patterns, particularly the selection of patients for procedures. The studies came to different conclusions. One study comparing the case mix of NY and Michigan PCI patients found that high-risk patients in NY were less likely to receive PCI, perhaps because public reporting was encouraging selection of lower risk patients.74 Another study, of the mortality rates of New York patients at the Cleveland Clinic, suggested that the increase in mortality of these out-of-state patients is an indication that sicker patients from New York were referred out of state after public reporting.75 Dranove and colleagues documented shifting of severely ill patients to teaching hospitals in New York and Pennsylvania after these states implemented public reporting.76 In contrast, the study by Peterson and colleagues looked for but found no evidence that access to coronary artery bypass surgery was restricted for elderly acute MI patients or for high-risk elderly patients.72

Four studies of the Cleveland Health Quality Choice (CHQC) program reported minimal positive impact from public reporting. Risk-adjusted mortality rates for conditions included in CHQC decreased according to one study,77 but a comparison of Cleveland to the rest of Ohio where there was no public reporting found that declines in mortality rates were similar.78 An analysis of outlier hospitals with high mortality rates found that they did not improve;34 a complementary study documented that some decreases in in-hospital mortality were offset by after-discharge mortality, resulting in no decline in 30-day mortality.79

The remaining two studies concerned other public reporting systems. The Missouri Department of Health issued a consumer report on obstetrics care and evaluated outcomes over 5 years (1989 to 1994).63 The evaluation found that hospitals with high rates of cesarean delivery and hospitals with low rates of vaginal birth after cesarean delivery had statistically significant improvements in performance, and rates of very low birth weight were reduced. Hibbard et al. compared hospitals in Wisconsin that were subject to public reporting to hospitals that received confidential feedback on performance or no data.4 They found that hospitals whose obstetric performance was low were more likely to improve if there was public reporting, and that public or confidential feedback was associated with improvement.

The review by Fung and colleagues also identified two studies assessing the potential effects of public versus private reporting of quality information. Both studies were retrospective cohorts. Bost found that health plans that voluntarily report performance data outperformed non-publicly reporting health plans,80 while McCormick and colleagues found that plans with lower quality of care scores were more likely than higher-scoring plans to drop out of public reports.81

Summary of Findings

We identified relatively few new studies within our scope in the peer-reviewed literature published in the five years since the search was conducted for Fung et al. Two of the newly identified studies addressed the impact of reporting on quality improvement activities. Some empirical evidence and the conclusion of the prior review support the theory that public reporting stimulates quality improvement activities. Five newly identified studies address a variety of outcomes (patient or consumer experience, attainment of performance targets, and rates of caesarean delivery and mortality), and four of the five are national studies. All five conclude that public reporting has a positive impact on quality or safety outcomes; however, the effects were small, and two of the studies were time series conducted in a single country in which all providers were subject to public reporting, so the observed changes could have been due to other changes that affected all providers.

This small and varied amount of additional evidence is not sufficient to change the conclusion of the Fung et al. review that “the effect of public reporting on effectiveness, safety, and patient-centeredness remains uncertain.” However, the CHOP assessment from 2005 provides some encouragement that this may be changing.

Quality of Evidence

For impact on quality improvement activities, only one study compared the number of quality improvement activities across hospitals that did and did not experience public reporting.58 The rest of the identified studies were case studies or case series, or used surveys or interviews to collect information on use of report cards and the volume of quality improvement activities. These studies were rated 1 out of 4 for study design and given the lowest global rating.

The studies of clinical outcomes and unintended consequences are more varied in terms of design and their weight in the overall body of evidence (global rating). However, the majority make moderate contributions to the weight of evidence and are time series or designs that include multivariate adjustment (3 out of 4 on the rating of study designs).