2. Methods

Methods common to the reviews of quality improvement strategies for the various topics and disorders are presented in the first volume of the Closing the Quality Gap series. The authors provide additional detail in this section as it pertains to the review of quality improvement strategies for hypertension.

Types of Quality Improvement Strategies

A variety of interventions have been tested with the goal of improving the quality of care for common clinical conditions. The conceptual framework developed for classifying quality improvement strategies has been based on the scientific literature (see Closing the Quality Gap, Volume 1). These interventions can target organizations, providers, communities, or individual patients, and have been evaluated in a wide variety of formats. The reviewers have classified the range of interventions into nine broad strategies, according to the following taxonomy (also see Volume 1):

1.

Provider reminders—Information tied to a specific clinical encounter, provided verbally, in writing, or by computer, and intended to prompt the clinician to recall information (e.g., to make medication adjustments or order appropriate screening tests), or to consider performing a specific process of care. The phrase “tied to a specific clinical encounter” serves to distinguish reminder systems from audit and feedback, where clinicians are typically presented with summaries of their performance relative to a process or outcome of care over multiple encounters.

2.

Facilitated relay of clinical data to providers—Clinical information collected directly from patients and relayed to the provider where the data are not generally collected during a patient visit, or using some format other than the existing local medical record system (e.g., transmission of a patient's home blood pressure measurements). The researchers expected some overlap with provider reminder systems, but kept these strategies separate at the abstraction stage, to allow for the possibility that the data could be analyzed subsequently with and without collapsing the two strategies.

3.

Audit and feedback—Any summary of clinical performance of health care providers or institutions that is reported either publicly or confidentially, to or about the clinician or institution (e.g., the percentage of a provider's patients who have achieved or have not achieved some clinical target). Benchmarking refers to the provision of performance data from institutions or providers regarded as leaders in the field. The investigators included benchmarking as a type of audit and feedback, so long as local data were provided in addition to the benchmarks.

4.

Provider education—Any intervention that included one of the following three substrategies: educational workshops, meetings (e.g., traditional Continuing Medical Education [CME]), and lectures (live or computer-based); educational outreach visits (use of a trained person who met with providers in their practice settings to disseminate information intended to change the provider's practice); or distribution of educational materials (e.g., published or printed recommendations for clinical care, including clinical practice guidelines, audio-visual materials and/or electronic publications).

5.

Patient education—In-person patient education, either individually or as a part of a group or community; distribution of printed or audio-visual educational materials. The investigators evaluated those strategies that included patient education as part of a multifaceted strategy, but excluded those in which patient education was the sole strategy. A future volume in this series will review the topic of patient education with reference to its effect on a variety of chronic diseases, including hypertension.

6.

Promotion of self-management—Distribution of materials (e.g., devices for blood pressure self-monitoring) or access to a resource that enhances the patients' ability to manage their condition, provision of clinical data back to the patient, or followup phone calls to make recommendations regarding adjustments to care. The reviewers expected some strategy overlap with patient education and patient reminders, but kept the strategies separate at the abstraction stage to allow for the possibility that the data could be analyzed subsequently with and without collapsing the strategies. Those strategies that included self-management as part of a multifaceted strategy were analyzed, but those in which self-management was the sole strategy were excluded. A future volume in this series will address the topic of self-management, along with patient education.

7.

Patient reminders—Any effort directed at encouraging patients to keep appointments or adhere to other aspects of self-care.

8.

Organizational change—Changes in the structure or delivery of clinical care designed to improve its efficiency or comprehensiveness. The investigators included changes such as the use of disease management or case management (coordination of assessment, treatment, and arrangement for referrals by a person or multidisciplinary team in collaboration with or supplementary to the primary care provider), other team or personnel changes, use of telemedicine (communication and case discussion between distant health professionals), Total Quality Management (TQM) or Continuous Quality Improvement (CQI) (i.e., cycles of measurement of quality problems, design of interventions, implementation, and re-measurement), and changes in medical records systems or hospital information systems. Among studies that included organizational change as one of their QI strategies, three substrategies (disease/case management, team/staffing changes, and medical records changes) also were extracted for analysis.

9.

Financial, regulatory, or legislative incentives—Interventions providing positive or negative financial incentives directed at providers (e.g., linked to adherence to some process of care or achievement of some target patient outcome), positive or negative financial incentives directed at patients, system-wide changes in reimbursement (e.g., capitation, prospective payment, shift from fee-for-service to salary), changes to provider licensure requirements, or changes to institutional accreditation requirements.

In addition to the aforementioned QI strategies, the reviewers planned initially to abstract data on intervention features, such as use of social influence (e.g., local opinion leaders36, 37), involvement of top management, designing the intervention based on a theory of behavior or organizational change,38–40 and other potential “mediators” of intervention success.41 The identified studies, however, rarely explored these and other potentially relevant features of intervention design.42 Similarly, few studies considered organizational context43 and local attitudes and beliefs,44 so these potential predictors of intervention success or failure were deleted from the structured questions on the abstraction forms and from the analysis.

The single “mediator” reported with sufficient frequency and detail was the use of clinical information systems, identified as a potential predictor of success in a previous review.45, 46 Reviewers were asked to indicate whether a clinical information system played a role in the design or implementation of the intervention (regardless of QI strategy type). The potential roles identified in structured form were: identification and/or group allocation of eligible patients or providers; reminders generated by an existing clinical information system; decision support at point of care; facilitated communication between providers (e.g., e-mail communications between members of a care team); and audit data gathered from a clinical information system to facilitate a QI strategy (e.g., audit and feedback, TQM, provider education, financial incentives).

Scope

In keeping with the goal of reviewing quality improvement strategies, some categories of studies necessarily were included and others excluded. The investigators defined the scope of the project in terms of: hypertension type in the study population; targeted patient subpopulations; steps in the pathway to hypertension control; studies excluded for their failure to address QI; study design; and year of publication.

Hypertension type—To review studies that were most applicable to the general population, the researchers focused on primary hypertension in non-pregnant adults. Also known as “essential” hypertension, primary hypertension comprises at least 90 percent of all cases and is of unknown etiology. Since identifiable secondary causes (such as renal artery stenosis) account for only a small number of cases (and management of these cases is less guideline-directed), this report focuses on primary hypertension. Studies were not excluded merely because they failed to state explicitly that cases of secondary hypertension were omitted from the study population. However, because management of hypertension in children and in pregnant women differs markedly from management of essential hypertension in non-pregnant adults, studies of these subpopulations of hypertensives were excluded from this review.

Targeted patient subpopulations—Many hypertensive patients have comorbidities such as diabetes mellitus and cardiovascular disease. These populations were included, as they are a sizable segment of the overall hypertensive population in the United States. Studies focused primarily on a smaller, specialized subpopulation (e.g., patients with alcoholism) were excluded.

Steps in the pathway to hypertension control—The causal pathway to hypertension control involves numerous steps. Studies that reported solely on interventions related to patient or provider knowledge at the earliest stages of the pathway were excluded. Patient and provider knowledge were considered only in conjunction with reports involving clinically relevant outcome measures, such as blood pressure control.

Studies excluded for their failure to address QI—The primary QI targets were patient and provider adherence to recommendations, and the identification and effective control of hypertension (these will be described in full later in this chapter). Equivalence studies (i.e., studies in which two patient groups received either a physician-managed or a nurse-managed intervention to determine whether the two were equivalent) were excluded, as were studies that focused solely on patient satisfaction, costs, or resource use. In addition, efficacy trials of particular blood pressure control interventions were excluded, including studies designed primarily to assess the efficacy of certain medications in lowering blood pressure, of particular lifestyle changes (e.g., salt-restriction diets, stress reduction), or of particular technical innovations (e.g., one home blood pressure monitor compared with another). Studies that relied on provider self-report as a measure of provider adherence to recommendations also were excluded.

Study design—Study designs other than randomized controlled trials (RCTs), quasi-RCTs, controlled before-after (CBA) studies, or interrupted time series (i.e., studies using a design below Level 2) were excluded. (For more on study designs, see Table 2 in Volume 1 of this series.)

Table 2a. Quality improvement strategies.

Year of publication—Studies published prior to 1980 were excluded.

Literature Search and Review Process

Our search strategy began with a broad electronic search of the MEDLINE® database from January 1966 to July 2003, which yielded a total of 3070 citations (Figure 1). The search strategy is shown in Appendix A. We supplemented these results with a search of the Cochrane Collaboration's Effective Practice and Organisation of Care (EPOC) database,47 which includes the results of EMBASE and CINAHL® searches in addition to MEDLINE and extensive hand searching. Searching the EPOC database produced an additional 82 articles deemed relevant for full abstraction. Manual review of reference lists from retrieved articles, including prior systematic reviews and seminal articles24, 48–70 in the field, yielded an additional 13 articles (Figure 1).48, 71–82 Core investigators reviewed all of the resulting abstracts. A total of 359 articles merited full-text review (performed by two independent reviewers); at this stage, we abstracted basic information on study design, quality improvement strategy, and types of outcomes (Figure 1, and Appendix B). To meet the criteria for full abstraction, articles were required to experimentally assess the effect of a quality improvement strategy on hypertension detection, hypertension control, provider adherence, or patient adherence in adults. Articles excluded at this review stage appear in Appendix C.

Figure 1. Search strategy and article triage. (Figure legend: EPOC, the Cochrane Effective Practice and Organization of Care database described in this chapter, contains the results of extensive electronic searches.)

Terminology to Distinguish Studies, Interventions, and Comparisons

Because the articles we reviewed did not have a uniform structure in the presentation of study data, we adopted the following terminology to describe the quality improvement interventions we reviewed for this volume:

  • When a single study led to multiple publications (articles) describing different aspects of the study (e.g., a methods article followed later by a results paper, or several results papers), each publication was separately identified, but we reviewed all articles for the same study together.
  • A single study may include several different study arms (groups of subjects), with different QI interventions provided to the subjects in each study arm. These are often reported in a single published article. For purposes of analysis, we considered each intervention that was studied in comparison to a control group as a separate comparison. For example, a single study with one control group and three different QI intervention arms receiving different interventions (e.g., provider education and organizational change in one arm, patient reminder and organizational change in another arm, audit and feedback in a third arm), each compared with the control group, was considered (e.g., was listed in the Tables) as three comparisons; a minimal sketch of this expansion appears after this list. When an article reported several comparisons, we abstracted the data for each comparison separately.
  • The intervention described in a particular study may be multifaceted, that is, may involve more than one QI strategy. For example, the intervention may consist of a combination of provider education and provider reminders. A multifaceted intervention applied to a single study arm in comparison with control constituted a single comparison.
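To make the counting convention concrete, the following minimal sketch (Python; the study identifier, arm labels, and strategy names are hypothetical) expands a single three-arm study into three comparison records, each pairing one intervention arm with the shared control group.

```python
# Minimal sketch with hypothetical data: expanding a multi-arm study into comparisons.
# Each intervention arm, paired with the shared control group, yields one comparison.

study = {
    "study_id": "example-study",          # hypothetical identifier
    "control_arm": "usual care",
    "intervention_arms": [
        {"arm": "A", "strategies": ["provider education", "organizational change"]},
        {"arm": "B", "strategies": ["patient reminder", "organizational change"]},
        {"arm": "C", "strategies": ["audit and feedback"]},
    ],
}

comparisons = [
    {
        "study_id": study["study_id"],
        "arm": arm["arm"],
        "strategies": arm["strategies"],   # may be multifaceted (more than one strategy)
        "comparator": study["control_arm"],
    }
    for arm in study["intervention_arms"]
]

print(len(comparisons))  # 3 comparisons, each abstracted separately
```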

Outcome Measures

There are several important categories of outcomes relevant to the detection and management of hypertension. These include:

  • measures of disease identification
  • measures of disease control
  • measures of provider adherence to recommendations
  • measures of patient adherence to recommendations

Measures of Disease Identification and Initial Followup

We reviewed studies of screening for hypertension as a measure of disease identification. Screening for high blood pressure can occur in several settings. These include the community, the work place, and the clinician's office. Important outcomes with respect to disease identification include:

  • percentage of individuals who had their blood pressure measured
  • percentage of individuals found to have elevated blood pressure
  • percentage of individuals with elevated blood pressure who received follow-up
  • percentage of individuals screened who knew whether or not their blood pressure was elevated.

Measures of Disease Control

We reviewed several measures of disease control outcomes:

  • all-cause mortality
  • CVD or CHD mortality
  • mean or median SBP
  • mean or median change in SBP
  • mean or median DBP
  • mean or median change in DBP
  • percentage of patients achieving blood pressure within a target blood pressure range
  • percentage of patients with improved blood pressure control

We also abstracted a description of other measures of disease control used as outcomes in individual studies. We considered quality measures related to control of blood pressure to be the most important of the frequently used measures by which to evaluate quality improvement strategies for hypertension management. Morbidity and mortality measures represent the ideal measures, as they are the ultimate outcomes of interest; however, most studies did not report them.

Measures of Provider Adherence

Several measures of provider adherence were analyzed. We defined providers as the health professionals reported in the studies, including physicians, nurses, and pharmacists. We accepted the target or recommended practice as reported in the study. It must be noted, however, that because recommendations have changed over time, some of the targets or recommendations no longer comport with current guidelines. Measures evaluated included:

  • adherence to guideline-specified targets for blood pressure
  • adherence to guidelines for the evaluation of patients with hypertension
  • adherence to specific medication recommendations
  • adherence to recommendations to improve patient medication adherence
  • adherence to guidelines for checking and/or recording blood pressure
  • adherence to guidelines for patient counseling or delivering patient education

Measures of Patient Adherence

We analyzed the following outcome measures of patient adherence to recommendations:

  • medication adherence (determined by self-report, pill counts, or pharmacy records)
  • adherence to follow-up appointments

Other Outcome Measures

Additional outcome measures reported in one or more studies that do not fit the categories above are shown in Appendix D.

Analysis

As described in Chapter 2 of this Volume, although we sought to conduct quantitative analysis of the included studies, many studies did not provide sufficient information to be analyzed in this fashion. Where feasible, we conducted a series of analyses, from descriptive summaries to comparative statistical analyses and exploratory regression analyses.

Median Effects Calculations

To take account of differences at baseline between intervention and control groups, we computed the outcomes for each study as the net change from pre-intervention to post-intervention between study and control groups:

Net Change = (Post-intervention − Pre-intervention) in the study group − (Post-intervention − Pre-intervention) in the control group

Following the method employed in a recent systematic review of strategies for guideline implementation,83 the median of the calculated net change for studies reporting the same outcome was then reported for each analysis, and termed the “median effect.” For example, if a study reported the mean SBP for the control and intervention arms before and after the QI intervention, we calculated a net change in mmHg. We constructed each outcome measure so that a positive result indicated an improvement; for example, a lowering of SBP or an increase in the percentage of patients receiving guideline-recommended drugs was recorded as a positive result. We report the median and inter-quartile (IQ) range for all studies reporting each outcome.
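As a rough illustration of these calculations, the sketch below (Python; the blood pressure values are hypothetical, not drawn from the included studies) computes the net change for each study, flips the sign so that a fall in SBP counts as an improvement, and reports the median effect with its inter-quartile range.

```python
import statistics

def net_change(pre_int, post_int, pre_ctrl, post_ctrl):
    """Net change = (post - pre) in the study group minus (post - pre) in the control group."""
    return (post_int - pre_int) - (post_ctrl - pre_ctrl)

# Hypothetical mean SBP values (mmHg) from three studies reporting the same outcome.
studies = [
    {"pre_int": 158, "post_int": 146, "pre_ctrl": 157, "post_ctrl": 152},
    {"pre_int": 162, "post_int": 150, "pre_ctrl": 160, "post_ctrl": 155},
    {"pre_int": 155, "post_int": 149, "pre_ctrl": 154, "post_ctrl": 151},
]

# A positive result should indicate improvement, so SBP net changes are sign-flipped.
effects = [-net_change(**s) for s in studies]

median_effect = statistics.median(effects)
q1, q2, q3 = statistics.quantiles(effects, n=4)   # quartile cut points across study effects
print(median_effect, (q1, q3))                    # median effect and inter-quartile range
```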

Because many different outcomes were used to measure provider adherence, we were unable to perform quantitative analysis of the effect of QI strategies on individual adherence outcomes. We therefore developed a summary measure of adherence for each study that reported more than one adherence outcome. For example, if a study reported three adherence outcomes that required (1) appropriate choice of medication (e.g., diuretics or β-blockers as first-line therapy for uncomplicated hypertension), (2) decrease in inappropriate choice of medication (e.g., calcium channel blocker for patients without specific indication), and (3) appropriate patient education (e.g., salt restriction, exercise counseling, smoking cessation), we calculated an effect size for each outcome, ranked the effect sizes, and used the outcome with the median effect size as the summary adherence measure for that study.
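A minimal sketch of that summary step, assuming an effect size has already been computed for each adherence outcome in a study (the outcome names and values below are hypothetical), might look like this:

```python
# Hypothetical effect sizes for three provider-adherence outcomes from one study.
adherence_effects = {
    "appropriate first-line drug": 0.24,
    "reduction in inappropriate drug": 0.08,
    "patient counseling delivered": 0.15,
}

# Rank the outcomes by effect size and take the one with the median effect
# as the study's summary adherence measure.
# (With an even number of outcomes this picks the upper of the two middle outcomes.)
ranked = sorted(adherence_effects.items(), key=lambda item: item[1])
summary_outcome, summary_effect = ranked[len(ranked) // 2]

print(summary_outcome, summary_effect)  # "patient counseling delivered", 0.15
```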

Study Sample Size and Publication Bias

Publication bias refers to the overestimation of effect size due to preferential reporting of positive studies, particularly with smaller, poor quality studies,84–90 and is of particular concern for quality improvement studies. Given the lack of a single, well-established analytic method for detecting or adjusting for the effects of publication bias,91–93 conducting a thorough search for unpublished research represents the preferred approach to avoiding this potentially large source of bias.84, 94, 95 However, in the area of quality improvement, relevant research may be more likely to be conducted by personnel charged with quality assurance activities as part of their job descriptions, with emphasis placed on measures of success rather than research dissemination. Consequently, the incentive to publish evaluations may be particularly low when the result is negative. Further, there is not an efficient means to find these studies.

The difficulty in obtaining unpublished QI trials and the accompanying susceptibility to publication bias led us to analyze the studies in terms of sample sizes. Focusing on median effects, as described above, rather than average effects, avoids skewing summary measures based on one or two outliers with particularly large or small effects. We then examined the median effect sizes by different strata of study sample size, e.g., comparing the median effect among studies with sample sizes in the lowest quartile versus those in the highest quartile, or the lower half compared with the upper half. Strata were defined for studies reporting SBP and DBP outcomes separately, so that a study was assigned a study size quartile for comparisons with other studies reporting the same outcome.
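The stratified comparison can be illustrated with the following sketch (Python; the sample sizes and effects are hypothetical), which splits studies reporting the same outcome into sample-size quartiles and contrasts the median effect among the smallest studies with that among the largest.

```python
import statistics

# Hypothetical studies reporting the same outcome (net SBP reduction, mmHg).
studies = [
    {"n": 60,   "effect": 9.0}, {"n": 120,  "effect": 6.5},
    {"n": 250,  "effect": 5.0}, {"n": 400,  "effect": 4.0},
    {"n": 650,  "effect": 3.5}, {"n": 900,  "effect": 3.0},
    {"n": 1500, "effect": 2.5}, {"n": 2400, "effect": 2.0},
]

sizes = [s["n"] for s in studies]
q1, _, q3 = statistics.quantiles(sizes, n=4)   # sample-size quartile cut points

low = [s["effect"] for s in studies if s["n"] <= q1]   # smallest studies
high = [s["effect"] for s in studies if s["n"] >= q3]  # largest studies

# A markedly larger median effect among the small studies would be
# consistent with publication bias.
print(statistics.median(low), statistics.median(high))
```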

For studies where the unit of analysis and unit of randomization differed, we also conducted analyses of “effective sample size.” Briefly, we adjusted for “clustering” (e.g., when providers or clinics were randomized but patient outcomes for those providers or clinics were used for ascertainment of the outcome). We performed this adjustment using estimated values for the intra-cluster correlation coefficient (ICC).1, 96 We also addressed the possibility that studies of other sub-optimal design might contribute overly optimistic outcomes by analyzing study design features, as described in the next section.
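The report does not spell out the adjustment formula; the sketch below assumes the standard design-effect approach, in which the raw sample size is divided by 1 + (m − 1) × ICC, where m is the average cluster size, and the ICC value shown is an illustrative assumption rather than a figure from the report.

```python
def effective_sample_size(n_patients: int, avg_cluster_size: float, icc: float) -> float:
    """Divide the raw sample size by the design effect, 1 + (m - 1) * ICC."""
    design_effect = 1 + (avg_cluster_size - 1) * icc
    return n_patients / design_effect

# Hypothetical example: 20 clinics randomized, 40 patients per clinic,
# with an assumed intra-cluster correlation coefficient of 0.05.
print(effective_sample_size(n_patients=800, avg_cluster_size=40, icc=0.05))  # ~271 effective patients
```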

Study Design Quality

We reviewed studies for five aspects of study design quality. We considered studies to be of higher quality if the design included the following features: randomized allocation of intervention, providers blinded to study group assignment, patients blinded to study group assignment, unit of analysis same as unit of allocation to treatment, and concealment of allocation. We report pooled study outcomes for studies with and without each of these design features.

Statistical Analyses

Where possible, we performed a simple non-parametric test for differences between median effects—the Mann-Whitney rank-sum test.104 Such analyses were possible only for mutually exclusive categories, such as randomized versus non-randomized trials, or for all interventions with a given QI strategy, compared with those without this strategy. We could not, however, compare one strategy to another because of the frequent overlap between types of strategies, with the same interventions contributing to multiple median effect sizes.
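A minimal sketch of such a comparison (Python with SciPy; the effect values and grouping are hypothetical) is shown below.

```python
from scipy.stats import mannwhitneyu

# Hypothetical net-change effects (mmHg SBP reduction) for comparisons
# whose intervention did and did not include a given QI strategy.
with_strategy = [7.0, 5.5, 6.2, 8.1, 4.9]
without_strategy = [3.1, 4.0, 2.8, 5.0, 3.6]

# Two-sided Mann-Whitney rank-sum test for a difference between the two groups.
statistic, p_value = mannwhitneyu(with_strategy, without_strategy, alternative="two-sided")
print(statistic, p_value)
```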
