NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Uhlig K, Balk EM, Patel K, et al. Self-Measured Blood Pressure Monitoring: Comparative Effectiveness [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2012 Jan. (Comparative Effectiveness Reviews, No. 45.)


Methods

The present Comparative Effectiveness Review (CER) evaluates the effects of self-measured blood pressure (SMBP) monitoring in hypertensive patients. The Evidence-based Practice Center (EPC) reviewed the existing body of evidence on the effects of SMBP on clinical, surrogate, and intermediate outcomes in the management of hypertension. The CER is based on a systematic review of the published scientific literature using established methodologies as outlined in the Agency for Healthcare Research and Quality's (AHRQ) Methods Guide for Comparative Effectiveness Reviews (Agency for Healthcare Research and Quality. Methods Guide for Comparative Effectiveness Reviews [posted November 2008]. Rockville, MD.), which is available at: http://effectivehealthcare.ahrq.gov/healthInfo.cfm?infotype=rr&ProcessID=60.

AHRQ Task Order Officer

The AHRQ Task Order Officer (TOO) was responsible for overseeing all aspects of this project. The TOO facilitated a common understanding among all parties involved in the project, resolved ambiguities, and fielded all EPC queries regarding the scope and processes of the project. The TOO and other staff at AHRQ reviewed the report for consistency, clarity, and conformance to AHRQ standards.

External Expert Input

During a topic refinement phase, the initial questions that had previously been nominated for this report were refined with input from a panel of Key Informants. Key Informants included experts in hypertension, general internal medicine, pediatrics, and cardiology, as well as representatives from both New York State and New York City Medicaid, and the TOO. After a public review of the proposed Key Questions, the clinical experts were reconvened to form the Technical Expert Panel (TEP), which served in an advisory capacity to help refine Key Questions, identify important issues, and define parameters for the review of evidence. Discussions among the EPC, TOO, and Key Informants, and, subsequently, the TEP occurred during a series of teleconferences and via email. In addition, input from the TEP was sought during compilation of the report when questions arose about the scope of the review.

Key Questions

Key Questions were further refined in cooperation with the TEP and take into account the patient populations, interventions, comparators, outcomes, and study designs (PICOD) that are clinically relevant for the use of SMBP in hypertensive patients. Five Key Questions are addressed in the present report. Four pertain to outcomes in patients using SMBP devices (Key Questions 1–4); and one addresses associations between patient factors and adherence with SMBP monitoring (Key Question 5). The Key Questions are listed at the end of the Introduction.

Analytic Framework

To guide the development of the Key Questions for the evaluation of SMBP, we developed an analytic framework (Figure 2) that maps the specific linkages associating the populations of interest, the interventions, and the outcomes of interest (intermediate outcomes, surrogate outcomes, and clinical outcomes). Specifically, this analytic framework depicts the chain of logic that evidence must support to link the interventions to improved health outcomes.

This figure depicts the Key Questions within the context of the PICOD described in the previous section. In general, the figure illustrates how use of SMBP monitoring may result in changes in surrogate outcomes (cardiac measures), intermediate outcomes (blood pressure control, adherence with antihypertensive treatment, and health care process measures), and clinical outcomes (mortality, cardiovascular events, patient satisfaction, quality of life, and adverse events related to hypertension treatment). Additional support and different SMBP devices may modify the effects of SMBP monitoring. The effect of SMBP monitoring may also be related to adherence to the monitoring. The five Key Questions are mapped across these various factors.

Figure 2

Analytic framework for evaluation of SMBP monitoring. AE = adverse events; BP = blood pressure; CVD = cardiovascular disease; KQ = Key Question; LVH = left ventricular hypertrophy; LVM = left ventricular mass; LVMI = left ventricular mass index; SMBP = self-measured blood pressure.

Literature Search

We conducted literature searches of studies in MEDLINE® (inception–July 19, 2011), the Cochrane Central Trials Registry®, and the Cochrane Database of Systematic Reviews® (both through 2nd Quarter, 2011). All studies, regardless of language and study participant age, were screened to identify articles relevant to each Key Question. Our search included terms for self-measurement, home measurement, telemonitoring, self-care, and relevant research designs (see Appendix A for complete search strings). We also reviewed the reference lists of recently published systematic reviews for potentially eligible studies. In addition, articles suggested by TEP members were screened for eligibility using the same criteria as for the original articles.

We also conducted a focused grey literature search to find unpublished or non-peer-reviewed data, in particular the Food and Drug Administration 510(k) database and abstracts from recent relevant scientific meetings of professional societies. We searched the Food and Drug Administration 510(k) database (www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn.cfm) for all listed blood pressure (BP) measurement systems with Product Code DXN in February 2011. We limited the search to products that received approval since 1976. With the assistance of the TEP, we also compiled a list of professional organization meetings that were most likely to have published oral presentations and poster abstracts on hypertension management. Based on this list we retrieved and screened abstracts from conferences in 2009 through March 2011 from the American College of Cardiology (published in the Journal of the American College of Cardiology), the American Heart Association (published in Circulation), the American Heart Association High Blood Pressure Council (published in Hypertension), the American Society of Hypertension (published in the Journal of Clinical Hypertension), and the European Society of Hypertension (published in the Journal of Hypertension). We used the same eligibility criteria as for the full-text articles. In addition, we searched for ongoing research on SMBP in the Clinicaltrials.gov registry on March 21, 2011 to identify observational and interventional studies of SMBP. We used the terms [blood pressure OR hypertension] as a “condition” search string combined with the following search terms for interventions [(home OR ambulatory OR self) AND (monitor* OR telemonitoring OR measure* OR manage*)]. Protocols of retrieved entries were reviewed for use of interventions and outcomes relevant to the Key Questions of the current CER. Protocols of relevant studies were tabulated.

An effort was made to collect information on accreditation of the devices used in studies that ultimately met eligibility criteria. When the information was not reported in the study reports, relevant references in the articles were checked first. Next, when necessary, a search of grey literature was conducted by searching the device name in Google, PubMed, manufacturer or company Web sites, and the FDA database. For each device, findings were tabulated according to the accreditation criteria of the British Hypertension Society, American Association of Medical Instruments, and European Society of Hypertension.

An attempt was made to supplement the literature search by soliciting Scientific Information Packets. A sister organization, also under contract with AHRQ, solicited industry stakeholders, professional societies, and other interested researchers for research relevant to the Key Questions. However, we received no Scientific Information Packets.

Study Selection and Eligibility Criteria

The EPC developed a computerized screening program, Abstrackr, to automate abstract screening and identify articles eligible for full-text screening.35 The program uses an active learning algorithm to rank articles by their relevance to the Key Questions. Relevance was established by manually double-screening 250 abstracts to train the program. Subsequently, abstracts selected by the program were screened by one researcher. The results of each group of manually screened abstracts (classified as accept or reject) were iteratively fed back into the program for further training before the next group of abstracts was generated for manual screening. This process continued until the program was left with only abstracts it rejected. In addition, abstracts tagged "reject" by a researcher were rescreened by a second researcher. Any abstract tagged "accept" by either researcher was considered accepted. Using Abstrackr, we reduced by 40 percent the number of abstracts we needed to screen manually before starting the subsequent steps of the systematic review. While the review was being conducted, all abstracts rejected by the program were also manually screened. (All abstracts rejected by Abstrackr were also rejected by manual screening.) Full-text articles were retrieved for all potentially relevant articles and rescreened for eligibility. The reasons for excluding articles are tabulated in Appendix B.
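The iterative train-screen-retrain workflow described above can be sketched as a simple active-learning loop. This is an illustrative toy, not Abstrackr's actual implementation: the keyword-overlap "model," the stand-in manual-screening rule, and the batch sizes are all assumptions made purely for demonstration.

```python
def manual_screen(abstract):
    # Stand-in for a human reviewer's accept/reject decision
    # (here, a toy rule: accept if a target phrase is present).
    return "blood pressure" in abstract.lower()

def train(labeled):
    # Toy "model": collect words that appeared in accepted abstracts.
    accept_words = set()
    for text, label in labeled:
        if label:
            accept_words.update(text.lower().split())
    return accept_words

def predict(model, abstract):
    # Score by word overlap with previously accepted abstracts.
    return len(set(abstract.lower().split()) & model) >= 2

def screen(abstracts, seed_size=3, batch_size=2):
    """Iteratively screen abstracts, retraining after each manual batch,
    until the model rejects everything that remains."""
    labeled = [(a, manual_screen(a)) for a in abstracts[:seed_size]]
    pool = abstracts[seed_size:]
    while pool:
        model = train(labeled)
        # Rank the pool so likely-relevant abstracts are screened first.
        pool.sort(key=lambda a: predict(model, a), reverse=True)
        if not any(predict(model, a) for a in pool):
            break  # only model-rejected abstracts remain
        batch, pool = pool[:batch_size], pool[batch_size:]
        labeled.extend((a, manual_screen(a)) for a in batch)
    accepted = [a for a, lab in labeled if lab]
    return accepted, pool  # pool = machine-rejected, to be rescreened later

abstracts = [
    "Effect of home blood pressure monitoring on control",
    "Self-measured blood pressure in adults with hypertension",
    "Gardening techniques for urban spaces",
    "Telemonitoring of blood pressure and medication adherence",
    "A history of renaissance art",
]
accepted, rejected = screen(abstracts)
```

In the review itself, the machine-rejected pool was also manually rescreened as a safety check, which is why the sketch returns it alongside the accepted set.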

Eligible studies were further segregated using the following selection criteria: population and condition of interest; interventions, predictors, and comparators of interest; outcomes of interest; study designs; and duration of follow-up.

Population and Condition of Interest

We included studies conducted in both adults (≥18 years) and children with hypertension. Hypertension in adults is generally defined as an untreated (or pretreatment) BP >140/90 mmHg.1 In children, it is generally defined as a BP above a cut-off based on age, sex, and height reference values. We allowed any clinically reasonable definition of hypertension, including existing treatment with antihypertensive medications. By consensus with the TEP, we excluded studies in which participants were on dialysis or had gestational hypertension. Hypertension in these special populations has a different pathophysiology, different duration, and different outcomes of interest. We also excluded studies in which SMBP was part of comprehensive disease management for heart failure or for weight loss, regardless of the presence of hypertension.

Interventions, Predictors, and Comparators of Interest

SMBP Monitoring (All Key Questions)

We included only SMBP upper arm monitors and excluded wrist monitors, except where wrist monitors were used as a default for selected patients with large arm circumference. All varieties of SMBP monitors (manual, semiautomated, automated) were included. We included all monitors, regardless of whether they have been accredited or validated, or whether they are commercially available. We excluded studies in which self-measurement was not undertaken at home, for example if the participant self-measured in the clinic, office, pharmacy, or workplace. We allowed studies that used home measurement devices where the measurement was done by a family member or a companion of the patient. SMBP had to be used as a medical intervention, not solely as a measurement tool for a BP outcome (e.g., a trial of antihypertensive medications where the BP outcome was measured with SMBP). SMBP monitoring had to be conducted for at least 8 weeks.

Additional Support

We included studies of SMBP monitoring with (or without) any type of additional support. Studies of additional support had to include at least one group that used SMBP monitoring. The study abstract and/or title must have suggested that SMBP monitoring was a principal part of the intervention. We did not screen all studies of ancillary interventions to find those that happened to use SMBP monitoring. Additional support included but was not limited to educational training, reminders, nursing interventions, telemonitoring, algorithms for medication titration, and additional physician consultation.

Key Question 1 was limited to studies that compared SMBP monitoring (with or without additional support) to usual care (any office or clinic BP monitoring). From studies that included groups who used SMBP alone, SMBP with additional support, additional support alone, and usual care, we evaluated three comparisons for this Key Question: SMBP alone versus usual care; SMBP with additional support versus additional support alone; and SMBP with additional support versus usual care.

Key Question 2 was limited to studies that compared SMBP monitoring with additional support to either SMBP without additional support or SMBP with an alternative additional support.

Key Question 3 was limited to studies that compared SMBP monitoring (with or without additional support) using one SMBP device (or type of device, e.g., manual) versus another SMBP device (or type of device, e.g., automated).

Key Question 4 included studies that evaluated any SMBP. We evaluated the effect of SMBP on BP control as a predictor of clinical and surrogate outcomes.

Key Question 5 included studies that addressed the outcome of adherence with any type of SMBP monitoring. A prerequisite was that studies had to evaluate adherence rates based on specific predictors. We included any predictors of adherence with SMBP monitoring, with a primary interest in patient factors (e.g., demographics, medical or comorbid conditions, care setting).

Outcomes of Interest

Key Questions 1–4

The outcomes of interest were classified into three categories: clinical outcomes (e.g., mortality and cardiovascular events), surrogate outcomes (e.g., left ventricular hypertrophy and left ventricular mass index), and intermediate outcomes (e.g., BP control and number and change of antihypertensive medications).

  • Clinical outcomes (Key Questions 1a, 2, 3, & 4)
    • Cardiovascular events (myocardial infarction, angina, congestive heart failure, stroke, transient ischemic attack, peripheral vascular disease diagnosis or events)
    • Cardiovascular mortality (as defined by studies)
    • All-cause mortality
    • Patient satisfaction (any measurement tool, including satisfaction specifically with SMBP device)
    • Quality of life
    • Adverse events related to treatment with antihypertensive agents (e.g., hypotensive episodes or orthostatic falls)
  • Surrogate outcomes (Key Questions 1b, 2, 3, & 4)
    • Cardiac measures
      • Left ventricular hypertrophy by echocardiography
      • Left ventricular mass by echocardiography
      • Left ventricular mass index by echocardiography
  • Intermediate outcomes (Key Questions 1b, 2, & 3)
    • BP control (also predictor in Key Question 4)
      • Achieving a predefined change in BP (e.g., systolic BP reduction by 10 mmHg) or a predefined threshold (e.g., systolic BP <140 mmHg)
      • Systolic and diastolic BP or mean arterial pressure, which must be measured the same way in both groups. SMBP-measured BP can be an outcome only for Key Questions 2 & 3.
        • Clinic or other measurement by a health care professional
        • Ambulatory BP (as either mean wake or daytime, mean sleep or nighttime, or mean 24-hour BP)
      • Number and dose of hypertension medications or number of medication changes
    N.B. We did not extract or analyze data regarding how the BP was measured (beyond whether it was clinic, self-measured, or ambulatory). We did not extract body position (seated, prone), mandated rest periods, which readings were discarded, or whether measurements were based on single readings or averages of multiple readings, or other such BP measurement protocols.
    • Adherence to hypertension treatment.
    • Health care process measures such as health care encounters (visits or calls)
    • Not:
      • Diagnosis of hypertension
      • Diagnosis of white coat or masked hypertension
      • Diagnostic accuracy
  • Adherence with SMBP monitoring (Key Question 5)
    • Adherence (or compliance) with SMBP monitoring, including any measurements used by the studies

Eligible Study Designs

We included both published, peer-reviewed articles from the formal literature search and recent abstracts and other reports from the grey literature (unpublished and non-peer-reviewed data), though abstracts were described only in the text and were not included in Summary Tables. We included articles in any language (and used Google Translate [http://translate.google.com] and consulted foreign-language-speaking colleagues, when necessary).

SMBP Monitoring (Key Questions 1–4)

We included all comparative studies, including randomized controlled trials (RCTs), quasi-RCTs, and nonrandomized prospective studies. We excluded retrospective longitudinal studies. Studies must have had at least 8 weeks of followup. There was no minimum sample size threshold.

Adherence (Key Question 5)

We included prospective or retrospective longitudinal studies that analyzed at least 100 adults or at least 10 children who used SMBP monitoring for at least 8 weeks. The sample size threshold for adult studies was chosen to allow for adequate statistical analysis of the predictors. A lower threshold was chosen for pediatric studies due to expected sparseness of studies in this population. Case-control studies were excluded. Studies must have evaluated adherence rates based on predictors (for example age group ≥65 versus <65 years old), not predictor values based on adherence (for example adherers were on average X years old and nonadherers were on average Y years old). We included both univariable and multivariable analyses.

Data Extraction and Summaries

Two articles were extracted simultaneously by all researchers for training. Subsequently, each study was extracted by one experienced methodologist. The extraction was reviewed and confirmed by at least one other methodologist. Data were extracted into customized forms in Microsoft Word, designed to capture all elements relevant to the Key Questions. Separate forms were used for questions related to SMBP outcomes (Key Questions 1–4), and adherence with SMBP (Key Question 5) (see Appendix C for the data extraction forms). The forms were tested on several studies and revised before commencement of full data extraction.

Items common to both forms included first author, year, country, sampling population, recruitment method, whether multicenter or not, enrollment years, funding source, study design, inclusion and exclusion criteria, and specific population characteristics, including demographics such as age and sex, and BP. Both forms also included information on baseline medication use, additional interventions, and device accreditation.

For each outcome of interest, baseline, followup, and change-from-baseline data were extracted, including information on statistical significance. We either extracted data from all timepoints or, if a large number of timepoints were reported, selected the timepoints most commonly shared with other studies and noted that data for other timepoints were available. Adverse event data related to antihypertensive treatment or safety of treatment were extracted, if available.

For studies that reported analyses of predictors of adherence with SMBP (Key Question 5), full data were extracted for each reported predictor when analyses were performed from the perspective of the predictor (i.e., baseline age as a predictor of adherence, not the mean age of adherers versus nonadherers). All analyses (e.g., univariable and multivariable) were extracted.

Quality Assessment

We assessed the methodological quality of studies based on predefined criteria. We used a three-category grading system (A, B, or C) to denote the methodological quality of each study, as described in the AHRQ methods guide.36 This grading system has been used in most of the previous evidence reports generated by the EPC. It defines a generic grading scheme that is applicable to varying study designs, including RCTs, nonrandomized comparative trials, and cohort and case-control studies. For RCTs, we primarily considered the methods used for randomization, allocation concealment, and blinding, as well as the use of intention-to-treat analysis, the reporting of dropout rates, and the extent to which valid primary outcomes were described and clearly reported. For treatment studies, only RCTs could receive an A grade. Nonrandomized studies and prospective and retrospective cohort studies could be graded either B or C. For all studies, we considered (as applicable): the reporting of eligibility criteria; the similarity of the comparative groups in terms of baseline characteristics and prognostic factors; the reporting of intention-to-treat analysis; crossovers between interventions; important differential loss to followup between the comparative groups or overall high loss to followup; and the validity and adequacy of the description of outcomes and results.

A (good). Quality A studies have the least bias, and their results are considered valid. They generally possess the following: a clear description of the population, setting, interventions, and comparison groups; appropriate measurement of outcomes; appropriate statistical and analytic methods and reporting; no reporting errors; clear reporting of dropouts and a dropout rate less than 20 percent; and no obvious bias. For treatment studies, only RCTs may receive a grade of A.

B (fair/moderate). Quality B studies are susceptible to some bias, but not sufficiently to invalidate results. They do not meet all the criteria in category A due to some deficiencies, but none likely to introduce major bias. Quality B studies may be missing information, making it difficult to assess limitations and potential problems.

C (poor). Quality C studies have been adjudged to carry a significant risk of bias that may invalidate the reported findings. These studies have serious errors in design, analysis, or reporting and contain discrepancies in reporting or have large amounts of missing information.

Data Synthesis

We summarized all included studies in narrative form as well as in summary tables (see below) that condense the important features of the study populations, design, intervention, outcomes, and results. We divided study groups (or arms) into three categories: SMBP alone; SMBP with additional support; and control. For Key Question 1, we considered SMBP versus usual care. This included studies that compared SMBP alone versus usual care (or a reasonable variation of usual care), SMBP plus additional support versus usual care, and SMBP plus an additional support versus the same additional support. Thus, a study that compared SMBP plus an education program versus the education program alone was treated as a comparison of SMBP versus usual care (where the education program "cancels out"). In addition, in studies that included three or more groups (specifically either [1] SMBP alone, SMBP plus additional support, and control or [2] SMBP plus an additional support, SMBP plus a different additional support, and control), the direct comparisons of SMBP with control were treated as independent despite the reuse of the control group. For Key Question 2, we considered both [1] SMBP plus additional support versus SMBP alone and [2] SMBP plus an additional support versus SMBP plus a different additional support. Again, we cancelled out additional supports that were used in both study groups (e.g., use of an educational leaflet) and allowed multiple comparisons with the same comparator group.

For Key Questions 1 to 4, which evaluate the effect of an intervention on intermediate and clinical outcomes, we performed DerSimonian & Laird random effects model meta-analyses of differences of continuous variables between interventions where there were at least three studies that were deemed to be sufficiently similar in population and had the same comparison of interventions and the same outcomes.37 In practice this meant that meta-analyses were restricted to the comparison of SMBP monitoring alone (with no additional support) versus usual care. We did not attempt to meta-analyze the SMBP with heterogeneous additional support versus control comparisons. For each specific BP outcome, we performed separate meta-analyses at specific timepoints (e.g., 3 months, 1 year), chosen based on available relevant data. All timepoints with reported data from each study were included in the forest plots.
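As an illustration of the pooling step, the DerSimonian & Laird estimator can be sketched as follows. The study effect sizes and standard errors below are hypothetical, and this minimal stdlib implementation is shown only to make the calculation concrete; a production meta-analysis would use a vetted statistical package.

```python
import math

def dersimonian_laird(effects, ses):
    """Pool study effects with the DerSimonian & Laird random-effects
    model; returns the pooled effect, its SE, and tau^2."""
    w = [1 / se**2 for se in ses]                              # fixed-effect weights
    fe = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)   # fixed-effect mean
    q = sum(wi * (ei - fe)**2 for wi, ei in zip(w, effects))   # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                              # between-study variance
    w_re = [1 / (se**2 + tau2) for se in ses]                  # random-effects weights
    pooled = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    return pooled, se_pooled, tau2

# Hypothetical net changes in systolic BP (mmHg) from three trials.
effects = [-4.0, -2.5, -6.0]
ses = [1.5, 1.2, 2.0]
pooled, se_pooled, tau2 = dersimonian_laird(effects, ses)
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
```

The pooled estimate always lies within the range of the individual study effects, and tau² is floored at zero when Cochran's Q falls below its degrees of freedom.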

We preferentially evaluated the net change BP (the difference between the change in BP from baseline between the intervention of interest and the control intervention). However, when the net change could not be calculated (or if the study used a crossover design), we assessed the difference between final BP measurements.

However, a large number of studies did not report full statistical analyses of the net change or difference of final values. Where sufficient data were reported, we calculated these values and estimated their confidence intervals (CI). These estimates were included in the summary tables and were used for meta-analyses. In the summary tables we include only the P-values reported by the studies (not estimated P-values). If a study reported an exact P-value for the difference, we calculated the CI based on the P-value. When necessary, standard errors of the differences were estimated from reported standard deviations (or standard errors) of baseline and/or final values. For parallel trials, we arbitrarily assumed a 50 percent correlation of baseline and final values in patients receiving a given intervention. Likewise for crossover trials, we assumed a 50 percent correlation between final values after interventions (among the single cohort of patients). Thus in both cases we used the following equation to estimate the standard error (SE):

  • SE²difference = (SE_A)² + (SE_B)² − 2·r·(SE_A)·(SE_B)
  • where r = 0.5 and A and B are the correlated values.
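This standard error calculation can be expressed directly in code. The function below simply implements the stated equation with the report's assumed correlation of r = 0.5; the example inputs are hypothetical.

```python
import math

def se_of_difference(se_a, se_b, r=0.5):
    """SE of the difference between two correlated values (e.g., baseline
    and final BP in one arm), assuming correlation r between them."""
    var = se_a**2 + se_b**2 - 2 * r * se_a * se_b
    return math.sqrt(var)

# Hypothetical baseline and final SEs (mmHg) for one trial arm:
se_change = se_of_difference(2.0, 2.0)  # SE of the within-group change
```

Note that with r = 0.5 and equal SEs, the SE of the difference equals the SE of either value; with r = 0 (uncorrelated), it reduces to the usual square-root-of-sum-of-variances formula.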

For each meta-analysis the statistical heterogeneity was assessed with the I2 statistic, which describes the percentage of variation across studies that is due to heterogeneity rather than chance.38,39
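The I² statistic is computed from Cochran's Q and the degrees of freedom (number of studies minus one), truncated at zero; a minimal sketch with hypothetical Q values:

```python
def i_squared(q, num_studies):
    """I^2: percentage of across-study variation attributable to
    heterogeneity rather than chance, from Cochran's Q."""
    df = num_studies - 1
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100

# e.g., Q = 10 across 5 studies gives I^2 = 60 percent; when Q is
# below its degrees of freedom, I^2 is truncated to 0 percent.
```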

We performed two sets of sensitivity meta-analyses: first by including data from conference proceeding abstracts (which had not been included in the primary analyses)—this sensitivity analysis is presented in relevant forest plots (see Results); and second by excluding quality C studies to draw inferences from syntheses of quality A and B studies only—because this sensitivity analysis found no difference from the primary analysis, its results are described in the text only.

Evidence Tables

Evidence tables succinctly report measures of the main outcomes evaluated. The decision about which data to include in the evidence tables was made in consultation with the TEP. We included information regarding sampled population, country, study design, interventions, demographic information on age and sex, the study setting, number of subjects analyzed, dropout rate, and study quality. For continuous outcomes, we included the time point of ascertainment, the baseline values, the within-group changes (or final values for crossover studies), the net difference (or difference between final values) and its 95 percent CI and P-value. For categorical (dichotomous) outcomes, we report the time point of ascertainment, the number of events and total number of patients for each intervention and (usually) the risk difference and its 95 percent CI and P-value. If results were given for several timepoints, we included the longest timepoint up to and including 1 year as well as the longest timepoint beyond 1 year. If adjusted results were provided, we preferentially included these in the evidence tables and the meta-analyses, noting covariates for adjustment.

Each set of tables includes a study and patient characteristics table (which is organized in alphabetical order by first author). Results are presented in separate evidence tables for each outcome. Within these tables, the studies are ordered alphabetically. It should be noted that the P-value column includes the P-value reported in the articles for the difference in effect between the two interventions of interest. The table also includes the 95 percent CI about the net difference (or difference in final values, from crossover studies); however, in the large majority of cases, these numbers were estimated by the EPC based on reported standard deviations, standard errors, and P-values. This is noted in each table.

Grading a Body of Evidence for Each Key Question

We graded the strength of the body of evidence as per the AHRQ methods guide.36 Based on the division of outcomes within the Key Questions, we determined the strengths of evidence for the following three categories of outcomes: 1) BP (continuous and categorical outcomes); 2) other clinical, surrogate and intermediate outcomes, including quality of life and satisfaction; and 3) outcomes related to resource use. We further divided Key Question 1 into two sections: SMBP alone versus usual care; and SMBP and additional support versus usual care.

Risk of bias was defined as low, medium, or high based on the study design and methodological quality. We assessed the consistency of the data as either “no inconsistency” or “inconsistency present” (or not applicable if only one study). The direction, magnitude, and statistical significance of all studies were evaluated in assessing consistency, and logical explanations were provided in the presence of equivocal results.

We also assessed the relevance of the evidence. Studies with limited relevance either included populations that related poorly to the general population of adults with hypertension or had substantial problems with the measurement of the outcomes of interest. (As will be shown in the Results section, we found no studies conducted in children.) We also assessed the precision of the evidence based on the degree of certainty surrounding an effect estimate. A precise estimate was considered one that would allow a clinically useful conclusion; an imprecise estimate was one whose CI was wide enough to preclude a conclusion.

We rated the strength of evidence for a particular comparison for each outcome category using one of the following four labels (as per the AHRQ methods guide): High, Moderate, Low, or Insufficient. Ratings were assigned based on our level of confidence that the evidence reflected the true effect for the major comparisons of interest. Ratings were defined as follows:

High. There is a high level of assurance that the findings of the literature are valid with respect to the relevant comparison. No important scientific disagreement exists across studies. At least two quality A studies are required for this rating.

Moderate. There is a moderate level of assurance that the findings of the literature are valid with respect to the relevant comparison. Little disagreement exists across studies. Moderately rated bodies of evidence contain fewer than two quality A studies or such studies lack long-term outcomes of relevant populations.

Low. There is a low level of assurance that the findings of the literature are valid with respect to the relevant comparison. Underlying studies may report conflicting results. Low rated bodies of evidence could contain either quality B or C studies.

Insufficient. Evidence is either unavailable or does not permit estimation of an effect due to lacking or sparse data. In general, when only one study has been published, the evidence was considered insufficient, unless the study was particularly large, robust, and of good quality.

Overall Summary Table

To aid discussion, we summarized all studies and findings into one table in the Summary and Discussion (and the Executive Summary). Separate cells were constructed for each key question and subquestion. The table also includes the strength of evidence to support each conclusion.

Peer Review

The initial draft report was pre-reviewed by the TOO and an AHRQ Associate Editor (a senior member of a sister EPC). Following revisions, the draft report was sent to invited peer reviewers and was simultaneously uploaded to the AHRQ Web site where it was available for public comment for 30 days. All reviewer comments (both invited and from the public) were collated and individually addressed. The revised report and the EPC's responses to invited and public reviewers' comments were again reviewed by the TOO and the Associate Editor prior to completion of the report. The authors of the report had final discretion as to how the report was revised based on the reviewer comments, with oversight by the TOO and the Associate Editor.
