
McCormack L, Sheridan S, Lewis M, et al. Communication and Dissemination Strategies to Facilitate the Use of Health-Related Evidence. Rockville (MD): Agency for Healthcare Research and Quality (US); 2013 Nov. (Evidence Reports/Technology Assessments, No. 213.)


Methods

The methods for this systematic review generally follow those of the Agency for Healthcare Research and Quality's “Methods Guide for Effectiveness and Comparative Effectiveness Reviews.”65 In this section, we explain our topic refinement process and our literature search strategies (e.g., inclusion and exclusion criteria, search and retrieval process). We then describe our methods for abstracting relevant information from included articles and our approach to data synthesis. Finally, we discuss our criteria for rating the quality of individual studies and for grading the strength of the bodies of evidence for the major comparisons and outcomes of interest.

Topic Refinement and Review Protocol

To define the scope of our review and make it maximally responsive to stakeholders, such as guideline developers and policymakers, we engaged in a public process of developing and refining Key Questions (KQs) for the review. Initially, we engaged a panel of experts in health communication, guideline development and implementation, and risk communication to solicit input on the KQs that our research team proposed. Using that input, we refined the KQs, and AHRQ posted them on its website for public comment on March 5, 2012, for 4 weeks. We then drafted a protocol and recruited members of a technical expert panel to provide high-level content and methods expertise throughout the review process. Our key informants and technical experts included representatives from the following disciplines: communication sciences, social marketing, health behavior, epidemiology, dissemination and implementation sciences, and medicine.

Literature Search Strategy

In the Introduction, we set out the KQs in detail; Figure 1 provides the analytic framework that guided much of our work. As described below, we needed three sets of searches to cover the three main topics: (1) techniques to communicate medical evidence and how their effects vary among patients and clinicians (KQ 1a and 1b); (2) strategies to disseminate medical evidence and how their effects vary among patients and clinicians (KQ 2a and 2b); and (3) different ways to explain uncertain evidence (KQ 3).

Search Strategy

We systematically searched, reviewed, and synthesized the scientific evidence for each KQ separately. Databases included MEDLINE®, the Cochrane Library, Cochrane Central Trials Registry, PsycINFO, and the Web of Science. We did not conduct additional searches for gray literature.

To identify articles relevant to each KQ, the EPC librarian began with three focused MEDLINE searches on the topics noted above. We used a variety of medical subject headings (MeSH terms) and major headings, free-text terms, and title and abstract text-word searches (Table 6; Appendix B documents the exact search strings). Search results were limited to studies on humans published from January 1, 2000, to March 15, 2013, for communication and dissemination, given the prior systematic reviews, and from January 1, 1966, to March 15, 2013, for uncertainty, given the lack of prior reviews on this specific topic.

Table 6. Initial literature search terms for each of the targeted searches.

Using analogous search terms, the librarian searched the Cochrane Library and the Cochrane Central Trials Registry for trials on these topics. She searched PsycINFO for communication and uncertainty articles, given the high likelihood of relevant publications in the psychological literature, and the Web of Science to trace citations of known uncertainty frameworks and to capture additional articles on uncertainty.

To limit KQ 1 and KQ 2 results to the relevant comparative effectiveness literature, we restricted these searches to studies containing any of the following keywords: comparative effectiveness, evidence based or evidence-based, and recommendation or recommendations. We did not further refine KQ 3 results, given our broader approach to this literature.

We expected some overlap in results across these searches. We removed duplications in our EndNote database and tracked the yield from each search.
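The keyword filter and deduplication steps just described are mechanical, so a brief sketch may help make them concrete. The review team used EndNote for deduplication and tracked yields separately; the Python below is only an illustration of the stated logic, and the record structure, field names, and result lists are our own hypothetical assumptions.

# Illustrative sketch only: the review team used EndNote, not Python.
# "recommendation" also matches "recommendations" as a substring.
KEYWORDS = (
    "comparative effectiveness",
    "evidence based",
    "evidence-based",
    "recommendation",
)

def has_required_keyword(record: dict) -> bool:
    """KQ 1/KQ 2 filter: keep records whose title or abstract contains any required keyword."""
    text = (record.get("title", "") + " " + record.get("abstract", "")).lower()
    return any(keyword in text for keyword in KEYWORDS)

def deduplicate(records: list[dict]) -> list[dict]:
    """Drop duplicate records across the targeted searches, keyed on a normalized title."""
    seen: set[str] = set()
    unique = []
    for record in records:
        key = " ".join(record.get("title", "").lower().split())
        if key not in seen:
            seen.add(key)
            unique.append(record)
    return unique

# Hypothetical usage: merge the three targeted searches, remove duplicates,
# then apply the comparative effectiveness keyword filter to the KQ 1/KQ 2 yield.
# merged = deduplicate(kq1_results + kq2_results + kq3_results)
# kq12_candidates = [r for r in merged if has_required_keyword(r)]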

We hand-searched bibliographies of included articles. In addition, in an effort to avoid retrieval bias, we manually searched the reference lists of landmark studies and background articles on this topic to look for any relevant citations that electronic searches might have missed.

Inclusion and Exclusion Criteria: Overall

Criteria for inclusion and exclusion of studies address both the PICOTS model (population, interventions, comparators, outcomes, timeframes, and settings; see Introduction) and other important study design and publication issues. Table 7 presents the inclusion/exclusion criteria common to all three KQs; Table 8 defines the inclusion criteria applied to admissible research evidence for KQ 1 and KQ 2. We present other KQ-specific inclusion/exclusion criteria in Tables 9, 10, and 11, respectively.

Table 7. General inclusion/exclusion criteria for all Key Questions.

Table 8. Inclusion and exclusion criteria for research evidence to be communicated or disseminated.

Table 9. Included communication strategies and approaches for Key Question 1.

Table 10. Included dissemination strategies for Key Question 2.

A few specific decisions we made regarding inclusion and exclusion criteria bear special mention. First, to improve the overall quality of included findings, we focused on randomized trials with at least 100 total individuals in the study (e.g., 50 individuals per arm in a study with two arms) to prioritize studies with greater statistical power and less chance of confounding. Second, we limited our review to interventions that communicate and disseminate information to clinicians (a category that included physicians, nurses, midlevel providers, and pharmacists) and patients. Third, we limited included studies to those performed in the numerous countries specified in a recent analysis of the world systems of nations that are likely relevant for our target audiences.66
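Read together, these three decisions amount to a screening rule. The following Python sketch restates them under stated assumptions: the field names are invented, and the set of relevant countries (from the cited world-systems analysis) is left as a placeholder rather than reproduced.

# A loose sketch of the three screening decisions above; not the review's actual instrument.
ELIGIBLE_AUDIENCES = {"physician", "nurse", "midlevel provider", "pharmacist", "patient"}
RELEVANT_COUNTRIES: set[str] = set()  # countries from the cited analysis (ref. 66), not reproduced here

def passes_overall_criteria(study: dict) -> bool:
    """Apply the randomization, size, audience, and country criteria (illustrative only)."""
    is_randomized = study["design"] == "RCT"
    adequately_powered = study["total_n"] >= 100   # e.g., 50 per arm in a two-arm trial
    audience_eligible = study["audience"] in ELIGIBLE_AUDIENCES
    country_eligible = study["country"] in RELEVANT_COUNTRIES
    return is_randomized and adequately_powered and audience_eligible and country_eligible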

Finally, for communication and dissemination (KQ 1 and KQ 2), we searched only from 2000 to the present for two main reasons: (1) comparative effectiveness work became more common after 2000, and (2) multiple systematic reviews on communication and dissemination appeared after 2000, thus ensuring that we could capture relevant older literature through those publications.

For all Key Questions, we considered how to define the evidence base for the interventions we studied. In the end, because our review was designed to assist evidence developers, we decided that interventions for KQ 1 and KQ 2 must be based on evidence that had been systematically assembled, reviewed, presented, or used to make recommendations about clinical practice. Table 8 documents these criteria. By applying these criteria, we excluded studies communicating or disseminating evidence developed or assembled through a consensus process or created by individual researchers during a single study of any design. This allowed us to define clearly a set of studies that were attempting to communicate or disseminate evidence to end users. Further, it acknowledged the likely differences in the impact of interventions designed using evidence from established guideline developers versus other single studies or composites of studies. For KQ 3, we imposed no such limitation, given the overall paucity of evidence. Thus, we included interventions based on evidence of any type (e.g., systematic reviews, consensus guidelines, RCTs, cohort studies, quasi-experimental studies).

Inclusion and Exclusion Criteria Specific to Communication Techniques

For KQ 1, strategies of interest included tailored communication, communication targeted at audience segments, use of narratives, and message framing (Table 9). These strategies are designed to make information clearer, easier to understand, and more relevant to end users. We included studies that compared two or more of the included communication strategies head to head (e.g., tailoring versus targeting).

We also included studies in which a multicomponent approach, combining two or more communication strategies (e.g., tailoring and targeting), was compared with a single strategy. Multicomponent approaches seek to increase the overall impact of the information across geographic and practice settings and target audiences; they also aim to raise recipients’ understanding of the information.

We excluded studies that compared one of these communication strategies with “usual practice” (i.e., steps that are essentially standard procedures and do not represent any included strategies that serve as interventions of interest); prior reviews11,13 have already examined communication techniques against usual practice alone. We also excluded studies that compared permutations of the included communication strategies, for example, two different ways of using narratives. We excluded studies that examined interpersonal communication techniques, given that our focus was on the comparative effectiveness of techniques that evidence developers might use in developing evidence summaries for end users. Finally, given the volume of other research (e.g., from the Cochrane Collaboration) focusing on decision aids, we included studies of decision aids only when they were based on evidence-based guidelines and met the other inclusion criteria above. To be included, such studies must have used a decision aid as a communication strategy or dissemination technique.

Inclusion and Exclusion Criteria Specific to Dissemination Strategies

For KQ 2, we focused on active dissemination strategies, that is, efforts to spread evidence-based information via specific strategies and channels. We included active dissemination strategies designed to do one or more of the following (see Table 10): (1) increase the reach of information (e.g., postal and electronic mail; electronic/digital, social, and mass media); (2) increase people’s motivation to use and apply evidence (e.g., using champions, opinion/thought leaders, peer and social networks); and (3) increase people’s ability to use and apply evidence (e.g., by packaging information so that the factors likely to affect adoption are easy to find, by providing “how to” information that bridges the adoption-to-implementation divide with additional resources, or by skills-building efforts).

We included head-to-head comparisons between these broad strategies (e.g., increasing reach vs. increasing motivation) and comparisons within a broad strategy of different approaches sharing the same aim (e.g., increasing reach using social media vs. increasing reach using digital media). We also included multicomponent strategies that combined several dissemination strategies, concurrently or in sequence, to increase the reach of evidence, to enhance end users’ motivation to adopt evidence, or to enhance their ability to apply the evidence. To classify the primary comparison between study arms, we relied both on investigators’ statements of their primary comparison and on our own judgments about the key differences between arms. Often, other factors differed between study arms in addition to the stated comparisons. For instance, in a study comparing reach versus ability (i.e., skill training) for evidence on cardiovascular nutrition, the intervention might be delivered by trained research staff versus a trained nutritionist, or disseminated via mail versus the Internet. We noted these differences but did not control for them in any way, given that we performed a narrative synthesis. Many times the delivery method was confounded with the strategy, and there was no way to disentangle the effects. To address this issue, in a first stage of analysis we organized the evidence and summary tables by delivery approach. Organizing the studies in this way made no difference to the results, so we ultimately presented the results as shown in the Results chapter, an organization more consistent with our original intent.

We excluded studies that compared the above strategies with “usual practice.” In this context, usual practice means the passive, uncontrolled spread of evidence or no direct effort to spread information, such as posting information to an evidence developer’s website or posting scientific publications in a searchable database. The basic rationale is that passive dissemination strategies are generally not effective.32 We also excluded studies that compared enhanced versions of the same strategy (e.g., monthly telephone calls vs. weekly telephone calls).

When investigators did not describe what a control group involved, other than to call it “usual practice,” we excluded the study. In some instances the authors said that the control strategy was “usual practice”; upon examination, however, we reclassified it as an active comparator. For example, a study might have described mailing a guideline as usual practice (or usual care), but for the purposes of this review we considered that step to be the active strategy of “improving reach.”

We excluded studies in which the primary purpose of the intervention was implementation (see definition in the Introduction), even when the intervention seemingly raised awareness or educated patients or clinicians (such as reminders at the point of care or audit-and-feedback). An example of implementation is when a clinical practice adopts and tries out a new treatment approach based on newly available health or health care evidence. Thus, if investigators were exploring how clinicians put a communication or dissemination approach into practice and were evaluating what impact that had on their patients and patients’ outcomes, we considered that study (or that part of the study) to be implementation and either excluded the study or omitted the findings for the implementation portion.

Inclusion and Exclusion Criteria for Studies To Present Uncertain Evidence

Health-related and health care evidence inherently involves some degree of uncertainty. We focused this review on uncertainty in a body of evidence and how to communicate this uncertainty effectively to target audiences in ways that allow informed decisions. Given the early state of the literature on communicating uncertainty about evidence, our search for such studies was intentionally broad (i.e., inclusive) within the overall inclusion and exclusion criteria outlined above.

As defined in Table 3 in the Introduction, we examined studies that compared ways to explain seven types of uncertainty. Five come from the EPC program approach to grading strength of evidence: the overall grade for strength of evidence and the four principal domains used in deriving that grade—risk of bias, consistency, precision, and directness. We also considered studies that compared ways to explain the net benefit (i.e., the balance of benefits and harms at a population level) of preventive and therapeutic services. Rather than limit conceptualizations of net benefit, we included several broad categories of studies, including those that addressed (1) alternate wording schemes for the same net benefit, (2) the effect of presenting different harms and benefits for the same services (allowing the evidence user to interpret net benefit), and (3) the effects of framing the net benefit information in the context of other, more beneficial services. Finally, we looked at the issues of applicability (i.e., generalizability, or what is sometimes termed external validity) and the overall strength of recommendations delivered by policymakers.

Because our interest was in communicating uncertainty, we included any communication strategy that investigators used to communicate uncertainty. These could include non-numeric, numeric, or visual presentations of uncertainty or presentations using any of the communication techniques included for KQ 1 as shown in Table 11 below.

Table 11. Included communication strategies and approaches for Key Question 3.

Unlike KQ 1 and KQ 2, we did not require that studies included for KQ 3 communicate uncertainty related to systematic reviews or guideline evidence. Instead, because of the overall paucity of evidence, we included studies communicating uncertainty about any type of evidence (e.g., RCTs, cohort studies, quasi-experimental studies, unspecified evidence sources) in either real-world settings or hypothetical examples.

The following topics, although important, were beyond the scope of this review. We did not examine interventions designed to help individuals cope with uncertainty. We also excluded studies that compared alternative presentations of point estimates, as previous reviews on risk communication have summarized these studies well.44-51 Finally, because our focus was on alternate ways of communicating uncertainty related to the quality, net benefit, and generalizability of well-synthesized medical evidence, we excluded studies that addressed uncertainty related to multiple causes of illness, changes in risks over time, lack of an individual’s knowledge about available evidence, unclear values, tradeoffs in care prompted by limited-resource settings, concerns about clinicians’ competence, concerns about how a medical illness will affect family and friends, imperfect diagnostic testing, and uncertain prognosis.

Study Selection

Two trained members of the research team independently reviewed all titles and abstracts identified through searches for eligibility in terms of the overall or KQ-specific inclusion/exclusion criteria. Studies marked for possible inclusion by either reviewer underwent a full-text review. For studies without adequate information to determine inclusion or exclusion, we retrieved the full text and then made the determination. We tracked all results in an Excel database.

We retrieved and reviewed the full text of all articles included during the title/abstract review phase. Again, two trained members of the research team independently reviewed each full-text article for inclusion or exclusion on the basis of the eligibility criteria described earlier. If both reviewers agreed that a study did not meet the eligibility criteria, we excluded it. If the reviewers disagreed, conflicts were resolved by discussion and consensus or by consulting a third, senior member of the review team. All results were tracked in an EndNote database. Appendix B lists all studies excluded at this stage and the main reasons for exclusion. The disposition of all items, from the initial yields of the searches through the articles finally retained for synthesis, is reported in a flow diagram conforming to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) standards (see Figure 2 in the Results for KQ 1 section).67 We accounted for studies reported in multiple articles.
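The two-stage screening workflow can be summarized as a simple decision rule: a single “include” vote advances a study past title/abstract review, while exclusion at full text requires agreement, with conflicts escalated. A minimal sketch, with function and label names of our own choosing:

def title_abstract_decision(vote_1: str, vote_2: str) -> str:
    """Advance a study to full-text review if either reviewer marks it for possible inclusion."""
    return "full_text_review" if "include" in (vote_1, vote_2) else "exclude"

def full_text_decision(vote_1: str, vote_2: str) -> str:
    """At full text, act only on agreement; otherwise escalate the conflict."""
    if vote_1 == vote_2:
        return vote_1  # joint include or joint exclude
    return "resolve_by_discussion_or_third_senior_reviewer"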

Data Extraction

For studies that met inclusion criteria, trained reviewers extracted relevant information into specifically designed abstraction spreadsheets to facilitate the capture of all pertinent information from each article, including study design, characteristics of study populations, interventions, comparators, outcomes, settings, and results (Table 12). A second member of the team reviewed all abstractions against the accompanying article(s) for completeness and accuracy. Final approved abstraction spreadsheets were compiled and presented as evidence tables, which can be found in Appendixes D, E, and F. These evidence tables formed the basis for the summary tables presented in the results sections to supplement text about synthesis of the evidence.

Table 12. Data items extracted.

We relied on the analyses and comparisons provided by the authors. However, the review team had to calculate differences between groups (e.g., in mean values on a scale or in percentages). We did this by subtracting the value for the intervention arm thought to have, or originally hypothesized to have, the greater effect from the value for the arm expected to have the weaker effect. Because this choice of directionality led to numerous negative differences, for KQ 1 the table indicates whether the difference was negative or positive and notes which group the findings favored. By favored, we mean which study group had the better result, namely a positive health behavior. For KQ 2 articles, the study authors sometimes did not predict which arm should have the greater effect; in those cases we report the absolute difference that emerged. These detailed findings are shown in Appendix E.
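The directionality convention described above is a signed subtraction: the value for the arm hypothesized to be stronger is subtracted from the value for the arm expected to be weaker, so a negative difference means the findings came out as hypothesized. A sketch, assuming higher outcome values represent the better (positive) health behavior and using invented percentages:

def signed_difference(expected_weaker_value: float, expected_stronger_value: float):
    """Difference computed with the review's convention: the arm hypothesized
    to have the greater effect is subtracted from the arm expected to be weaker.
    A negative difference favors the arm hypothesized to be stronger."""
    diff = expected_weaker_value - expected_stronger_value
    if diff < 0:
        favored = "arm hypothesized to be stronger"
    elif diff > 0:
        favored = "arm expected to be weaker"
    else:
        favored = "neither"
    return diff, favored

# Hypothetical example: the tailored arm was hypothesized to be stronger and
# achieved 68% adherence; the comparator achieved 62%.
diff, favored = signed_difference(62.0, 68.0)  # -> (-6.0, "arm hypothesized to be stronger")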

Risk of Bias Assessment of Individual Studies

To assess the risk of bias of individual studies, we used criteria described in the AHRQ “Methods Guide for Effectiveness and Comparative Effectiveness Reviews.”65 We used questions adapted from the RTI Item Bank,68 the Cochrane Risk of Bias tool,69 and prior work by the U.S. Preventive Services Task Force.61 We assessed the potential for selection bias (including attrition bias), measurement bias (such as performance bias, detection bias), confounding, and power. We also assessed potential biases in reporting.

We qualitatively synthesized the results to determine a rating of low, medium, or high risk of bias. In general, a study with a low risk of bias had a strong design, measured outcomes appropriately, used appropriate statistical and analytical methods, reported low attrition and little or no differential attrition, and reported methods and outcomes completely. Studies with a medium risk of bias had some bias, but not enough to invalidate results, and did not meet all criteria required for low risk of bias. These studies may have had some flaws in design or execution (e.g., imbalanced recruitment, high attrition), but they provided information (say, through sensitivity analyses) that enabled the reader to determine that those flaws were unlikely to cause major bias. Missing information often led to ratings of medium rather than low risk of bias. Studies with a high risk of bias had at least one major flaw that was likely to cause significant bias and thus might have invalidated the results. Major flaws preclude the ability to draw causal inferences between the intervention and the outcome.
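Although these ratings were reached qualitatively rather than by algorithm, the logic can be restated roughly as a three-way rule. A loose sketch only; the team's actual criteria and results appear in Appendix C:

def risk_of_bias_rating(major_flaws: bool, minor_flaws: bool, missing_information: bool) -> str:
    """Rough restatement of the qualitative rating logic (not the team's actual instrument)."""
    if major_flaws:
        return "high"    # at least one flaw likely to cause significant bias
    if minor_flaws or missing_information:
        return "medium"  # some bias, but not enough to invalidate results
    return "low"         # strong design, appropriate analysis, complete reporting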

Two reviewers independently assessed the risk of bias for each study (see Appendix C for the final criteria we used and results). They resolved disagreements by discussion and consensus or by consulting a third, senior member of the team.

Data Synthesis: Overall

Studies included in our review compared a wide range of interventions and a plethora of outcomes; they were sufficiently heterogeneous to preclude meta-analysis. Thus, we synthesized the data qualitatively by KQ. We paid particular attention to moderators of study effects as a way to explain any seemingly disparate findings. Possible moderators of interest for all Key Questions included risk of bias, study size, and target audience. We did not retain studies with a high risk of bias for analysis, presentation in the results sections, or strength of evidence grading.

Data Synthesis: Methods Specific to Key Questions

As noted in the Introduction, we organized our report into three results sections, one per KQ: communication, dissemination, and uncertainty. Within each section, we organized our results by the types of intervention strategies compared and then, where possible, by outcomes.

For each subset of studies, we summarized key findings, including results in the experimental or quasi-experimental and the comparator groups and absolute differences between groups. If investigators did not report absolute differences between groups, we recorded the effect size that the authors had reported and calculated an absolute difference ourselves. This approach gave us the best clinical interpretation of the data.

In addition to the overall moderators of interest noted earlier, we also examined moderators specific to each KQ. These included:

  • Communication techniques

    Health literacy/numeracy level of audience

    Intervention intensity or complexity (or both)

    Message delivery setting

    Message source

  • Dissemination strategies

    Care delivery setting

    Type of media, mode, or channel (e.g., intervention format and delivery agent)

  • Techniques for communicating uncertainty

    Health literacy or numeracy of audience

    Format of presentation (graphic, numeric, non-numeric, combination).

Strength of the Body of Evidence

We graded the strength of evidence on the basis of guidance established for the EPC program.65,70 The EPC approach incorporates four required domains: risk of bias (including study design and aggregate quality), consistency, directness, and precision of the evidence. Table 13 defines the four overall grades for bodies of evidence that can be assigned. Grades reflect the confidence that we have in the ability of the evidence to answer the KQs on the comparative effectiveness of the interventions in this review.

Table 13. Definitions of the grades of overall strength of evidence.

Two reviewers independently rated the four domains for each intervention for each key outcome (listed in the analytic framework depicted in Figure 1); conflicts were resolved by group consensus. Two reviewers also independently derived the overall strength of evidence grade (resolving conflicts in the same way).

We adopted some conventions for assigning overall grades. First, when no studies were available or studies provided conflicting results, we graded evidence as insufficient. Second, when we had a single study, we graded evidence as low given that it was impossible to assess the consistency of evidence across settings and results would very likely change with additional testing.

We judged evidence as precise if (1) confidence intervals were available, were reasonably narrow, and did not cross minimal clinically important differences or the null; or (2) confidence intervals were not available but the sample size was at least 400, a relatively conservative threshold. We judged evidence as imprecise if (1) the confidence intervals crossed minimal clinically important differences or the null, or (2) the sample size was less than 400. We also considered statistical significance.
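These rules reduce to a small decision procedure. In the sketch below, the confidence interval conditions and the 400-participant threshold come from the text; the function signature and argument names are our own assumptions:

from typing import Optional, Tuple

def judge_precision(ci: Optional[Tuple[float, float]],
                    crosses_mcid_or_null: bool,
                    sample_size: int) -> str:
    """Apply the review's precision rules (illustrative restatement only)."""
    if ci is not None:
        # Requires a reasonably narrow interval avoiding the MCID and the null.
        return "imprecise" if crosses_mcid_or_null else "precise"
    # No CI reported: fall back on the conservative sample-size threshold.
    return "precise" if sample_size >= 400 else "imprecise"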

To judge the strength of evidence (SOE) based on a single study, we applied the following criteria: (1) if the study was imprecise and the risk of bias was moderate, we rated the SOE insufficient; (2) if it was precise and the risk of bias was moderate, we rated the SOE low; (3) if it was precise and the risk of bias was low, we rated the SOE low (but discussed the rating as a team if the study was extremely large and spanned multiple sites, allowing consistency to be assessed).
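The single-study criteria likewise form a small lookup on precision and risk of bias. A sketch; the fallback for combinations the text does not enumerate is our own conservative assumption:

def single_study_soe(precision: str, risk_of_bias: str) -> str:
    """SOE for a body of evidence consisting of a single study (illustrative)."""
    if precision == "imprecise" and risk_of_bias == "moderate":
        return "insufficient"
    if precision == "precise" and risk_of_bias in ("moderate", "low"):
        # With low risk of bias, an extremely large multisite study prompted a
        # team discussion of whether consistency could be judged; default was low.
        return "low"
    return "insufficient"  # assumed default for combinations the text does not list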

We present a summary of the strength of evidence for each intervention in the results section. Detailed strength of evidence tables can be found in Appendixes D, E, and F.

Applicability

We examined the applicability of the body of evidence for specific KQs by looking at characteristics that may limit applicability based on the PICOTS structure.65,71 Such conditions may be associated with heterogeneity of treatment effect and the ability to generalize the effectiveness of an intervention to use in everyday practice. Examples of issues that may limit applicability include the following:

  • Population: narrow eligibility criteria;
  • Outcomes: different preventive behaviors or clinical conditions;
  • Settings: restrictions to certain types of health care or other institutions when the communication or dissemination activities might be carried out in many different locales or venues; and
  • Timing: studies of different durations or points of followup that may have various implications for applicability.

Peer Review and Public Commentary

Experts in the field and individuals representing stakeholder and user communities were invited to provide external peer review of this systematic review. They were charged with commenting on the content, structure, and format of the evidence report, providing additional relevant citations, and pointing out issues related to how we conceptualized the topic and analyzed the evidence. Our Peer Reviewers (listed in the front matter) gave us permission to acknowledge their review of the draft. AHRQ staff and an associate editor also provided comments. In addition, the Scientific Resource Center posted the draft report on the AHRQ Web site (effectivehealthcare.ahrq.gov) for 4 weeks to elicit public comment. We addressed all reviewer comments, revising the text as appropriate, and documented everything in a “disposition of comments report” that will be made available 3 months after the Agency posts the final systematic review on the AHRQ Web site.
