NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Institute of Medicine (US) Committee on Standards for Developing Trustworthy Clinical Practice Guidelines; Graham R, Mancher M, Miller Wolman D, et al., editors. Clinical Practice Guidelines We Can Trust. Washington (DC): National Academies Press (US); 2011.


4 Current Best Practices and Proposed Standards for Development of Trustworthy CPGs: Part 1, Getting Started

Abstract: As stated in Chapter 1, the committee was charged with identifying standards for the production of unbiased, scientifically valid, and trustworthy clinical practice guidelines. The following two chapters describe and present the rationale for the committee’s proposed standards, which reflect a review of the literature, public comment, and expert consensus on best practices for developing trustworthy guidelines. The standards and supporting text herein address several aspects of guideline development, including transparency, conflict of interest, guideline development team composition and group process, and finally, the determination of guideline scope and the chain of logic, including interaction with the systematic review team.


Chapters 4 and 5 detail aspects of the clinical practice guideline (CPG) development process, and the committee’s related proposed standards, over time, from considerations of transparency and conflict of interest (COI) to updating of guidelines. The proposed standards arose from the committee’s compliance with standard-setting methodologies elaborated on in Chapter 1. A standard is defined as a process, action, or procedure that is deemed essential to producing scientifically valid, transparent, and reproducible results. The committee expects its standards to be pilot-tested and evaluated for reliability and validity (including applicability), as described in detail in Chapter 7, and to evolve as the science and experience demand.

This chapter captures aspects of the beginnings of guideline development, including transparency, conflict of interest, guideline development team composition and group process, and determining guideline scope and logic, including interaction with the systematic review (SR) team. The committee hopes its proposed standards serve as an important contribution to advancing the work of numerous researchers, developers, and users of guidelines, and help to clarify where evidence and expert consensus support best practices and where there is still much to learn. An important note is that, although they are discussed in the text, no standards are proposed for certain aspects of the guideline development process (such as determining group processes, determining guideline scope, articulating the chain of logic underlying a guideline, incorporating patients with comorbidities, and weighing the impact of cost when rating the strength of recommendations), because the committee could not identify standards applicable to all guideline development groups (GDGs) in these areas at this time.


“Transparency” connotes the provision of information to CPG users that enables them to understand how recommendations were derived and who developed them. Increasing transparency of the guideline development process has long been recommended by authors of CPG development appraisal tools (AGREE, 2001; IOM, 1992; Shaneyfelt et al., 1999) and the following leading guideline development organizations: the U.S. Preventive Services Task Force (USPSTF), National Institute for Health and Clinical Excellence (NICE), American College of Cardiology Foundation/American Heart Association (ACCF/AHA), and American Thoracic Society. However, exactly what needs to be transparent and how transparency should be accomplished have been unclear. The desire to have public access to GDG deliberations and documents must be balanced with resource and time constraints as well as the need for GDG members to engage in frank discussion.

The committee found no comparisons in the literature of GDG approaches to achieving transparency, but did inspect policies of select organizations. The American Academy of Pediatrics transparency policy calls on guideline authors to make an explicit judgment regarding anticipated benefits, harms, risks, and costs (American Academy of Pediatrics, 2008).1 According to Schünemann and coauthors (2007, p. 0791) in an article concerning transparent development of World Health Organization (WHO) guidelines, “Guideline developers are increasingly using the GRADE (Grading Recommendations Assessment, Development and Evaluation) approach because it includes transparent judgments about each of the key factors that determine the quality of evidence for each important outcome, and overall across outcomes for each recommendation.”

Even clinical decisions informed by high-quality, evidence-based CPG recommendations are subject to uncertainty. An explicit statement of how evidence, expertise, and values were weighed by guideline writers helps users to determine the level of confidence they should have in any individual recommendation. Insufficient or conflicting evidence, inability to achieve consensus among guideline authors, legal and/or economic considerations, and ethical/religious issues are likely reasons that guideline writers leave recommendations vague (American Academy of Pediatrics, 2008). Instead, guideline developers should highlight which of these factors precluded them from being more specific or directive. When a guideline is written with full disclosure, users will be made aware of the potential for change when new evidence becomes available, and will be more likely to understand and accept future alterations to recommendations (American Academy of Pediatrics, 2008). Detailed attention to CPG development methods for appraising, and elucidating the appraisal of, evidentiary foundations of recommendations is provided in Chapter 5.

Transparency also requires statements regarding the development team members’ clinical experience and potential COIs, as well as the guideline’s funding source(s) (ACCF and AHA, 2008; AHRQ, 2008; Rosenfeld and Shiffman, 2009). Disclosing potential financial and intellectual conflicts of interest of all members of the development team allows users to interpret recommendations in light of the COIs (American Academy of Pediatrics, 2008). The following section in this chapter discusses in greater detail how to manage COIs among development team members. Ultimately, a transparent guideline should give users confidence that guidelines are based on the best available evidence, largely free from bias, clear about the purpose of recommendations to individual patients, and therefore trustworthy.

1. Establishing Transparency
1.1 The processes by which a CPG is developed and funded should be detailed explicitly and publicly accessible.


The Institute of Medicine’s 2009 report on Conflict of Interest in Medical Research, Education, and Practice defined COI as “A set of circumstances that creates a risk that professional judgment or actions regarding a primary interest will be unduly influenced by a secondary interest” (IOM, 2009, p. 46). A recent comprehensive review of COI policies of guideline development organizations yielded the following complementary descriptions of COI: “A divergence between an individual’s private interests and his or her professional obligations such that an independent observer might reasonably question whether the individual’s professional actions or decisions are motivated by personal gain, such as financial, academic advancement, clinical revenue streams, or community standing” and “A financial or intellectual relationship that may impact an individual’s ability to approach a scientific question with an open mind” (Schünemann et al., 2009, p. 565). Finally, intellectual COIs specific to CPGs are defined as “academic activities that create the potential for an attachment to a specific point of view that could unduly affect an individual’s judgment about a specific recommendation” (Guyatt et al., 2010, p. 739). Increasingly, CPG developers—including the American Heart Association, American Thoracic Society, American College of Chest Physicians, American College of Physicians, and World Health Organization—all have COI policies encompassing financial and intellectual conflicts (Guyatt et al., 2010; Schünemann et al., 2009).

The concept that COI can influence healthcare decision makers is widely recognized (Als-Nielsen, 2003; Lexchin et al., 2003). Therefore, it is disturbing that an assessment of 431 guidelines authored by specialty societies reported that 67 percent neglected to disclose information on the professionals serving on the guideline development panel, making even rudimentary evaluation of COI infeasible (Grilli et al., 2000). Furthermore, an investigation of more than 200 clinical practice guidelines within the National Guideline Clearinghouse determined that greater than half included no information about financial sponsors of guidelines or financial conflicts of interest of guideline authors (Taylor, 2005). Organizations developing practice guidelines thus need to improve management and reporting of COI (Boyd and Bero, 2000; Campbell, 2007; Jacobs et al., 2004).

Disclosure policies should relate to all potential committee members (including public/patient representatives) and should include all current and planned financial and institutional conflicts of interest. Financial (commercial or noncommercial) COI typically stems from actual or potential direct financial benefit related to topics discussed or products recommended in guidelines. Direct financial commercial activities include clinical services from which a committee member derives a substantial proportion of his or her income; consulting; board membership for which compensation of any type is received; serving as a paid expert witness; industry-sponsored research; awarded or pending patents; royalties; stock ownership or options; and other personal and family member financial interests. Examples of noncommercial financial activities include research grants and other types of support from governments, foundations, or other nonprofit organizations (Schünemann et al., 2009). A person whose work or professional group is fundamentally jeopardized, or enhanced, by a guideline recommendation is said to have an intellectual COI. Intellectual COI includes authoring a publication or acting as an investigator on a peer-reviewed grant directly related to recommendations under consideration. Finally, individuals with knowledge of relationships between their institutions and commercial entities with interests in the CPG topic are considered to have institutional COI. These include public/patient representatives from advocacy organizations receiving direct industry funding.

Biases resulting from COI may be conscious or unconscious (Dana, 2003) and may influence choices made throughout the guideline development process, including conceptualization of the question, choice of treatment comparisons, interpretation of the evidence, and, in particular, drafting of recommendations (Guyatt et al., 2010). A recent study of Food and Drug Administration Advisory Committees found that members regularly disclose financial interests of considerable monetary value, yet rarely recuse themselves from decision making. When members did recuse themselves, less favorable voting outcomes regarding the drug in question were observed across the majority of committee meetings (Lurie et al., 2006). A related investigation observed that 7 percent of guideline developers surveyed believed their relationships with industry affected their guideline recommendations; moreover, nearly 20 percent believed that guideline coauthors’ recommendations were subject to industry influence (Chaudhry et al., 2002). Regardless of the nature of COI or its effects on guideline development, perception of bias undermines guideline users’ confidence in guideline trustworthiness as well as public trust in science (Friedman, 2002).

Direct guideline funding by for-profit organizations also poses COI challenges. The development, maintenance, and revision of CPGs are costly, labor-intensive endeavors (American Academy of Physician Assistants, 1997). Many professional societies and other groups developing guidelines rely, at least in part, on commercial sponsors to cover costs. The perception that a for-profit commercial entity, including pharmaceutical and medical device companies in particular, had influenced the conclusions and recommendations of a CPG committee could undermine the trustworthiness of the GDG and its CPG (Eichacker et al., 2006; Rothman et al., 2009). Although the 2009 IOM Committee on COI in Medical Research, Education, and Practice found no systematic studies investigating the association between the guideline development process or CPG content and funding source, it did detail cases that raised concern about industry funding influence (IOM, 2009).

The controversy over Eli Lilly’s involvement with practice guidelines for treatment of severe sepsis, and the company’s marketing campaign for the drug rhAPC, highlight this issue. Although Eli Lilly and the sepsis guideline development group maintain that recommendations were based on high-quality randomized controlled trials (RCTs), many experts contend the group undervalued non-RCT studies of standard therapies and failed to address concerns about rhAPC’s adverse side effects. Because Eli Lilly was the predominant funder and many development panel members had relationships with the company, trust in integrity of the guideline recommendations was understandably low (Eichacker et al., 2006).

Some guideline experts have requested that professional medical organizations reject all industry funding for practice guidelines (Rothman et al., 2009) and hold GDG members to the most stringent COI standards (Sniderman and Furberg, 2009). The IOM’s 2009 report on conflict of interest suggests that adequate firewalls between funders and those who develop guidelines must exist (IOM, 2009).

However, the most knowledgeable individuals regarding the subject matter addressed by a CPG are frequently conflicted. These “experts” often possess unique insight into guideline-relevant content domains. More specifically, through their research or clinical involvement, they may be aware of relevant information about study design and conduct that is not easily identified. Although expert opinion is not a form of high-quality evidence, the observations of experts may provide valuable insight on a topic; those who have such insight may simply be without substitutes. Optimally, GDGs are made up of members who lack COIs. Experts who have unique knowledge about the topic under consideration—but who have COIs—can share their expertise with the GDG as consultants and as reviewers of GDG products, but generally should not serve as members of the GDG.

Strategies for Managing COI

Strategies for managing potential COI range from exclusion of conflicted members from direct panel participation, to restriction of their roles (such as serving only as formal or informal consultants, or recusal from deliberations on particular recommendations), to simple disclosure of COI. Although the 2009 IOM committee on COI found no systematic review of guideline development organizations’ conflict-of-interest policies, the committee did identify variations in the COI policies of select organizations. Specifically, COI policies vary with regard to the specific types of information that must be disclosed, who is responsible for managing conflicts and monitoring policy compliance, and whether COI procedures are transparent. Provisions for public disclosure of COI and managing relationships with funders also differ (IOM, 2009).

Although disclosure of guideline development members’ financial conflicts has become common practice, many experts are skeptical that disclosure alone minimizes the impact of conflicts (Guyatt et al., 2010). Hence, increasingly rigorous management strategies have been adopted by some organizations (Schünemann et al., 2009). These have included omission of those with COI from guideline development panels (WHO, 2008) and exclusion of conflicted persons from leadership positions (NICE, 2008). The USPSTF currently bars individuals who have earned more than $10,000 per year from medical expert testimony or related endeavors from serving on guideline panels. Lesser financial or intellectual conflicts may require disclosure to other panel members or recusal from specific recommendation deliberations, at the discretion of the USPSTF chair and vice chair and under the aegis of the Agency for Healthcare Research and Quality staff (AHRQ, 2008). The ACCF/AHA task force strives to balance conflict of interest, rather than remove it completely, and allows 50 percent of committee members to have industry relationships, but recuses those members from voting on relevant recommendations. The committee chair must also be free of any COI (ACCF and AHA, 2008).

Other COI management approaches—including mandating clearer separation of unconflicted methodologists from the influence of potentially conflicted clinical experts—are reflected in the American College of Chest Physicians Antithrombotic Guidelines (Guyatt et al., 2010). In this approach, unconflicted methodologists, such as epidemiologists, statisticians, healthcare researchers and/or “guidelineologists” (i.e., those with specific expertise in the guideline development process), lead the formulation of recommendations in collaboration with clinical experts who may be conflicted to a degree that would not preclude them from panel participation. Guyatt and coauthors advocate this strategy, stating that the key to developing unconflicted recommendations is that the responsibility for the final presentation of evidence summaries and rating of quality of evidence rests with unconflicted panel members, and in particular with the methodologist chapter editor (Guyatt et al., 2010). A 2010 examination of state-of-the-art COI management schemata for CPGs, performed by Shekelle et al. (2010), provides detailed insight for developers, as described below.

Preliminary Review and Management of COI

In selecting prospective participants for guideline development, disclosures typically are reviewed prior to the first meeting, and any conflicts of interest that appear unresolvable are investigated before members are appointed. The procedures (including step-by-step review and management) are described clearly as part of CPG development policy. Prospective members agree to divest any stocks or stock options whose value could be influenced by the CPG recommendations, and to refrain from participating in any marketing activities or advisory boards of commercial entities related to the CPG topic.

Disclosure of COI to Other Panel Members

Once members of a guideline panel have been assembled, any member COI is disclosed and discussed before deliberations begin. Individual participants (including project chairs and panelists) describe how their COI might affect specific recommendations. Disclosures and conflicts should be reviewed on an ongoing basis by those managing COI.

2. Management of Conflict of Interest (COI)
2.1 Prior to selection of the guideline development group (GDG), individuals being considered for membership should declare all interests and activities potentially resulting in COI with development group activity, by written disclosure to those convening the GDG:
Disclosure should reflect all current and planned commercial (including services from which a clinician derives a substantial proportion of income), noncommercial, intellectual, institutional, and patient–public activities pertinent to the potential scope of the CPG.
2.2 Disclosure of COIs within GDG:
All COI of each GDG member should be reported and discussed by the prospective development group prior to the onset of his or her work.
Each panel member should explain how his or her COI could influence the CPG development process or specific recommendations.
Members of the GDG should divest themselves of financial investments they or their family members have in, and not participate in marketing activities or advisory boards of, entities whose interests could be affected by CPG recommendations.
Whenever possible GDG members should not have COI.
In some circumstances, a GDG may not be able to perform its work without members who have COIs, such as relevant clinical specialists who receive a substantial portion of their incomes from services pertinent to the CPG.
Members with COIs should represent not more than a minority of the GDG.
The chair or cochairs should not be a person(s) with COI.
Funders should have no role in CPG development.


Guideline development involves technical processes (SRs of relevant evidence), judgmental processes (interpretation of SR and derivation of recommendations), and interpersonal processes (consensus building). The validity of guideline recommendations may be influenced adversely if any one of these processes is biased. Much less methodological attention has been devoted to studying and optimizing the judgmental and interpersonal processes than to ensuring the validity of the technical process (Gardner et al., 2009; Moreira, 2005; Moreira et al., 2006; Pagliari and Grimshaw, 2002; Pagliari et al., 2001). Fundamentally, the quality of the latter processes depends on composition of the group (whether the right participants have been brought to the table) and group process (whether the process allows all participants to be involved in constructive discourse surrounding implications of the systematic review).

Group Composition

Although the composition across prominent GDGs may vary, most commonly GDGs consist of 10 to 20 members reflecting 3 to 5 relevant disciplines (Burgers et al., 2003b). Clinical disciplines typically represented include both generalists and subspecialists involved in CPG-related care processes. Nonclinical disciplines typically represented include those of methodological orientation, such as epidemiologists, statisticians, “guidelineologists” (i.e., those with specific expertise in the guideline development process), and experts in areas such as decision analysis, informatics, implementation, and clinical or social psychology. It is important that the chair have leadership experience. Public representatives participate in a number of guideline development efforts and may include current and former patients, caregivers not employed as health professionals, advocates from patient/consumer organizations, and consumers without prior direct experience with the topic (Burgers et al., 2003b).

Empirical evidence consistently demonstrates that group composition influences recommendations. In a systematic review of factors affecting judgments achieved by formal consensus development methods, Hutchings and colleagues identified 22 studies examining the impact of individual participant specialty or profession. Overall, the authors observed that those who performed a procedure, versus those who did not, were more likely to rate more indications as appropriate for that procedure. In addition, in five individual studies comparing recommendations made by unidisciplinary and multidisciplinary groups, recommendations by multidisciplinary groups generally were more conservative (Hutchings and Raine, 2006). Murphy and colleagues (1998) offer other relevant findings in a systematic review in which they compared guideline recommendations produced by groups of varying composition. The authors concluded that differences in group composition may lead to contrasting recommendations; more specifically, members of a clinical specialty are more likely to promote interventions in which their specialty plays a part. Overall, the authors state: “The weight of the evidence suggests that heterogeneity in a decision-making group can lead to a better performance [e.g., clarity and creativity in strategic decision making due to fewer assumptions about shared values] than homogeneity” (Murphy et al., 1998, p. 33).

Fretheim and colleagues’ (2006a) analysis of six studies of CPGs, excluded from Murphy’s review, demonstrated that clinical experts have a lower threshold for recommending procedures they perform. In complementary findings, Shekelle et al. (1999) discovered that, given identical evidence, a single-subspecialty group will arrive at conclusions contrasting with those of a multidisciplinary group. Finally, an investigation of six surgical procedures by Kahan and colleagues (1996) suggests that 10 to 42 percent of cases considered appropriate for surgery by specialists who performed the procedure were considered inappropriate by primary care providers.

Lomas (1993) explains and offers implications of these findings as follows: first, limited evidentiary foundations for guideline development require supplementation by a variety of stakeholders; second, value conflicts demand resolution; and third, successful introduction of a guideline requires that all key disciplines contribute to development to ensure “ownership” and support. In complementary fashion, the IOM Committee to Advise the Public Health Service on Clinical Practice Guidelines in 1990 offered the following rationale in support of multidisciplinary guideline development groups: (1) they increase the likelihood that all relevant scientific evidence will be identified and critically assessed; (2) they increase the likelihood that practical problems in guideline application will be identified and addressed; and (3) they increase a sense of involvement or “ownership” among audiences of the varying guidelines (IOM, 1990).

Given these empirical and theoretical arguments, there is broad international consensus that GDGs should be multidisciplinary, with representation from all key stakeholders (ACCF and AHA, 2008; AGREE, 2003; NICE, 2009; SIGN, 2008). Rosenfeld and Shiffman (2009, p. S8) capture this sentiment in the following words: “every discipline or organization that would care about implementation [of the guideline] has a voice at the table.” This carries practical implications when convening a guideline development panel in terms of panel size, disciplinary balance, and resource support. Small groups may lack a sufficient range of experience. In their 1999 conceptualization of the CPG development process, Shekelle and colleagues (1999) assert that guideline reliability may increase in a multidisciplinary (and hence larger) group due to increased balancing of biases. More than 12 to 15 participants may result in ineffective functioning (Rosenfeld and Shiffman, 2009). Murphy and coauthors’ systematic review asserts that “having more group members will increase the reliability of group judgment,” but “large groups may cause coordination problems” (Murphy et al., 1998, p. 37). Furthermore, “It is likely that below about 6 participants, reliability (agreement across group members) (Richardson, 1972) will decline quite rapidly, while above about 12, improvements in reliability will be subject to diminishing returns” (Murphy et al., 1998). Of course, the specific number of participants and balance of disciplines should also be influenced by the guideline’s focus. Decisions about which categories of participants to involve in the guideline development group are then required. Here, as suggested above, guideline developers often have to weigh desire for wide representation against need for cohesiveness and efficiency.

However, GDG composition typically either is not multidisciplinary or is not reported in sufficient detail to permit such a characterization. Shaneyfelt and colleagues (1999), in their study of 279 guidelines representing a diversity of topics, demonstrated that only 26 percent of guidelines specified development group participants and their areas of expertise. In a complementary investigation of 431 guidelines authored by specialty societies, Grilli and colleagues (2000) discovered that 88 percent of guidelines did not explicitly describe the types of professionals involved in development. Only 28 percent showed evidence of participation of more than one discipline in authoring the guideline (Grilli et al., 2000).

Group Processes

A range of professional, cultural, and psychological factors can influence the process and content of guideline development panel meetings (Pagliari et al., 2001). GDGs undergo a socialization process (Tuckman, 1965). For example, during the first few meetings, much attention may be paid to developing interpersonal relations, setting group goals, establishing norms of behavior, and defining explicit and implicit roles. Such group-related issues may need to be addressed before much progress can be made on developing clinical recommendations. Group decision making involves three phases: orientation (defining the problem), evaluation (discussion of decision alternatives), and control (deciding prevailing alternatives) (Bales and Strodtbeck, 1951). Ideal conditions for group decision making are those enabling views of all parties to be expressed and considered before reaching a recommendation acceptable to the majority (Pagliari et al., 2001).

Dysfunctional group processes unduly encourage minority or majority views that may result in invalid or unreliable recommendations. These processes include minority influence (a single member or minority of group members sways the majority, often by capitalizing on small divisions in the group), group polarization (group dynamics lead to more extreme decisions than members would make individually), and “groupthink” (members’ desire for unanimity trumps objective appraisal of the evidence) (Pagliari et al., 2001). Multidisciplinary groups are particularly at risk here, as members vary in professional status, in the nature or depth of their specialist knowledge, and in their appreciation of the roles and modus operandi of their professional colleagues (Shekelle et al., 2010).

The risk of these biases can be reduced with careful planning and attention to small-group processes. The aim is to ensure that group processes fundamentally encourage inclusion of all opinions and grant adequate hearing to all arguments (Fretheim et al., 2006b). Although somewhat limited support exists for their effectiveness, informal and formal methods are available to assist in achieving these objectives. Moynihan and Henry surveyed international CPG or health technology assessment organizations and found that 42 percent claimed to apply formal consensus development methods (Moynihan and Henry, 2006). Burgers and coauthors discovered that 38 percent of guideline developers surveyed applied formal rather than informal methods to recommendation formulation (Burgers et al., 2003a).

Among informal approaches, a variety of strategies may be required to encourage positive group processes. Selection of the group leader is critical. Positive group leadership is characterized by an individual who is qualified and experienced in facilitation of optimal group processes (Fretheim et al., 2006b). The United Kingdom’s NICE asserts that this individual “needs to allow sufficient time for all members to express their views without feeling intimidated or threatened and should check that all members in the group agree to endorse any recommendations” (NICE, 2009, p. 42). Thus, this individual preferably is not an expert or subspecialist in a particular clinical domain (NICE, 2009). Further, the chair should be someone who is neutral and who has enough expertise in coordinating groups of health professionals and patients/caregivers that the appointment is acceptable to all (NICE, 2009).

A variant on this leadership form also has received support by guideline development leaders. Although one individual can be responsible for group process and task, if a group is especially large or the task is particularly complex, these support roles may be better divided between two persons, provided both they and the panel are clear about their differing functions (Shekelle et al., 2010). The committee suggests consideration of coleaders, such as a subspecialist and a generalist clinician, or coleaders representing differing clinical disciplines. Another informal approach to improving process centers on the role of technical experts. Technical support, typically found among researchers rather than clinicians, predominantly is required to identify and synthesize evidence, then present it to the GDG in a form allowing for derivation of recommendations. During guideline development, the technical expert should encourage the GDG to scrutinize the guideline repeatedly to guarantee its internal logic and clarity (Shekelle et al., 2010).

Several formal consensus development strategies are available to clinical practice guideline developers. Of these methods, the three most often applied include the Nominal Group Technique (NGT), the Delphi Method, and Consensus Conferences. These approaches reflect a variety of characteristics, including the use of questionnaires to elicit opinion, private elicitation of decisions, and formal feedback on group preferences. The specific character of their application varies in practice (Fretheim et al., 2006b).

As alluded to earlier, there remains a dearth of literature devoted to comparative analysis of the variety of formal and informal group process methods encouraging consensus development for producing guidelines. The comprehensive review by Murphy and coauthors and confirmatory work by Hutchings and Raine (2006) provide some insight into the relative merits of these strategies. In summary, this work suggests that formal methods generally perform as well as or better than informal ones. However, the relative effectiveness of one formal method versus another remains an open question (Hutchings and Raine, 2006; Murphy et al., 1998). Finally, Shekelle and Schriger, in comparing formal and informal consensus approaches to development of CPGs to treat low back pain, determined that the resultant guidelines were “qualitatively similar,” yet certain guideline statements arising from formal methods were relatively “more clinically specific” (Shekelle and Schriger, 1996). Overall, the committee believes that group process is enhanced by inclusion of all opinions relevant to a CPG, and by adoption of informal (e.g., group leadership) or formal (e.g., NGT, Delphi Method) methods for ensuring effective group process.

Guideline development leaders argue that it may be appropriate that the cost of guideline development include support for the adoption of methods to increase optimal group functioning (the group process) and achievement of aims (the group task) (Grimshaw et al., 1995).

Patient and Public Involvement

The principals involved in CPG development, typically healthcare professionals and scientific experts, can benefit from the input of patients and the public for several reasons. First, as a matter of transparency, detailed in preceding content, the involvement of one or more consumer representatives provides a window into the process and some assurance that guidelines were not developed “behind closed doors” to serve special interests rather than those of patients.

Second, patients and laypersons bring perspectives that clinicians and scientists often lack, and their involvement ensures that attention is paid to those individuals most deeply affected by guidelines. This input is important not only in deciding what to recommend, but also in how to present recommendations in ways that are understandable to patients and respectful of their needs. A study by Devereaux et al. (2001) found that patients and physicians assign different outcome values to stroke versus adverse side effects of treatment. Specifically, “Patients at high risk for atrial fibrillation placed more value on the avoidance of stroke and less value on the avoidance of bleeding than did physicians who treat patients with atrial fibrillation” (Devereaux et al., 2001, p. 1). Sensitivity to what matters most to those living with disease provides important context for decisions about the balance of benefits and harms as well as gaps in scientific evidence.

Third, consumer involvement acts as a safeguard against conflicts of interest that may skew judgment of clinical and scientific experts. The ability of consumers to resist recommendations favoring self-interest of a specialty or research enterprise can be an important countermeasure to imbalance in practice guidelines. Williamson (1998) proposed three types of patient representatives, depending on the contributions and skills each can bring: (1) fellow patients or patient surrogates (e.g., parents, caretakers) who would mainly present their own views; (2) a member of a patient group who presents the organization’s position; and (3) patient advocates who present knowledge of patient views. A systematic review found that patients and the public did not differ in their preferences for hypothetical health states (Dolders et al., 2006). This finding suggests that consumer representation in GDGs may serve as a good proxy for patients.

However, involvement of laypersons in practice guideline development may be problematic, particularly if they lack relevant training and scientific literacy. Guideline development panel discussions necessarily rely on clinical, technical, and methodological concepts and terminology that must be understood by consumer representatives. In some instances explanation is not provided and the consumer representative is unable to follow the discussion. A second challenge occurs when a consumer representative has a personal experience with the disease or an advocacy role that interferes with the ability to examine evidence and recommendations dispassionately. Such individuals may have difficulty divorcing their personal narrative or policy agenda from the systematic methods and analytic rules a GDG should follow. A panel’s orderly review of evidence and construction of recommendations can be sidelined by consumer representative objections and testimonials. The following findings emerged from observations of varying consumer participation methods in the North of England guideline development program. Individual patients who participated in a GDG contributed infrequently and had problems with the use of technical language. Although they contributed most in discussions of patient education, their contributions were not subsequently put into action. Within a “one off” or one-time meeting, participants again encountered problems with medical terminology and were most interested in sections on patient education and self-management. Their understanding of the use of scientific evidence to derive more cost-effective care practices was unclear (van Wersch and Eccles, 2001). Furthermore, a more recent study suggests that consumers hold many misconceptions about evidence-based health care, and are often skeptical of its value.
In fact, one study reported that consumers largely believe that more care—and more expensive care—constitutes better care, and that medical guidelines are inflexible (Carman et al., 2010). These misconceptions may act as barriers to effective shared decision making.

To mitigate these concerns, as with any member of a practice guideline development panel, selection criteria should be applied to choose a consumer representative who can consider the evidence objectively and make recommendations that depart from preconceived views or self-interest. Little is known about how best to select consumers for such tasks. A survey of members of the Guideline International Network Patient and Public Involvement Working Group (G-I-N PUBLIC) concluded that “the paucity of process and impact evaluations limits our current understanding of the conditions under which patient and public involvement is most likely to be effective” (Boivin et al., 2010, p. 1), but several public and private efforts are under way to identify the best systematic approaches.2 Identifying a consumer’s interests, experience, and skills in order to match them to the needs of the guideline development group will increase the likelihood of success. Unlike health professionals, most consumers will not have economic incentives or support encouraging their participation. Like health professionals, consumers will respond to efforts that are well organized and led, respect their time and effort, and result in a meaningful outcome. Eventually, systematic approaches to consumer involvement will improve on the prevailing “opportunistic” approach.

In addition to a patient and consumer advocate on the GDG, some groups elicit patient and public perspectives as part of a larger stakeholder input exercise. For example, a GDG might invite patients or other laypersons to review draft documents or attend a meeting to share perspectives. GDGs can host an open forum in which various stakeholder groups, such as patients, payers, manufacturers, and professional associations, are afforded the opportunity to express their viewpoints, present scientific evidence relevant to the guideline, or raise concerns about the impact or implementation of proposed recommendations. The advantage of this approach is that it exposes the GDG to information it might overlook and provides stakeholders with a sense of “being heard,” while allowing the panel to have private deliberations. In the North of England study, the workshop format was relatively resource intensive, but made it possible to explain technical elements of guideline development, enabling patients to engage in the process and make relevant suggestions. A patient advocate serving on a panel felt confident enough to speak and was accustomed to discussions with health professionals and to medical terminology (van Wersch and Eccles, 2001).

NICE has developed comprehensive policies to include consumers in its CPG development process. NICE created a patient involvement unit that emphasizes elicitation of stakeholder organization commentary across the development process; patient and caregiver committee representation; patient focus groups, written testimonials, and interviews; and dissemination and gathering of feedback regarding NICE guidance to patients by patients and patient organization implementers (Schünemann et al., 2006). Other organizations incorporating consumer and patient perspectives in guideline development processes include the Scottish Intercollegiate Guidelines Network (SIGN) and the UK National Health Service Health Technology Assessment Program (Schünemann et al., 2006).

Few empirical accounts of attempts to involve consumers exist (Carver and Entwistle, 1999). Because frameworks for consumer involvement are based on limited practical experience (Bastian, 1996; Duff et al., 1993), there is little consensus about how and when to involve consumers and what to expect from them during guideline development (van Wersch and Eccles, 1999). In 2006, the WHO Advisory Committee on Health Research conducted a critical review of processes for involving consumers in the development of guidelines to derive recommendations for improvement (Schünemann et al., 2007). Although Schünemann and colleagues (2006) identified no evidence for determining how best to involve consumers in CPG development, they did find support for approaches to consumer involvement in the scientific research process. More specifically, a study by Telford and colleagues (2004) identified eight principles for successful consumer involvement in research; among these, they call for an open and explicit process in which consumers are knowledgeable and/or trained in understanding evidence and are included in all steps of the developmental process. Schünemann and colleagues suggested these findings might be relevant to involving consumers in developing CPGs (Schünemann et al., 2006).

Hazards exist when guidelines are developed without sensitivity to public reactions, especially when a topic may become contentious. Many guidelines with the strongest scientific logic have foundered publicly when recommendations or rationales were misunderstood or ridiculed by patients, the media, or politicians. Therefore, consumers should be involved in all stages of guideline development. Although this is possible, it is not straightforward, and there is a clear need for further work on how this can best be achieved. In whatever form it takes, consumer input is helpful in alerting GDGs to public sentiments, to the need for proper messaging, and to the optics and reception that await the recommendations they fashion.

3. Guideline Development Group Composition
3.1 The GDG should be multidisciplinary and balanced, comprising a variety of methodological experts and clinicians, and populations expected to be affected by the CPG.
3.2 Patient and public involvement should be facilitated by including (at least at the time of clinical question formulation and draft CPG review) a current or former patient, and a patient advocate or patient/consumer organization representative in the GDG.
3.3 Strategies to increase effective participation of patient and consumer representatives, including training in appraisal of evidence, should be adopted by GDGs.


The idea that trustworthy clinical practice guidelines should be based on a high-quality SR of the evidence is beyond dispute (ACCF and AHA, 2008; AGREE, 2003; AHRQ, 2008; NICE, 2009; Rosenfeld and Shiffman, 2009; SIGN, 2008). The committee defines a high-quality systematic review as one meeting the standards described by the IOM Committee on Standards for Systematic Reviews of Comparative Effectiveness Research. However, the manner in which GDGs obtain SRs is highly variable, ranging from conducting reviews “in-house,” to entering a relationship where the SR is conducted specifically to inform the CPG (with varying levels of interaction between the two groups), to an “asynchronous” arrangement where SR and CPG activities are independent of one another. In the latter case, GDGs may use preexisting SRs to inform recommendations (asynchronous isolation model) or, as in the case of the National Institutes of Health Consensus Development Conference, work synchronously with an SR panel, but allow no interaction beyond the original clinical question(s) posed and final product delivered (ACCF and AHA, 2008; AHRQ, 2008; New Zealand Guidelines Group, 2001; NICE, 2009; NIH, 2010; Rosenfeld and Shiffman, 2009). Table 4-1 compares varying modes of interaction between systematic review teams and guideline developers and the membership, benefits, and concerns characteristic of each.

TABLE 4-1. Models of Interaction Between Clinical Practice Guideline (CPG) Groups and Systematic Review (SR) Teams.



The National Institutes of Health Consensus Development Conference holds that interaction between SR and CPG panels requires complete isolation of the experts interpreting and rating the evidence from those formulating guideline recommendations, to prevent clinical experts from biasing the SR results (NIH, 2010). The committee is critical of this isolationist approach because it inhibits knowledge exchange between clinical content experts and methodologists, potentially degrading each group’s ability to appreciate the nuances of the evidence and the clinical questions pertinent to the formulation of recommendations.

The committee understands that many GDGs of small professional societies review and rate evidence internally, as the interactive approach is infeasible for those with limited resources. However, these developers may not include methodological experts, and may lack training and skills in conducting high-quality SRs. The committee believes that if required to meet the standards set by the IOM’s Committee on Standards for Systematic Reviews of Comparative Effectiveness Research, many such organizations would require alternate means to secure evidence in support of recommendations. The committee encourages small professional societies to partner with other guideline development organizations or to use publicly funded SRs developed by the new federally funded private agency, the Patient-Centered Outcomes Research Institute (PCORI); PCORI is discussed again in Chapter 7.

Organizations such as the USPSTF and Kidney Disease: Improving Global Outcomes contract with an outside systematic review team to support their CPG development, but work closely with SR methodologists throughout the SR. The emphasis here is on increased intersection at multiple critical points across the SR process. Hence, the model allows for interaction between systematic review teams and guideline developers in response to developers’ concerns during literature review related to clinical questions or study parameters. In addition, interpretation and rating of the evidence requires particularly close interaction between systematic review teams and GDGs, as does derivation of clinical recommendations.

The following elaborates the “complete interaction” model of guideline developers and SR methodologists and further specifies the nature of intersection. The committee believes that an ongoing, interactive relationship between systematic review teams and guideline developers will increase validity and trustworthiness of the guideline development process. At the same time, the committee is aware that many variants of this model may be suitable across differing CPG development contexts. Prior to the first meeting, and as needed throughout the process, methodologists from the SR team may provide training to guideline development members on topics such as literature selection and the rating of evidence and recommendations. Reciprocally, clinical experts from the guideline group assist SR methodologists on the nuances of clinical questions, selection criteria (e.g., varying biases across United States and European Union investigations), and interpretation of study design and results. By the first meeting of the GDG and SR methodologists, understanding and agreement on the above topics as well as scope (breadth and depth) of the SR and supportive resources should be reached. At the second meeting, SR team members should present their findings to the GDG and the teams should jointly interpret evidence and discuss rating its quality. At this point, guideline developers may request more information (e.g., observational data for subpopulations or for harms), and highlight subtleties in research findings overlooked by SR methodologists (e.g., need to assign greater weight to quality of provider, drug dose, or adherence issues than to allocation blinding), which may alter evidence interpretations. In the interim, SR members may refine evidence tables, perform additional analyses requested by guideline developers, and provide feedback on developers’ evidence interpretations. 
When incorporating any new findings and interpretations, guideline development and SR group members may discuss draft guidelines and clinical recommendations’ ratings at the final meeting. Overall, across this entire process, requests for data or discussion are bidirectional.

The committee thoughtfully deliberated the extent to which it felt justified in prescribing a detailed CPG development methodology across all aspects of the development process, including the intersection of SR and CPG activities. As with any collaborative research enterprise, there often is very subtle negotiation, among varying persuasions, regarding what shall be investigated and how. The committee decided a highly specified prescription was inappropriate given the emergent state of the art of CPG development and a commitment to the standards’ generalizability.

4. Clinical Practice Guideline–Systematic Review Intersection
4.1 Clinical practice guideline developers should use systematic reviews that meet standards set by the Institute of Medicine’s Committee on Standards for Systematic Reviews of Comparative Effectiveness Research.
4.2 When systematic reviews are conducted specifically to inform particular guidelines, the GDG and systematic review team should interact regarding the scope, approach, and output of both processes.


Guideline development groups determine the scope and logic (formulation of key clinical questions and outcomes) of CPGs in a variety of ways. Though the committee found that no one approach rose to the level of a standard, it recognizes the importance of various associated components to the guideline development process. The committee therefore considered factors important in determining guideline scope, as well as the development of an analytic model to assist in identification of critical clinical questions and key outcomes, and exploration of the quality of varying evidence in a chain of reasoning.

Elaborating Scope

When elaborating guideline scope, GDG members need to consider a variety of clinical issues, including benefits and harms of different treatment options; identification of risk factors for conditions; diagnostic criteria for conditions; prognostic factors with and without treatment; resources associated with different diagnostic or treatment options; the potential presence of comorbid conditions; and patient experiences with healthcare interventions. These issues must be addressed in the context of a number of factors, including target conditions, target populations, practice settings, and audience (Shekelle et al., 2010).

Analytic Framework

GDGs optimally specify a chain of reasoning, or analytic logic, that defines which clinical questions must be answered to arrive at a recommendation, which types of evidence are relevant to those questions, and by what criteria that evidence will be evaluated and lead to clinical recommendations. Failure to do so may undermine the trustworthiness of guidelines by neglecting to define at the outset the outcomes of interest, the specific clinical questions to be answered, and the available evidence. The absence of these guideposts can become apparent as guideline development work unfolds. Failure to define key questions and to specify outcomes of interest and admissible evidence can result in wasted time, money, and staff resources to gather and analyze evidence irrelevant to recommendations. Poorly defined outcomes can obscure important insights in the evidence review process, resulting in incomplete or delayed examination of relevant evidence. Disorganized analytic approaches may result in the lack of a crisp, well-articulated explanation of the recommendations’ rationale. Poorly articulated or indirect evidence chains can make it difficult to discern which parts of the analytic logic are based on science or opinion, the quality of that evidence, and how it was interpreted. Readers can be misled into thinking that there is more (or less) scientific support for recommendations than actually exists. The ambiguity can also cause difficulty in establishing research priorities (Shekelle et al., 2010; Weinstein and Fineberg, 1980).

The visual analytic framework described here is one of a variety of potential approaches; the particular model is less important than the principles on which it is based. These principles include the need for guideline developers to take the following actions: (1) make explicit decisions at the outset of the analytic process regarding the clinical questions that need to be answered and the patient outcomes that need to be assessed in order to formulate a recommendation on a particular issue; (2) have a clear understanding of the logic underlying each recommendation; (3) use the analytic model for keeping the GDG “on track”; (4) be explicit about types of evidence or opinion, as well as the value judgments supporting each component of the analytic logic; and (5) transmit this information with clarity in the guideline’s rationale statement (discussed hereafter).

Explication of Outcomes

Guideline developers must unambiguously define outcomes of interest and the anticipated timing of their occurrence. Stating that a practice is “clinically effective” is insufficient. Specification of the outcomes (including magnitude of intervention benefits and harms) and time frames in which they are expected to occur, as reflected in a clinical recommendation, is required. The GDG must decide which health outcomes or surrogate outcomes will be considered. A health outcome, which can be acute, intermediate, or long term, refers to direct measures of health status, including indicators of physical morbidity (e.g., dyspnea, blindness, functional status, hospitalization), emotional well-being (e.g., depression, anxiety), and mortality (e.g., survival, life expectancy). Eddy defines these as “outcomes that people experience (feel physically or mentally) and care about” (Eddy, 1998, p. 10). This is a critical area for serious consideration of consumer input. Health outcomes are the preferred metric, but surrogate outcomes are sometimes used as proxies for health outcomes. Surrogate outcomes are often physiologic variables, test results, or other measures that are not themselves health outcomes, but that have established pathophysiologic relationships with those outcomes. The validity of a surrogate endpoint must be well established in order to accept it as a proxy for a health outcome endpoint. For example, for AIDS, the need for ventilator support, loss of vision, and death would be acute, intermediate, and long-term outcomes respectively, while increased CD4 cell counts or decreased viral-load measures represent surrogate outcomes (Fleming and DeMets, 1996). Guideline developers must determine which of these outcome classes must be affected to support a recommendation.

One Example of Guideline Logic: The Analytic Graphical Model

These potentially complex interrelationships can be visualized in a graphic format. A recent example of an analytic framework (Figure 4-1) was developed by the USPSTF in consideration of its guideline for osteoporosis screening (Nelson et al., 2010).

FIGURE 4-1. Analytic framework and KQs. NOTE: KQ = key question.

This diagrammatic approach, first described in the late 1980s, emerged from earlier advances in causal pathways (Battista and Fletcher, 1988), causal models (Blalock, 1985), influence diagrams (Howard and Matheson, 1981), and evidence models (Woolf, 1991).

Construction of the diagram begins with listing the outcomes the GDG has identified as important. This list of benefits and harms reflects key criteria the development group must address in arriving at a recommendation. Surrogate outcomes considered reliable and valid outcome indicators may then be added to the diagram. The interconnecting lines, or linkages, appearing in Figure 4-1 represent critical premises in logic or reasoning that require confirmation by evidence review to support related recommendations. KQ1 is the overarching question: Does risk factor assessment or bone measurement testing lead to reduced fracture-related morbidity and mortality? KQ2 through KQ6 address intermediate steps along the guideline logic or reasoning path, concerning the accuracy of risk factor assessment and bone measurement testing, and the potential benefits and harms of testing and treating persons identified as abnormal (Shekelle et al., 2010):

  • KQ2: Is the patient “low risk” or “high risk” for fracture-related morbidity and mortality?
  • KQ3: If a patient is “high risk” for fracture-related morbidity and mortality, are bone measurement test results normal or abnormal?
  • KQ4: If a patient is “high risk” for fracture-related morbidity and mortality, do harms associated with bone measurement testing outweigh benefits?
  • KQ5: If a patient’s bone measurement testing is abnormal, will treatment result in reduced fractures?
  • KQ6: If a patient’s bone measurement is abnormal, do treatment harms outweigh benefits?

Specification of the presumed relationships among acute, intermediate, long-term, and surrogate outcomes in a visual analytic model serves a number of useful purposes. It forces guideline developers to make explicit, a priori decisions about outcomes of interest in the derivation of a recommendation. It allows others to judge whether important outcomes are overlooked (Harris et al., 2001). It makes explicit a development group’s judgments regarding the validity of various indicators of outcome. The proposed interrelationships depicted in the diagram reveal group members’ assumptions pertinent to pathophysiologic relationships. They also allow others to make a general determination of whether the correct questions were asked at the outset (IOM, 2008).

Filling in the Evidence

Linkages in the visual reasoning model provide a “road map” to guide the evidence review. They specify a list of questions that must be answered to derive recommendations. This focused approach, in which the evidence review is driven by key questions, is more efficient than broad reviews of a guideline topic. A common error among guideline developers is to conduct an amorphous literature search with broad inclusion criteria. Because hundreds to thousands of data sources usually are available on any guideline topic, such an approach often retrieves many irrelevant citations. A targeted approach is more expeditious, less costly, and directed only to the specific issues that must be addressed to confirm the rationale for recommendations (AHRQ, 2009; Slavin, 1995).

In addition to defining questions to be answered in the literature review, linkages in the analytic framework keep the review process on track. Linkages serve as placeholders for documenting whether supporting evidence has been uncovered for a particular linkage and the nature of that evidence. By identifying which linkages have been “filled in” with evidence, the analytic framework provides a flowchart for tracking progress in evidence identification. It also serves as a checklist to ensure that important outcomes of interest are not neglected in the evidence review process (Harris et al., 2001).

Although the linkages define questions to be answered and provide placeholders for documenting results, they do not define the quality of evidence or its implications for recommendations. However, this graphical exercise may serve as a preliminary foundation for deriving clinical recommendations. Scanning linkages in the model directs CPG developers to each of the specific components of their reasoning that require evidence in support of recommendations, an assessment of the quality of that evidence, and an appraisal of the strength of a recommendation that can be made. The complexity of the quality of evidence and strength of recommendation appraisal activities is discussed fully in Chapter 5.

With regard to the greater state of the art of CPGs, the analytic model highlights the most important outcomes that, depending on the quality of available evidence, require consideration by future investigators in establishing the effectiveness of a clinical practice and the demand for guidelines. This information is essential, in an era of limited research resources, to establish priorities and direct outcomes research to fundamental questions. Finally, outcomes identified in the analytic model also provide a template for evaluating the effects of guidelines on quality of care (Shekelle et al., 2010).

The Rationale Statement

The composition of a clear rationale statement is facilitated by the analytic framework. The rationale statement summarizes the benefits and harms considered in deriving the recommendation, and why the outcomes were deemed important (including consideration of patient preferences); the GDG’s assumptions about relationships among all health and surrogate outcomes; and the nature of evidence upholding linkages. If the review uncovered linkages lacking supportive evidence, the rationale statement can speak to the role that opinion, theory, or clinical experience may play in arriving at a recommendation. The rationale statement may thereby provide clinicians, policy makers, and other guideline users with credible insight into underlying model assumptions. It also avoids misleading generalizations about the evidence, such as claiming a clinical practice is supported by “randomized controlled trials” when such evidence supports only one linkage in the analytic model. By sharing the blueprint for recommendations, the linkages in the analytic logic allow various developers to identify pivotal assumptions about which they disagree (Shekelle et al., 2010).


  • ACCF and AHA (American College of Cardiology Foundation and American Heart Association). 2008. Methodology manual for ACCF/AHA guideline writing committees. In Methodologies and policies from ACCF/AHA Task Force on Practice Guidelines. ACCF and AHA.
  • AGREE (Appraisal of Guidelines for Research & Evaluation). 2001. Appraisal of Guidelines for Research & Evaluation (AGREE) instrument.
  • AGREE. 2003. Development and validation of an international appraisal instrument for assessing the quality of clinical practice guidelines: The AGREE project. Quality and Safety in Health Care 12(1):18–23. [PMC free article: PMC1743672] [PubMed: 12571340]
  • AHRQ (Agency for Healthcare Research and Quality). 2008. U.S. Preventive Services Task Force procedure manual. AHRQ Pub. No. 08-05118-ef. http://www/uspstf08/methods/procmanual.htm (accessed February 12, 2009).
  • AHRQ. 2009. Methods guide for comparative effectiveness reviews (accessed January 23, 2009).
  • Als-Nielsen, B., W. Chen, C. Gluud, and L. L. Kjaergard. 2003. Association of funding and conclusions in randomized drug trials: A reflection of treatment effect or adverse events? JAMA 290:921–928. [PubMed: 12928469]
  • American Academy of Pediatrics. 2008. Toward transparent clinical policies. Pediatrics 121(3):643–646. [PubMed: 18310217]
  • American Academy of Physician Assistants. 1997. Policy brief: Clinical practice guidelines. http://www (accessed May 21, 2007).
  • Bales, R. F., and F. L. Strodtbeck. 1951. Phases in group problem-solving. Journal of Abnormal Social Psychology 46(4):485–495. [PubMed: 14880365]
  • Bastian, H. 1996. Raising the standard: Practice guidelines and consumer participation. International Journal for Quality in Health Care 8(5):485–490. [PubMed: 9117202]
  • Battista, R. N., and S. W. Fletcher. 1988. Making recommendations on preventive practices: Methodological issues. American Journal of Preventive Medicine 4(4 Suppl):53–67; discussion 68–76. [PubMed: 3079142]
  • Blalock, H. J., ed. 1985. Causal models in the social sciences, 2nd ed. Chicago, IL: Aldine.
  • Boivin, A., K. Currie, B. Fervers, J. Gracia, M. James, C. Marshall, C. Sakala, S. Sanger, J. Strid, V. Thomas, T. van der Weijden, R. Grol, and J. Burgers. 2010. Patient and public involvement in clinical guidelines: International experiences and future perspectives. Quality and Safety in Health Care 19(5):e22. [PubMed: 20427302]
  • Boyd, E. A., and L. A. Bero. 2000. Assessing faculty financial relationships with industry: A case study. JAMA 284(17):2209–2214. [PubMed: 11056592]
  • Burgers, J., R. Grol, N. Klazinga, M. Makela, J. Zaat, and AGREE Collaboration. 2003a. Towards evidence-based clinical practice: An international survey of 18 clinical guideline programs. International Journal for Quality in Health Care 15(1):31–45. [PubMed: 12630799]
  • Burgers, J. S., R. P. Grol, J. O. Zaat, T. H. Spies, A. K. van der Bij, and H. G. Mokkink. 2003b. Characteristics of effective clinical guidelines for general practice. British Journal of General Practice 53(486):15–19. [PMC free article: PMC1314503] [PubMed: 12569898]
  • Campbell, E. G. 2007. Doctors and drug companies—scrutinizing influential relationships. New England Journal of Medicine 357(18):1796–1797. [PubMed: 17978288]
  • Carman, K. L., M. Maurer, J. M. Yegian, P. Dardess, J. McGee, M. Evers, and K. O. Marlo. 2010. Evidence that consumers are skeptical about evidence-based health care. Health Affairs 29(7):1400–1406. [PubMed: 20522522]
  • Carver, A., and V. Entwistle. 1999. Patient involvement in SIGN guideline development groups. Edinburgh, Scot.: Scottish Association of Health Councils.
  • Chaudhry, S., S. Schroter, R. Smith, and J. Morris. 2002. Does declaration of competing interests affect readers’ perceptions? A randomised trial. BMJ 325(7377):1391–1392. [PMC free article: PMC138516] [PubMed: 12480854]
  • Dana, J. 2003. Harm avoidance and financial conflict of interest. Journal of Medical Ethics Online Electronic Version:1–18.
  • Devereaux, P. J., D. R. Anderson, M. J. Gardner, W. Putnam, G. J. Flowerdew, B. F. Brownell, S. Nagpal, and J. L. Cox. 2001. Differences between perspectives of physicians and patients on anticoagulation in patients with atrial fibrillation: Observational study. BMJ 323(7323):1218–1221. [PMC free article: PMC59994] [PubMed: 11719412]
  • Dolders, M. G. T., M. P. A. Zeegers, W. Groot, and A. Ament. 2006. A meta-analysis demonstrates no significant differences between patient and population preferences. Journal of Clinical Epidemiology 59(7):653–664. [PubMed: 16765267]
  • Duff, L. A., M. Kelson, S. Marriott, A. Mcintosh, S. Brown, J. Cape, N. Marcus, and M. Traynor. 1993. Clinical guidelines: Involving patients and users of services. British Journal of Clinical Governance 1(3):104–112.
  • Eddy, D. 1998. Performance measurement: Problems and solutions. Health Affairs 17(4):7–25. [PubMed: 9691542]
  • Eichacker, P. Q., C. Natanson, and R. L. Danner. 2006. Surviving sepsis—practice guidelines, marketing campaigns, and Eli Lilly. New England Journal of Medicine 355(16):1640–1642. [PubMed: 17050887]
  • Fleming, T. R., and D. L. DeMets. 1996. Surrogate end points in clinical trials: Are we being misled? Annals of Internal Medicine 125(7):605–613. [PubMed: 8815760]
  • Fretheim, A., H. J. Schünemann, and A. D. Oxman. 2006. a. Improving the use of research evidence in guideline development: Group composition and consultation process. Health Research Policy and Systems 4:15. [PMC free article: PMC1702349] [PubMed: 17134482]
  • Fretheim, A., H. J. Schünemann, and A. D. Oxman. 2006. b. Improving the use of research evidence in guideline development: Group processes. Health Research Policy and Systems 4:17. [PMC free article: PMC1702534] [PubMed: 17140442]
  • Friedman, P. J. 2002. The impact of conflict of interest on trust in science. Science and Engineering Ethics 8(3):413–420. [PubMed: 12353371]
  • Gardner, B., R. Davidson, J. McAteer, and S. Michie. 2009. A method for studying decision-making by guideline development groups. Implementation Science 4(1):48. [PMC free article: PMC2731071] [PubMed: 19656366]
  • Grilli, R., N. Magrini, A. Penna, G. Mura, and A. Liberati. 2000. Practice guidelines developed by specialty societies: The need for a critical appraisal. The Lancet 355(9198):103–106. [PubMed: 10675167]
  • Grimshaw, J., M. Eccles, and I. Russell. 1995. Developing clinically valid practice guidelines. Journal of Evaluation in Clinical Practice 1(1):37–48. [PubMed: 9238556]
  • Guyatt, G., E. A. Akl, J. Hirsh, C. Kearon, M. Crowther, D. Gutterman, S. Z. Lewis, I. Nathanson, R. Jaeschke, and H. Schünemann. 2010. The vexing problem of guidelines and conflict of interest: A potential solution. Annals of Internal Medicine 152(11):738–741. [PubMed: 20479011]
  • Harris, R. P., M. Helfand, S. H. Woolf, K. N. Lohr, C. D. Mulrow, S. M. Teutsch, and D. Atkins. 2001. Current methods of the U.S. Preventive Services Task Force: A review of the process. American Journal of Preventive Medicine 20(3 Suppl):21–35. [PubMed: 11306229]
  • Howard, R., and J. Matheson, eds. 1981. Readings on the principles and applications of decision analysis. Menlo Park, CA: Strategic Decisions Group.
  • Hutchings, A., and R. Raine. 2006. A systematic review of factors affecting the judgments produced by formal consensus development methods in health care. Journal of Health Services Research and Policy 11(3):172–179. [PubMed: 16824265]
  • IOM (Institute of Medicine). 1990. Clinical practice guidelines: Directions for a new program. Edited by M. J. Field and K. N. Lohr. Washington, DC: National Academy Press.
  • IOM. 1992. Guidelines for clinical practice: From development to use. Edited by M. J. Field and K. N. Lohr. Washington, DC: National Academy Press.
  • IOM. 2008. Knowing what works in health care: A roadmap for the nation. Edited by J. Eden, B. Wheatley, B. McNeil, and H. Sox. Washington, DC: The National Academies Press.
  • IOM. 2009. Conflict of interest in medical research, education, and practice. Edited by B. Lo and M. J. Field. Washington, DC: The National Academies Press.
  • Jacobs, A. K., B. D. Lindsay, B. J. Bellande, G. C. Fonarow, R. A. Nishimura, P. M. Shah, B. H. Annex, V. Fuster, R. J. Gibbons, M. J. Jackson, and S. H. Rahimtoola. 2004. Task force 3: Disclosure of relationships with commercial interests: Policy for educational activities and publications. Journal of American College of Cardiology 44(8):1736–1740. [PubMed: 15489117]
  • Kahan, J. P., R. E. Park, L. L. Leape, S. J. Bernstein, L. H. Hilborne, L. Parker, C. J. Kamberg, D. J. Ballard, and R. H. Brook. 1996. Variations by specialty in physician ratings of the appropriateness and necessity of indications for procedures. Medical Care 34(6):512–523. [PubMed: 8656718]
  • Lau, J. 2010. Models of interaction between clinical practice guidelines (CPG) groups and systematic review (SR) teams. Presented at IOM Committee on Standards for Developing Trustworthy Clinical Practice Guidelines meeting, January 12, Washington, DC.
  • Lexchin, J., L. A. Bero, B. Djulbegovic, and O. Clark. 2003. Pharmaceutical industry sponsorship and research outcome and quality: Systematic review. BMJ 326(7400):1167–1170. [PMC free article: PMC156458] [PubMed: 12775614]
  • Lomas, J. 1993. Making clinical policy explicit. Legislative policy making and lessons for developing practice guidelines. International Journal of Technology Assessment in Health Care 9(1):11–25. [PubMed: 8423109]
  • Lurie, P., C. M. Almeida, N. Stine, A. R. Stine, and S. M. Wolfe. 2006. Financial conflict of interest disclosure and voting patterns at Food and Drug Administration drug advisory committee meetings. JAMA 295(16):1921–1928. [PubMed: 16639051]
  • Moreira, T. 2005. Diversity in clinical guidelines: The role of repertoires of evaluation. Social Science and Medicine 60(9):1975–1985. [PubMed: 15743648]
  • Moreira, T., C. May, J. Mason, and M. Eccles. 2006. A new method of analysis enabled a better understanding of clinical practice guideline development processes. Journal of Clinical Epidemiology 59(11):1199–1206. [PubMed: 17027431]
  • Moynihan, R., and D. Henry. 2006. The fight against disease mongering: Generating knowledge for action. PLoS Medicine 3(4):e191. [PMC free article: PMC1434508] [PubMed: 16597180]
  • Murphy, E., R. Dingwall, D. Greatbatch, S. Parker, and P. Watson. 1998. Qualitative research methods in health technology assessment: A review of the literature. Health Technology Assessment 2(16):vii–260. [PubMed: 9919458]
  • Nelson, H. D., E. M. Haney, T. Dana, C. Bougatsos, and R. Chou. 2010. Screening for osteoporosis: An update for the U.S. Preventive Services Task Force. Annals of Internal Medicine 153(2):99–111. [PubMed: 20621892]
  • New Zealand Guidelines Group. 2001. Handbook for the preparation of explicit evidence-based clinical practice guidelines. http://www (accessed August 26, 2009).
  • NICE (National Institute for Health and Clinical Excellence). 2008. A code of practice for declaring and dealing with conflicts of interest. London, UK: NICE.
  • NICE. 2009. Methods for the development of NICE public health guidance, 2nd ed. London, UK: NICE.
  • NIH (National Institutes of Health). 2010. About the consensus development program. http://consensus (accessed July 20, 2010).
  • Pagliari, C., and J. Grimshaw. 2002. Impact of group structure and process on multidisciplinary evidence-based guideline development: An observational study. Journal of Evaluation in Clinical Practice 8(2):145–153. [PubMed: 12180363]
  • Pagliari, C., J. Grimshaw, and M. Eccles. 2001. The potential influence of small group processes on guideline development. Journal of Evaluation in Clinical Practice 7(2):165–173. [PubMed: 11489041]
  • Richardson, F. M. 1972. Peer review of medical care. Medical Care 10(1):29–39. [PubMed: 5007750]
  • Rosenfeld, R., and R. N. Shiffman. 2009. Clinical practice guideline development manual: A quality-driven approach for translating evidence into action. Otolaryngology–Head & Neck Surgery 140(6 Suppl 1):1–43. [PMC free article: PMC2851142] [PubMed: 19464525]
  • Rothman, D. J., W. J. McDonald, C. D. Berkowitz, S. C. Chimonas, C. D. DeAngelis, R. W. Hale, S. E. Nissen, J. E. Osborn, J. H. Scully, Jr., G. E. Thomson, and D. Wofsy. 2009. Professional medical associations and their relationships with industry: A proposal for controlling conflict of interest. JAMA 301(13):1367–1372. [PubMed: 19336712]
  • Schünemann, H. J., A. Fretheim, and A. D. Oxman. 2006. Improving the use of research evidence in guideline development: Integrating values and consumer involvement. Health Research Policy and Systems 4:22. [PMC free article: PMC1697808] [PubMed: 17147811]
  • Schünemann, H. J., S. R. Hill, M. Kakad, G. E. Vist, R. Bellamy, L. Stockman, T. F. Wisloff, C. Del Mar, F. Hayden, T. M. Uyeki, J. Farrar, Y. Yazdanpanah, H. Zucker, J. Beigel, T. Chotpitayasunondh, T. H. Tran, B. Ozbay, N. Sugaya, and A. D. Oxman. 2007. Transparent development of the WHO rapid advice guidelines. PLoS Medicine 4(5):0786–0793. [PMC free article: PMC1877972] [PubMed: 17535099]
  • Schünemann, H. J., M. Osborne, J. Moss, C. Manthous, G. Wagner, L. Sicilian, J. Ohar, S. McDermott, L. Lucas, and R. Jaeschke. 2009. An official American Thoracic Society policy statement: Managing conflict of interest in professional societies. American Journal of Respiratory Critical Care Medicine 180(6):564–580. [PubMed: 19734351]
  • Shaneyfelt, T., M. Mayo-Smith, and J. Rothwangl. 1999. Are guidelines following guidelines? The methodological quality of clinical practice guidelines in the peer-reviewed medical literature. JAMA 281:1900–1905. [PubMed: 10349893]
  • Shekelle, P. G., and D. L. Schriger. 1996. Evaluating the use of the appropriateness method in the Agency for Health Care Policy and Research clinical practice guideline development process. Health Services Research 31(4):453–468. [PMC free article: PMC1070131] [PubMed: 8885858]
  • Shekelle, P. G., S. H. Woolf, M. Eccles, and J. Grimshaw. 1999. Clinical guidelines. Developing guidelines. BMJ 318(7183):593–596. [PMC free article: PMC1115034] [PubMed: 10037645]
  • Shekelle, P. G., H. Schünemann, S. H. Woolf, M. Eccles, and J. Grimshaw. 2010. State of the art of CPG development and best practice standards. Committee on Standards for Developing Trustworthy Clinical Practice Guidelines commissioned paper.
  • SIGN (Scottish Intercollegiate Guidelines Network), ed. 2008. SIGN 50: A guideline developer’s handbook. Edinburgh, Scot.: SIGN.
  • Slavin, R. E. 1995. Best evidence synthesis: An intelligent alternative to meta-analysis. Journal of Clinical Epidemiology 48(1):9–18. [PubMed: 7853053]
  • Sniderman, A. D., and C. D. Furberg. 2009. Why guideline-making requires reform. JAMA 301(4):429–431. [PubMed: 19176446]
  • Taylor, I. 2005. Academia’s “misconduct” is acceptable to industry. Nature 436(7051): 626. [PubMed: 16079818]
  • Telford, R., J. D. Boote, and C. L. Cooper. 2004. What does it mean to involve consumers successfully in NHS research? A consensus study. Health Expectations 7(3):209–220. [PMC free article: PMC5060237] [PubMed: 15327460]
  • Tuckman, B. W. 1965. Developmental sequence in small groups. Psychological Bulletin 63:384–399. [PubMed: 14314073]
  • van Wersch, A., and M. Eccles. 1999. Patient involvement in evidence-based health in relation to clinical guidelines. In The evidence-based primary care handbook. Edited by M. Gabbay. London, UK: Royal Society of Medicine Press Ltd. Pp. 91–103.
  • van Wersch, A., and M. Eccles. 2001. Involvement of consumers in the development of evidence based clinical guidelines: Practical experiences from the north of England evidence based guideline development programme. Quality in Health Care 10(1):10–16. [PMC free article: PMC1743421] [PubMed: 11239138]
  • Weinstein, M. C., and H. V. Fineberg. 1980. Clinical decision analysis. Philadelphia, PA: W. B. Saunders.
  • WHO (World Health Organization). 2008. WHO handbook for guideline development. Geneva, Switz.: WHO.
  • Williamson, C. 1998. The rise of doctor–patient working groups. BMJ 317(7169):1374–1377. [PMC free article: PMC1114254] [PubMed: 9812941]
  • Woolf, S. 1991. AHCPR interim manual for clinical practice guideline development. AHCPR Pub. No. 91-0018. Rockville, MD: U.S. Department of Health and Human Services.



The committee did not examine whether GDGs followed the transparency policies of their parent organizations (e.g., whether AAP guidelines met the AAP’s own transparency standard).


AHRQ is currently funding the “Community Forum” effort through a contract with the American Institutes for Research. The Forum funds research on how to organize and deliver diverse stakeholder input into comparative effectiveness research. Several states (including Oregon, Washington, and Wisconsin) have long involved consumers in activities related to the evaluation of evidence in health decisions. Disease organizations, especially those addressing AIDS and breast cancer, have demonstrated that they can select and train consumers to be effective contributors to guideline processes. Personal communication, J. Santa, Consumers Union, December 22, 2010.

Copyright 2011 by the National Academy of Sciences. All rights reserved.
Bookshelf ID: NBK209537

