Methods

PubMed Health. A service of the National Library of Medicine, National Institutes of Health.

McDonagh MS, Peterson K, Thakurta S, et al. Drug Class Review: Pharmacologic Treatments for Attention Deficit Hyperactivity Disorder: Final Update 4 Report [Internet]. Portland (OR): Oregon Health & Science University; 2011 Dec.

  • This publication is provided for historical reference only and the information may be out of date.



Literature Search

To identify relevant citations, we searched the Cochrane Central Register of Controlled Trials (2nd Quarter 2011), Cochrane Database of Systematic Reviews (2005 to June 2011), MEDLINE (1996 to June Week 4 2011), and PsycINFO (1806 to June Week 4 2011) using terms for included drugs, indications, and study designs (see Appendix D for complete search strategies). We attempted to identify additional studies through searches of the reference lists of included studies and reviews, including the US Food and Drug Administration Center for Drug Evaluation and Research website for medical and statistical reviews of individual drug products. Finally, we requested dossiers of published and unpublished information from the relevant pharmaceutical companies for this review. All received dossiers were screened for studies or data not found through other searches. All citations were imported into an electronic database (EndNote® X2, Thomson Reuters).

Study Selection

Selection of included studies was based on the inclusion criteria created by the Drug Effectiveness Review Project participants. Two reviewers independently assessed titles and abstracts of citations identified through literature searches for inclusion using the criteria below. Full-text articles of potentially relevant citations were retrieved and again were assessed for inclusion by both reviewers. Disagreements were resolved by consensus. Results published only in abstract form were not included because inadequate details were available for quality assessment.

Data Abstraction

We abstracted information on population characteristics, interventions, subject enrollment, and discontinuation and results for efficacy, effectiveness, and harms outcomes for trials, observational studies, and systematic reviews. We recorded intent-to-treat results when reported. If true intent-to-treat results were not reported, but loss to follow-up was very small, we considered these results to be intent-to-treat results. In cases where only per-protocol results were reported, we calculated intent-to-treat results if the data for these calculations were available.
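The intent-to-treat recalculation described above amounts to re-basing a per-protocol response count on the full randomized sample. A minimal sketch of that arithmetic, with hypothetical counts (the report's actual calculations depended on what each trial reported):

```python
def itt_response_rate(responders: int, completers: int, randomized: int) -> float:
    """Recompute a per-protocol response rate on an intent-to-treat basis,
    conservatively counting participants lost to follow-up as non-responders.
    Illustrative only; names and numbers here are hypothetical."""
    if not 0 <= responders <= completers <= randomized or randomized == 0:
        raise ValueError("inconsistent counts")
    return responders / randomized

# e.g. 30 responders among 40 completers, 50 randomized:
# per-protocol rate = 30/40 = 0.75; intent-to-treat rate = 30/50 = 0.60
```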

Validity Assessment

We assessed the internal validity (quality) of trials based on predefined criteria (see www.ohsu.edu/drugeffectiveness). These criteria are based on the United States Preventive Services Task Force and the National Health Service Centre for Reviews and Dissemination (U.K.) criteria.21, 22 We rated the internal validity of each trial based on the methods used for randomization, allocation concealment, and blinding; the similarity of compared groups at baseline; maintenance of comparable groups; adequate reporting of dropouts, attrition, crossover, adherence, and contamination; loss to follow-up; and the use of intent-to-treat analysis. Trials that had a fatal flaw in 1 or more categories were rated “poor quality”; trials that met all criteria were rated “good quality”; the remainder were rated “fair quality.”

As the fair-quality category is broad, studies with this rating vary in their strengths and weaknesses: the results of some fair-quality studies are likely to be valid, while others are only possibly valid. A poor-quality trial is not valid—the results are at least as likely to reflect flaws in the study design as the true difference between the compared drugs. A fatal flaw is reflected by failure to meet combinations of items of the quality assessment checklist. A particular randomized trial might receive 2 different ratings, one for effectiveness and another for adverse events. The criteria used to rate observational studies of adverse events reflect aspects of the study design that are particularly important for assessing adverse event rates. We rated observational studies as good quality for adverse event assessment if they adequately met 6 or more of the 7 predefined criteria, fair quality if they met 3 to 5 criteria, and poor quality if they met 2 or fewer criteria.
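The criteria-count thresholds for observational studies of adverse events can be written out mechanically. A small sketch, using only the cut points stated above (the function name is illustrative, not from the report):

```python
def observational_quality_rating(criteria_met: int, total: int = 7) -> str:
    """Map the number of predefined criteria met (out of 7) to the quality
    rating used for observational studies of adverse events:
    6-7 -> good, 3-5 -> fair, 0-2 -> poor."""
    if not 0 <= criteria_met <= total:
        raise ValueError("criteria_met out of range")
    if criteria_met >= 6:
        return "good"
    if criteria_met >= 3:
        return "fair"
    return "poor"
```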

Included systematic reviews were also rated for quality. We rated internal validity based on a clear statement of the question(s); reporting of inclusion criteria; the methods used for identifying literature (the search strategy), assessing validity, and synthesizing evidence; and the adequacy of detail provided about included studies. Reviews that met all criteria were rated good quality.

Grading the Strength of Evidence

We graded strength of evidence based on the guidance established for the Evidence-based Practice Center Program of the Agency for Healthcare Research and Quality.23 Developed to grade the overall strength of a body of evidence, this approach incorporates 4 key domains: risk of bias (includes study design and aggregate quality), consistency, directness, and precision of the evidence. It also considers other optional domains that may be relevant for some scenarios, such as a dose-response association, plausible confounding that would decrease the observed effect, strength of association (magnitude of effect), and publication bias.

Table 2 describes the grades of evidence that can be assigned. Grades reflect the strength of the body of evidence to answer key questions on the comparative effectiveness, efficacy, and harms of attention deficit hyperactivity disorder (ADHD) drugs. Grades do not refer to the general efficacy or effectiveness of pharmaceuticals. Two reviewers independently assessed each domain for each outcome and differences were resolved by consensus.

Table 2. Definitions of the grades of overall strength of evidence.


Strength of evidence was graded for each key outcome measure and was limited to head-to-head comparisons, except where a case could be made for assessing the strength of indirect evidence. Outcomes selected for rating the strength of evidence were symptom improvement, response, and withdrawal due to adverse events. Appendix E shows individual assessments for strength of evidence.

Effectiveness Compared With Efficacy

Throughout this report, we highlight effectiveness studies conducted in primary care or office-based settings that use less stringent eligibility criteria, assess health outcomes, and have longer follow-up periods than most efficacy studies. The results of effectiveness studies are more applicable to the “average” patient than results from highly selected populations in efficacy studies. Examples of “effectiveness” outcomes include quality of life, global measures of academic success, and the ability to work or function in social activities. These outcomes are more important to patients, family, and care providers than surrogate or intermediate measures such as scores based on psychometric scales.

An evidence report pays particular attention to the generalizability of efficacy studies performed in controlled or academic settings. Efficacy studies provide the best information about how a drug performs in a controlled setting, allowing for better control over potential confounding factors and biases. However, the results of efficacy studies are not always applicable to many, or to most, patients seen in everyday practice. This is because most efficacy studies use strict eligibility criteria, which may exclude patients based on their age, sex, medication compliance, or severity of illness. For many drug classes, severely impaired patients are often excluded from trials. Often, efficacy studies also exclude patients who have “comorbid” diseases, meaning diseases other than the one under study. Efficacy studies may also use dosing regimens and follow-up protocols that are impractical in other practice settings. They often restrict options, such as combining therapies or switching drugs, that are of value in actual practice. They often examine the short-term effects of drugs that, in practice, are used for much longer periods of time. Finally, they tend to use objective measures of effect that do not capture all of the benefits and harms of a drug or do not reflect the outcomes that are most important to patients and their families.

Data Synthesis

We constructed evidence tables showing the study characteristics, quality ratings, and results for all included studies. We reviewed studies using a hierarchy of evidence approach, where the best evidence is the focus of our synthesis for each question, population, intervention, and outcome addressed. Studies that evaluated one pharmacologic treatment of ADHD against another provided direct evidence of comparative effectiveness and adverse event rates. Head-to-head evidence is the primary focus. Outcomes of changes in symptoms measured using scales or tools with good validity and reliability are preferred over scales or tools with low validity/reliability or no reports of validity/reliability testing. Direct comparisons were preferred over indirect comparisons; similarly, effectiveness and long-term safety outcomes were preferred to efficacy and short-term tolerability outcomes. In theory, trials that compare these drugs to other interventions or placebos can also provide evidence about effectiveness. This is known as an indirect comparison and can be difficult to interpret for a number of reasons, primarily heterogeneity of trial populations, interventions, and outcomes assessment. Data from indirect comparisons are used to support direct comparisons, where they exist, and are used as the primary comparison where no direct comparisons exist. Indirect comparisons should be interpreted with caution.

Quantitative analyses were conducted using meta-analyses of outcomes reported by a sufficient number of studies that were homogeneous enough that combining their results could be justified. In order to determine whether meta-analysis could be meaningfully performed, we considered the quality of the studies and the heterogeneity among studies in design, patient population, interventions, and outcomes. When meta-analysis could not be performed, the data were summarized qualitatively.
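The pooling step in such a meta-analysis is typically a weighted average of per-study effect sizes, with between-study heterogeneity absorbed into the weights. A minimal sketch of the standard DerSimonian-Laird random-effects approach, assuming hypothetical effect sizes and variances (the report does not specify which pooling model was used, so this is illustrative only):

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects meta-analysis: pool per-study effect
    sizes and their variances into a summary estimate with a 95% CI.
    Illustrative sketch; inputs here are hypothetical."""
    w = [1.0 / v for v in variances]                  # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]      # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

When the studies are heterogeneous, tau² grows and the confidence interval widens, which is one quantitative expression of the qualitative homogeneity judgment described above.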

Peer Review

We requested and received peer review of the report from 2 content and methodology experts. Their comments were reviewed and, where possible, incorporated into the final document. All comments and the authors' proposed actions were reviewed by representatives of the participating organizations of the Drug Effectiveness Review Project before finalization of the report. Names of peer reviewers for the Drug Effectiveness Review Project are listed at http://www.ohsu.edu/xd/research/centers-institutes/evidence-based-policy-center/derp/index.cfm/.

Public Comment

This report was posted to the Drug Effectiveness Review Project website for public comment. We received comments from 6 individuals representing 5 pharmaceutical companies.

Copyright © 2011 by Oregon Health & Science University.
