
McDonagh MS, Selover D, Santa J, et al. Drug Class Review on Agents for Overactive Bladder: Final Report [Internet]. Portland (OR): Oregon Health & Science University; 2005 Dec.


This publication is provided for historical reference only and the information may be out of date.




Methods

Literature Search

To identify articles relevant to each key question, we searched the Cochrane Library (2nd Quarter, 2005), MEDLINE (1966 to July Week 3, 2005), EMBASE (1980 to July Week 3, 2005), and reference lists of review articles. Electronic searches were deliberately broad, combining only terms for drug names with terms for relevant research designs (see Appendix A for the complete search strategy). We also attempted to identify additional studies by searching reference lists of included studies and reviews, the FDA web site, and dossiers submitted by pharmaceutical companies for the current review. All citations were imported into an electronic database (EndNote 9.0).
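The complete search strategies appear in Appendix A. As a rough illustration of what combining drug-name terms with research-design terms looks like, the short Python sketch below assembles a single boolean query; the terms and syntax are illustrative assumptions, not the actual Ovid/MEDLINE strategy used in the review.

    # Illustrative only: builds a broad boolean query by OR-ing the drug names together
    # and AND-ing the result with study-design terms.
    drug_terms = [
        "flavoxate", "oxybutynin", "tolterodine", "trospium",
        "darifenacin", "hyoscyamine", "scopolamine", "solifenacin",
    ]
    design_terms = ["randomized controlled trial", "controlled clinical trial", "cohort study"]

    def build_broad_query(drugs, designs):
        """Combine drug-name terms with research-design terms into one query string."""
        return "({}) AND ({})".format(" OR ".join(drugs), " OR ".join(designs))

    print(build_broad_query(drug_terms, design_terms))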

Study Selection

Two reviewers independently assessed studies for inclusion, with disagreements resolved through consensus. We included English-language reports of randomized controlled trials involving adults with symptoms of urge incontinence, overactive bladder, or irritable bladder. Interventions included one of the eight OAB drugs (flavoxate, oxybutynin, tolterodine, trospium, darifenacin, hyoscyamine sulfate, scopolamine, or solifenacin) compared with another anticholinergic OAB drug, another OAB drug (i.e., an anticholinergic drug not on the US market), non-drug therapy (e.g., bladder training), or placebo. For adverse effects, we also included observational studies of at least 6 weeks' duration. Outcomes were mean change in number of incontinence episodes per 24 hours, mean change in number of micturitions per 24 hours, mean change in number of pads used per 24 hours, subjective patient or physician assessments of symptoms (e.g., the severity of problems caused by bladder symptoms, extent of perceived urgency, global evaluation of treatment), quality of life, and adverse effects, including drug interactions.
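As a minimal sketch of these inclusion rules (the field names and the flat record structure are assumptions for illustration, not the review's actual screening forms):

    OAB_DRUGS = {
        "flavoxate", "oxybutynin", "tolterodine", "trospium",
        "darifenacin", "hyoscyamine sulfate", "scopolamine", "solifenacin",
    }

    def include_study(study, for_adverse_events=False):
        """Return True if a screened record meets the inclusion criteria sketched above."""
        basic = (
            study["language"] == "English"
            and study["population"] == "adults with urge incontinence/OAB symptoms"
            and study["intervention"] in OAB_DRUGS
            and study["comparator"] in {"anticholinergic OAB drug", "other OAB drug",
                                        "non-drug therapy", "placebo"}
        )
        if study["design"] == "randomized controlled trial":
            return basic
        # Observational studies count only for adverse effects, and only if they
        # last at least 6 weeks.
        if for_adverse_events and study["design"] == "observational":
            return basic and study["duration_weeks"] >= 6
        return False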

To evaluate effectiveness or efficacy we included only controlled clinical trials. The validity of controlled trials depends on how they are designed. Randomized, properly blinded clinical trials are considered the highest level of evidence for assessing efficacy.16–18 Clinical trials that are not randomized or blinded, and those that have other methodological flaws, are less reliable, but are also discussed in our report.

Trials that evaluated one anticholinergic OAB drug against another provided direct evidence of comparative effectiveness and adverse event rates. In theory, trials that compare these drugs with other drugs used to treat OAB, or with placebo, can also provide evidence about efficacy. However, the efficacy of the drugs in different trials can be difficult to interpret because of significant differences in key characteristics of the patient populations. Comparability of results across trials (whether direct or indirect comparisons) is limited by differing outcomes and the different methods used to assess them, so such comparisons should be made with caution.

To evaluate adverse event rates, we included clinical trials and observational cohort studies. Clinical trials are often not designed to assess adverse events, and may select low-risk patients (in order to minimize dropout rates) or utilize inadequate methodology for assessing adverse events. Observational studies designed to assess adverse event rates may include broader populations, carry out observations over a longer time, utilize higher quality methodologies for assessing adverse events, or examine larger sample sizes.

Data Abstraction

The following data were abstracted from included trials: study design; setting; population characteristics (including sex, age, ethnicity, and diagnosis); eligibility and exclusion criteria; interventions (dose and duration); comparisons; numbers screened, eligible, enrolled, and lost to follow-up; method of outcome ascertainment; and results for each outcome. We recorded intention-to-treat results when they were available and the trial did not report high overall loss to follow-up.
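For illustration, the abstracted fields could be organized as one record per trial along the following lines; the field names and structure are assumptions, not the evidence tables actually used in the review.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AbstractedTrial:
        """One record per included trial, mirroring the abstraction fields listed above."""
        study_design: str
        setting: str
        population: dict               # sex, age, ethnicity, diagnosis
        eligibility_criteria: str
        exclusion_criteria: str
        interventions: list            # drug, dose, and duration for each arm
        comparisons: list
        numbers: dict                  # screened, eligible, enrolled, lost to follow-up
        outcome_ascertainment: str
        results: dict                  # one entry per outcome
        intention_to_treat_results: Optional[dict] = None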

Validity Assessment

We assessed the internal validity (quality) of trials based on the predefined criteria listed in Appendix B. These criteria are based on those developed by the US Preventive Services Task Force and the National Health Service Centre for Reviews and Dissemination (UK).17, 18 We rated the internal validity of each trial based on the methods used for randomization, allocation concealment, and blinding; the similarity of compared groups at baseline; maintenance of comparable groups; adequate reporting of dropouts, attrition, crossover, adherence, and contamination; loss to follow-up; and the use of intention-to-treat analysis. Trials that had a fatal flaw in one or more categories were rated poor quality; trials that met all criteria were rated good quality; the remainder were rated fair quality. Because the "fair quality" category is broad, studies with this rating vary in their strengths and weaknesses: the results of some fair-quality studies are likely to be valid, while others are only probably valid. A poor-quality trial is not valid: its results are at least as likely to reflect flaws in the study design as a true difference between the compared drugs.

External validity of trials was assessed based on whether the publication adequately described the study population, how similar patients were to the target population in whom the intervention would be applied, and whether the treatment received by the control group was reasonably representative of standard practice. We also recorded the funding source and role of the funder.
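Stated as a rule, the trial rating logic reduces to the sketch below; the categories are those listed in Appendix B, and the mapping is our paraphrase for illustration, not code used in the review.

    def rate_trial_quality(criteria_results):
        """criteria_results maps each Appendix B category to 'met', 'not met', or 'fatal flaw'."""
        values = list(criteria_results.values())
        if "fatal flaw" in values:
            return "poor"      # a fatal flaw in any category makes the trial poor quality
        if all(v == "met" for v in values):
            return "good"      # all criteria met
        return "fair"          # everything in between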

Appendix B also shows the criteria we used to rate observational studies of adverse events. These criteria reflect aspects of the study design that are particularly important for assessing adverse event rates. We rated observational studies as good quality for adverse event assessment if they adequately met six or more of the seven predefined criteria, fair if they met three to five criteria, and poor if they met two or fewer criteria.
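The thresholds for observational studies reduce to a simple count of criteria met, as in this sketch:

    def rate_observational_study(criteria_met):
        """Map the number of the seven Appendix B criteria adequately met (0-7) to a rating."""
        if not 0 <= criteria_met <= 7:
            raise ValueError("criteria_met must be between 0 and 7")
        if criteria_met >= 6:
            return "good"
        if criteria_met >= 3:
            return "fair"
        return "poor"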

Overall quality ratings for each individual study were based on ratings of the internal and external validity of the trial. A particular randomized trial might receive two different ratings: one for efficacy and another for adverse events. The overall strength of evidence for a particular key question reflects the quality, consistency, and statistical power of the set of studies relevant to that question.

Data Synthesis

In addition to an overall discussion of the study findings, meta-analyses were conducted where possible. Forest plots of the standardized effect size for efficacy measures, or of the risk difference for adverse events, are presented where possible to display data comparatively. Forest plots were created using StatsDirect (CamCode, UK) software. Results are reported as differences between the drugs in mean change in number of micturitions or incontinence episodes per day or per week. Differences in adverse event rates and in withdrawals due to adverse events are expressed as the percent risk difference: the difference between the proportions experiencing the event in the two groups of patients at a given time point (e.g., at 4 weeks, 80% in group A versus 75% in group B is a 5% risk difference). As a measure of the variance around these estimates, the 95% confidence interval (CI) is also reported; if the 95% CI includes 0, the difference is not statistically significant. Risk differences are plotted on forest plots, always presented as the first-named drug minus the second-named drug. The size of the box indicating each point estimate is determined by the variance, such that larger studies generally have larger boxes than smaller studies.
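To make the risk-difference arithmetic concrete, the sketch below computes the percent risk difference and a standard Wald-type 95% CI. This is a textbook formula shown for illustration; StatsDirect may use a different interval method.

    import math

    def risk_difference_ci(events_a, n_a, events_b, n_b, z=1.96):
        """Percent risk difference between two groups with an approximate 95% CI."""
        p_a, p_b = events_a / n_a, events_b / n_b
        rd = p_a - p_b
        se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
        return 100 * rd, (100 * (rd - z * se), 100 * (rd + z * se))

    # Example from the text: 80% in group A vs. 75% in group B is a 5% risk difference.
    rd, (lo, hi) = risk_difference_ci(80, 100, 75, 100)
    print(f"risk difference = {rd:.1f}% (95% CI {lo:.1f}% to {hi:.1f}%)")
    # If the CI includes 0, the difference is not statistically significant.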

Copyright © 2005, Oregon Health & Science University, Portland, Oregon.
Bookshelf ID: NBK10419