NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Hardy M, Coulter I, Morton SC, et al. S-Adenosyl-L-Methionine for Treatment of Depression, Osteoarthritis, and Liver Disease. Rockville (MD): Agency for Healthcare Research and Quality (US); 2002 Oct. (Evidence Reports/Technology Assessments, No. 64.)

  • This publication is provided for historical reference only and the information may be out of date.


2. Methodology

We synthesized evidence from the scientific literature on the effectiveness of SAMe using the evidence review and synthesis methods of the Southern California Evidence-Based Practice Center (SCEPC). Established by AHRQ, the center conducts systematic reviews and technology assessments of all aspects of health care; performs research on improving the methods of synthesizing the scientific evidence, developing evidence reports, and conducting technology assessments; and provides technical assistance to other organizations in their efforts to translate evidence reports and technology assessments into guidelines, performance measures, and other quality-improvement tools.

Project staff collaborated with NCCAM, the Task Order Officer at AHRQ, and technical experts representing disciplines related to the intervention topic, conditions studied, and/or methods used.

Scope Of Work

Our literature review process consisted of the following steps:

  • Establish criteria for inclusion of articles in review;
  • Identify sources of evidence in the scientific literature;
  • Identify potential evidence with attention to controlled clinical trials using SAMe;
  • Evaluate potential evidence for methodological quality and relevance;
  • Extract data from studies meeting methodological and clinical criteria;
  • Assess strategies for completeness;
  • Synthesize the results;
  • Perform further statistical analysis on selected studies;
  • Perform pooled analysis where appropriate;
  • Submit the results to technical experts for peer review;
  • Incorporate reviewers' comments into a final report for submission to AHRQ.

Objectives

Based on a discussion with the Task Order Officer for AHRQ, the Director of NCCAM, the Co-Directors of the SCEPC, and project staff, we selected as the focus for this report the use of SAMe to treat depression, osteoarthritis (OA), and chronic liver disease.

The report was guided by the following research questions:

  • What kinds and how many study reports were available that presented research on the use of SAMe for the three identified conditions?
  • What types of outcomes were measured for the identified conditions?
  • What languages other than English predominate in the publications, and are the non-English-language publications readily accessible?
  • What is the methodological quality of the identified studies?
  • Can the results of the various studies be pooled to allow a risk ratio and an effect size to be calculated for SAMe?

The decision to focus on the efficacy of a single chemical, SAMe, rather than another supplement or a variety of supplements, was made in discussions with the funding agency. The funding agency identified several conditions of interest for which SAMe has been reported to have a therapeutic effect: OA, depression, and liver disease.

Literature Search Design

Technical Expert Panel

The Evidence-Based Practice Center was advised on CAM topics by a group of technical experts regarding the search and inclusion criteria and appropriate analyses. The technical experts represented diverse disciplines including acupuncture, Ayurvedic medicine, chiropractic, dentistry, general internal medicine, gastroenterology, rheumatology, integrative medicine (the practice of combining alternative and conventional medicine), neurophysiology, pharmacology, psychiatry, psychoneuroimmunology, psychology, sociology, botanical medicine, and traditional Chinese medicine. The technical experts assisted the project in several ways. They aided us in identifying potential topics for review, appropriate sources of relevant literature, and technical experts for peer review; assessing our search strategies; and addressing specific questions in their areas of expertise. In addition, one of our expert panel members assisted in reviewing literature in languages other than English, and his contribution is listed in the Acknowledgments appendix. The Acknowledgments appendix also lists the members of the expert panel along with their affiliations.

Identification Of Literature Sources

Potential evidence for the report came from three areas: on-line library databases, the reference lists of all relevant articles, and other sources such as identified experts and the personal libraries of project staff and their associates. The reference librarian at RAND identified traditional biomedical databases as well as databases that focus on alternative and complementary medicine (Tables 1 and 2).

Table 1. Biomedical databases searched.

Table 2. Pharmaceutical and other biomedical databases searched.

We conducted five searches, and the full search strategies are displayed in Appendix A. Limiting the output to human studies, we searched using the terms SAMe and its many pharmacological synonyms (see Table 3); the three focus disease states (arthritis, liver disorders, and depression); and study design or article type (randomized controlled trials, clinical controlled trials, meta-analyses, and systematic reviews).

Table 3. Additional search terms for SAMe.

Two reviewers independently evaluated the list of 1562 titles that the on-line database searches generated. The reviewers read the lists of titles and accepted articles that satisfied the following criteria:

  • Focused on SAMe for depression, osteoarthritis, or chronic liver disease;
  • Presented any historical or descriptive background information about SAMe and its use;
  • Presented a meta-analysis or systematic review of SAMe and one of the three disease states identified.

Articles that fit the following criteria were rejected:

  • Focused on a disease state that was not one of the three selected;
  • Contained animal or in vitro data unless human clinical trial information or significant background information was also included.

Language was not considered a barrier to inclusion.

We obtained an additional 62 articles identified from the reference lists of previously ordered articles and from the professional libraries of our project staff and their colleagues.

From this stage of the screening process, the reviewers requested a total of 294 unique articles, of which we were able to obtain 285. Appendix B includes a list of articles that we could not obtain. Selected articles were referred to clinical research reviewers for further evaluation and for possible inclusion in the evidence synthesis.

Using Microsoft Access database software, we tracked requests for articles, and we used Pro-Cite as a link to read the citations into the Access database as well as to manage our reference list. We also used the database to produce and store our data collection instruments. Table 4 summarizes the search strategy shown in Appendix A. The details of the screening process are discussed in the next section.

Table 4. Summary of search strategy.

Evaluation Of Evidence

Two physicians, each trained in the critical analysis of scientific literature, independently reviewed each study, abstracted data, and resolved disagreements by consensus. Of the 285 articles accepted after the initial screening, they selected 99 for further study, based on the data collected using the screening form. These 99 articles met the following criteria and were therefore included in the synthesis of evidence:

  • Focused on SAMe and depression, osteoarthritis, or liver disease;
  • Presented research on human subjects;
  • Reported the results of a clinical trial;
  • Reported clinical outcomes.

Three articles (Barberi and Pusateri, 1978; Delle Chiaie and Boissard, 1997; and DiPadova, Giudici, and Boissard, 2000) each described two different studies,1 so a total of 102 unique studies were referred for detailed review. We created a one-page data collection instrument that served as a screening form for this process. Appendix C contains a copy of this screening instrument.

Of 102 studies included in the analysis, 75 appeared in English-language publications, 22 appeared in Italian-language publications, four appeared in Spanish-language publications, and one appeared in a Chinese-language publication. Some French-language abstracts were also screened, but no articles meeting the acceptance criteria were found. All articles, both English and non-English, underwent dual review by trained reviewers who were fluent in the language.

Extraction Of Data

Detailed information from each of the 102 studies was collected on a specialized data collection instrument (the Quality Review Form) designed for this purpose. This Quality Review Form (Appendix D) was developed in consultation with our technical experts. We included questions about the study design; the technical quality of the study; the number and characteristics of the patients; patient recruitment information; details on the intervention, such as the dose, route of administration, frequency, and duration; the types of outcome measures; and the time between intervention and outcome measurement. Two trained reviewers, working independently, extracted data in duplicate and resolved disagreements by consensus. A senior physician researcher on the project staff resolved any disagreements not resolved by consensus.

To evaluate the quality of the studies, we collected information on the study design, appropriateness of randomization, blinding, description of withdrawals and dropouts (Jadad, Moore, Carroll, et al., 1996), and concealment of allocation (Schulz, 1995). A score for quality was calculated for each study using a system developed by Jadad (Jadad, Moore, Carroll, et al., 1996). See Appendix E for further details (Moher, Pham, Jones, et al., 1998).

Of the 102 studies that were accepted for detailed abstraction, 47 focused on depression, 14 focused on OA, and 41 focused on liver disease. Figure 1 diagrams the flow of articles from the point at which they entered our database through the article ordering, screening, quality review, and statistical analysis stages. All articles that went on for abstraction were examined for inclusion in the data synthesis.

Figure 1. SAMe Literature Search and Review Strategy.

Selection Of Studies For Meta-Analysis

Study Inclusion

In selecting studies for the meta-analysis, we considered all 102 studies for which a Quality Review Form (QRF) was prepared. For both depression and osteoarthritis, the available studies were judged to be sufficiently clinically homogeneous to support a pooled analysis. On the other hand, the studies on liver disease encompassed a much wider variety of clinical conditions. The largest group of studies identified within the area of liver disease focused on intrahepatic cholestasis. Within that group, we made the decision to stratify the studies into those on cholestasis of pregnancy and those for which intrahepatic cholestasis may represent a “final common pathway” for a variety of mainly chronic liver disease conditions. In addition, a number of the pregnancy studies had an active therapy comparison arm (ursodeoxycholic acid), which was not the case for the chronic liver disease studies.

For each of the four conditions (depression, osteoarthritis, cholestasis of pregnancy, and intrahepatic cholestasis associated with chronic liver disease), we chose the clinically relevant outcomes and clinically comparable follow-up times that were reported most often. Some of the outcomes were continuous measures, and some were dichotomous variables. For studies with a crossover design, we extracted data only from the first treatment phase, prior to the crossover. If such data were not available, the study was excluded from the meta-analysis.

We synthesized effect sizes for continuous outcomes and risk ratios for dichotomous outcomes. In order for a study to be included in the analysis, the original report had to contain sufficient statistical information for the calculation of an effect size or risk ratio as appropriate for the relevant outcome, and the report could not contain duplicate data. The definitions of duplicate data and what constituted sufficient statistical information follow.

Duplicate Publications, Studies And Data

We distinguished three different types of data duplication. First, multiple citations of the same article were removed at the title screening stage of the project, so all publications that reached the synthesis stage contained reports of unique studies. Second, some publications were based on the same study population and experiment as others, yet each reported different outcome data. If both outcomes were relevant for the meta-analysis, we included both articles in our evidence table and noted the link between the studies. Third, in some instances, several publications reported on the same study population and intervention and presented the same outcome data. In these cases, we picked the most informative of the duplicates; for example, if one publication was a conference abstract with preliminary data, and the second was a full journal article, we chose the latter. The publications dropped for duplicate data do not appear in the evidence table but are discussed in the text of the results section within each condition.

Stratification Of Interventions

Potentially significant variability was noted in the interventions studied. Routes of administration for SAMe included both parenteral and oral. Dosages of both SAMe and other comparison drugs varied as well. In an effort to assess the comparability of the interventions tested, we stratified interventions by route of administration and level of dose. For SAMe, the dosage stratifications (i.e., what constituted a high or low dose) were based on expert opinion and the clinical experience of the research team members. Non-steroidal anti-inflammatory drugs (NSAIDs) were categorized using a strategy previously developed by the SCEPC (Ofman, in press 2001). This stratification system was considered in the subsequent analysis of the selected studies.

Effect Size Calculations For Continuous Outcomes

For each study, we calculated effect sizes for the SAMe arm of interest and the other arms in each study that were considered relevant. Generally, each study included one comparison between a single SAMe arm and a placebo or treatment (e.g., NSAIDs) arm. Some studies contained more than one SAMe arm (representing different doses) or contained more than one other treatment arm and thus contributed more than one effect size to be considered for analysis. Double-counting patients became a concern if a study contributed more than one effect size and patients were included more than once in those effect sizes. For example, if a study had one placebo arm and two SAMe arms, it contributed two effect sizes, both based on the same placebo patients. In such cases, we used clinical relevance as our criterion, and included in the meta-analysis the effect size for the SAMe arm that was most comparable in terms of dose to the SAMe arms in the other studies in the analysis.

For each study, the means and standard deviations of each outcome at the designated follow-up time for each relevant arm were extracted if available. If studies did not report a follow-up mean, or if a follow-up mean could not be calculated from the given data, the study was excluded from the meta-analysis. If a study did not report a standard deviation, or if a standard deviation could not be calculated from the given data, we imputed the standard deviation by using those studies and arms that did report a standard deviation and weighting all arms equally in the imputed value calculation, or we assumed that the standard deviation was 0.25 of the theoretical range for the specific measure in the study. For example, if a study measured pain on a 0–100 scale, we assumed the standard deviation was 25. For each pair of arms, an unbiased estimate (Hedges and Olkin, 1985) of Hedges' g effect size (Rosenthal, 1991) and a 95 percent confidence interval were calculated. A negative effect size indicates that SAMe is associated with a decrease in the outcome at follow-up as compared with the comparison arm, e.g., placebo. For example, in the OA meta-analysis, the outcome was pain, so a negative effect size indicated that SAMe was associated with a decrease in pain at follow-up as compared with placebo.
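The calculation described above can be sketched in code. The following is a minimal illustration (not the project's actual software, which is not described in the report), using only the Python standard library and hypothetical function names; it follows the standard Hedges' g formulas (pooled standard deviation, small-sample correction, approximate variance) from Hedges and Olkin (1985), plus the quarter-of-range fallback for a missing standard deviation:

```python
import math

def hedges_g(m_same, sd_same, n_same, m_comp, sd_comp, n_comp):
    """Unbiased Hedges' g and a 95 percent CI for a two-arm comparison.

    A negative g means the SAMe arm had the lower follow-up mean
    (e.g., less pain) than the comparison arm.
    """
    # Pooled standard deviation across the two arms.
    s_pooled = math.sqrt(((n_same - 1) * sd_same**2 + (n_comp - 1) * sd_comp**2)
                         / (n_same + n_comp - 2))
    d = (m_same - m_comp) / s_pooled
    # Small-sample correction factor (Hedges and Olkin, 1985).
    j = 1 - 3 / (4 * (n_same + n_comp) - 9)
    g = j * d
    # Approximate variance of g, then a 95 percent confidence interval.
    var_g = (n_same + n_comp) / (n_same * n_comp) + g**2 / (2 * (n_same + n_comp))
    half = 1.96 * math.sqrt(var_g)
    return g, (g - half, g + half)

def impute_sd(scale_min, scale_max):
    """Fallback when a study reports no SD and none can be imputed from
    other arms: one quarter of the theoretical range of the measure."""
    return 0.25 * (scale_max - scale_min)
```

For the pain example in the text, `impute_sd(0, 100)` gives 25, and a SAMe arm with a lower follow-up pain mean than placebo yields a negative g.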

Risk Ratio Calculations For Dichotomous Outcomes

We estimated log risk ratios and constructed 95 percent confidence intervals in the logarithmic scale, for variance-stabilization reasons (Ioannidis, Cappelleri, Lau, et al., 1995). We then back-transformed to the risk ratio scale. For studies that had zero outcomes in either the SAMe or comparison arms or had all patients in either arm with the outcome, we performed a continuity correction by adding 0.5 to all cells in the two-by-two table of arm by outcome.

A risk ratio of less than 1.0 indicates that the chance of the outcome in the comparison arm is smaller than that in the SAMe arm. For example, in the depression meta-analysis, one of the dichotomous outcomes was “greater than 25 percent improvement on the Hamilton Depression Scale or not.” A risk ratio of 0.5 indicates that half as many people improved in the comparison arm as in the SAMe arm, or, analogously, twice as many people improved in the SAMe arm as in the comparison arm.
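The risk ratio calculation above can be sketched as follows; this is an illustrative stdlib-only implementation with a hypothetical function name, adopting the report's convention that the ratio compares the comparison arm to the SAMe arm, with the 0.5 continuity correction applied to all cells when any arm has zero events or all events:

```python
import math

def risk_ratio_ci(events_comp, n_comp, events_same, n_same):
    """Risk ratio (comparison arm relative to SAMe arm) with a 95 percent
    CI computed on the log scale and back-transformed."""
    a, b = events_comp, n_comp - events_comp   # comparison arm: events, non-events
    c, d = events_same, n_same - events_same   # SAMe arm: events, non-events
    # Continuity correction: add 0.5 to every cell of the 2x2 table
    # when any cell is zero (zero events or all events in an arm).
    if 0 in (a, b, c, d):
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    n1, n2 = a + b, c + d
    log_rr = math.log((a / n1) / (c / n2))
    # Standard error of the log risk ratio.
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    rr = math.exp(log_rr)
    return rr, (math.exp(log_rr - 1.96 * se), math.exp(log_rr + 1.96 * se))
```

With 10 of 40 patients improving on the comparison arm and 20 of 40 on SAMe, the function returns the risk ratio 0.5 from the text's example.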

Meta-Analysis

When appropriate, we estimated a pooled random-effects estimate (DerSimonian and Laird, 1986) by combining either effect sizes or risk ratios, depending on the outcome. We also report the Chi-squared test of heterogeneity p-value (Hedges and Olkin, 1985). When relevant, we conducted sensitivity analyses on subgroups of studies to determine the robustness of our conclusions.
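The DerSimonian and Laird (1986) random-effects estimate can be sketched as below; this is an illustrative stdlib-only version with a hypothetical function name, using the method-of-moments estimate of the between-study variance. It returns Cochran's Q, whose heterogeneity p-value would be read from a chi-squared distribution with k - 1 degrees of freedom (not computed here, as the standard library has no chi-squared tail function):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate (DerSimonian and Laird, 1986)
    with a 95 percent CI, plus Cochran's Q heterogeneity statistic.

    `effects` are per-study effect sizes (or log risk ratios) and
    `variances` their within-study variances.
    """
    k = len(effects)
    w = [1 / v for v in variances]  # fixed-effect (inverse-variance) weights
    mean_fe = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - mean_fe) ** 2 for wi, e in zip(w, effects))
    # Method-of-moments estimate of the between-study variance tau^2,
    # truncated at zero.
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Re-weight with tau^2 added to each within-study variance.
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), q
```

When the study effects are identical, Q is zero and the random-effects estimate coincides with the fixed-effect one; heterogeneous effects inflate Q and widen the pooled confidence interval.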

We assessed the possibility of publication bias by evaluating a funnel plot of effect sizes or log risk ratios for asymmetry, which results from the nonpublication of small, negative studies. If no publication bias exists, the full symmetric distribution of study effect sizes will be observed in the funnel plot. If bias exists, the distribution will be asymmetric or skewed because some studies are missing, having never been published. Because graphical evaluation can be subjective, we also conducted an adjusted rank correlation test (Begg and Mazumdar, 1994) and a regression asymmetry test (Egger, Davey Smith, Schneider, et al., 1997) as formal statistical tests for publication bias. The correlation approach tests whether the correlation between the effect sizes and their variances is significant, and the regression approach tests whether the intercept of a regression of the effect sizes on their precision differs from zero; i.e., both formally test for asymmetry in the funnel plot. We conducted all analyses and drew all graphs using the statistical package Stata (1999). The results of our analysis are presented in the following section.
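The regression asymmetry test described above can be sketched as follows. This is an illustrative stdlib-only version with a hypothetical function name: each study's standardized effect (effect divided by its standard error) is regressed on its precision (the reciprocal of the standard error), and an intercept far from zero, relative to its standard error, suggests funnel-plot asymmetry (Egger, Davey Smith, Schneider, et al., 1997):

```python
import math

def egger_intercept(effects, variances):
    """Egger regression asymmetry test: ordinary least squares of the
    standardized effects on the precisions.  Returns the intercept and
    its standard error; a large intercept suggests asymmetry."""
    se = [math.sqrt(v) for v in variances]
    y = [e / s for e, s in zip(effects, se)]   # standardized effects
    x = [1 / s for s in se]                    # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    intercept = my - slope * mx
    # Residual variance, then the intercept's standard error.
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    s2 = sum(r ** 2 for r in resid) / (n - 2)
    se_int = math.sqrt(s2 * (1 / n + mx ** 2 / sxx))
    return intercept, se_int
```

In a perfectly symmetric funnel, where the effect does not vary with study size, the fitted intercept is zero; small studies with inflated effects pull it away from zero.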

Footnotes

1. For our purposes, a study is defined as having a unique patient population.
