NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

DeForge D, Blackmer J, Moher D, et al. Sexuality and Reproductive Health Following Spinal Cord Injury. Rockville (MD): Agency for Healthcare Research and Quality (US); 2004 Nov. (Evidence Reports/Technology Assessments, No. 109.)

  • This publication is provided for historical reference only and the information may be out of date.





The UO-EPC's evidence report on sexuality and reproductive health following SCI is based on a systematic review to identify and synthesize the results of studies addressing two key questions put forth by the Consortium for Spinal Cord Medicine. Together with content experts, UO-EPC staff identified specific issues integral to the review. A Technical Expert Panel (TEP) provided expert guidance on the conduct of the systematic review. Synthesis tables (i.e., evidence tables) presenting the key study characteristics and results from each included study were developed. Summary tables were derived from the synthesis tables. The methodological quality of the included studies was appraised, and individual study results were summarized.

Key Questions Addressed in This Report

As a result of findings from the phase I feasibility study, the comprehensive report will focus on two questions and their sub-questions. Question 1 focuses on issues related to fertility, pregnancy rates, and live births in persons with SCI. Question 2 focuses on issues related to male impotence post SCI.

  1. Reproductive health: What is the current fertility rate for men and women after SCI?
    • Are fertility rates changed by freezing a new patient's sperm?
    • Are there better fertility rates using electroejaculation or vibration? Does order of method influence outcome?
    • To improve fertility rates, when should invasive techniques such as testicular biopsy or aspiration or ICSI be pursued?
    • Are there pregnancy complications and prospective obstetric management issues for SCI females?
  2. Male sexuality: How has the availability of Viagra® and other remediation affected sexual function, frequency of activity, and adjustment after SCI?
    • Is Viagra® really more benign than intracavernous injections?
    • How does the morbidity of prostaglandin injections compare to the older (less expensive) papaverine?
    • What is the morbidity of vacuum tumescence devices?
    • What indications, if any, remain for implantable penile prosthetic devices?

Study Identification

Search Strategy

A search strategy was developed and tested in Medline (Search Strategy 1, Appendix A), and modified as necessary for other databases (Search Strategy 2, Appendix A). The strategy was based on a preliminary strategy proposed by UO-EPC in a feasibility task order, and was modified in consultation with three members of the review team (DD, JB and VC). The strategy was designed to be highly sensitive and was not restricted by study design, language of publication, or publication status. Some of the databases searched were nominated by AHRQ in the work assignment; others were selected to provide more complete coverage of key journals nominated by the reviewers. For instance, both SocioFile and PsycInfo provide much more complete indexing coverage of the key journal Sexuality and Disability than does Medline, and so these databases were included.

The databases searched were Medline (1966 to June Week 1, 2003), Premedline (June 13, 2003) and CINAHL (1975 to June Week 1, 2003) using Search Strategy 1, and the Cochrane Central Register of Controlled Trials (1st Quarter 2003), SocioFile (1974 to June 2003) and PsycInfo (1887 to June Week 1, 2003) using Search Strategy 2.

Following the suggestions of the technical expert panel, the proceedings of the following associations were searched for the years 1997 to 2002 (inclusive): American Urological Association, International Society for Sexual and Impotence Research, International Society for the Study of Women's Sexual Health, American Paraplegia Association, American Association of Spinal Cord Injury Nurses, American Association of Spinal Cord Injury Psychologists and Social Workers, American Association of Sex Educators, Counselors and Therapists, American Spinal Injury Association, American Academy of Physical Medicine and Rehabilitation, and American Congress of Rehabilitation Medicine.

At the suggestion of the technical expert panel, and in addition to Eli Lilly Canada Inc. (producer of Cialis) and Bayer Group (producer of Levitra), the following manufacturers were also contacted: Unimed Pharmaceuticals Inc., Mentor Corporation, Vivus Inc., Timm Medical Technologies, Schering-Plough Corporation, Pfizer, Sabex 2002 Inc., and Novartis Pharmaceuticals.

The search strategy was identical to that used in the UO-EPC phase 1 feasibility study.

Eligibility Criteria

Published and unpublished studies of any research design (e.g., randomized controlled trials [RCTs]) and any language of publication, enrolling male and female, adult and adolescent populations with SCI, were eligible for inclusion if they also met the criteria outlined in Table 2.

Table 2. Inclusion criteria.



Study Selection Process

All search results were provided to reviewers for screening against the eligibility (inclusion/exclusion) criteria. As an extension of the phase I feasibility study, two reviewers were employed at the relevance assessment phase of the evidence review. Two levels of screening for relevance were used: the first level, conducted during the phase I feasibility study, was directed at bibliographic records (i.e., title, authors, key words, abstract); the second level focused on the "full report" articles retrieved based on the results of the first level of screening.

Screenings for relevance, assessments of study quality, and data abstraction were completed using the UO-EPC's Internet-based review management software, which resides on a secure website. For relevance assessment, the software simultaneously presents the bibliographic record to be screened and the eligibility questions with which to do so.

Following a calibration exercise that involved screening ten sample records using an electronic form developed and tested especially for this review (Appendix B), two reviewers independently screened the title, abstract, and key words of each bibliographic record for relevance, applying the eligibility criteria liberally. A record was retained if it appeared to contain pertinent study information; unless both reviewers agreed on at least one unequivocal reason for exclusion, the record was entered into the next phase of the review. Reasons for exclusion were noted using a modified QUOROM format (Appendix C).28 The screening process also identified which of the two questions each record addressed.

Reports were not masked given the equivocal evidence regarding the benefits of this practice.29, 30 To be considered relevant at this second level of screening, all eligibility criteria had to be met. Disagreements were resolved by forced consensus and, if necessary, third party intervention. Excluded studies were noted as to the reason for their ineligibility (see List of Excluded Studies at the end of the report).

Data Abstraction

Following a calibration exercise involving two studies, two reviewers independently abstracted the contents of each included study using an electronic data abstraction form developed especially for this review (Appendix D). Once both reviewers had completed their work, each checked all of the data abstracted by their counterpart. The data abstracted included the characteristics of the:

  • report (e.g., publication status, language of publication, year of publication);
  • study (e.g., sample size; research design; number of arms);
  • population (e.g., age; percent males; diagnosis description);
  • intervention/exposure (e.g., Viagra® for sexual function; testicular biopsy for fertility rates);
  • withdrawals and dropouts.

Summarizing the Evidence


The evidence is presented three ways. Evidence tables in the appendices offer a detailed description of the included studies (e.g., study design, population characteristics, intervention/exposure characteristics), with a study represented only once. The tables are organized by research question and design (e.g., RCTs with male sexuality interventions; observational studies examining male sexuality interventions; observational studies examining fertility rates; etc.).

Question-specific summary tables embedded in the text report each study in abbreviated fashion, highlighting some key characteristics, such as comparators and sample size. This allows readers to compare all studies addressing a given question. A study can appear in more than one summary table given that it can address more than one research question.

Study Quality

Evidence reports include studies of variable methodological quality. Differences in quality across and within study designs may indicate that the results of some studies are more biased (i.e., subject to systematic error) than others. Systematic reviewers need to take this information into consideration to reduce or avoid bias whenever possible. There is considerable evidence that low-quality reports, compared with higher quality ones, can introduce bias into the estimates of an intervention's effectiveness.31 In this report, study quality was assessed through examination of each individual report. No attempt was made to contact the authors of any report. Quality was defined as the confidence that the study's design, conduct, analysis, and presentation have minimized or avoided biases in any comparisons.32 Several approaches exist to assess quality: components, checklists, and scales. For this report, we have elected to use a combination of methods in an effort to ascertain a measure of reported quality across different study designs.

For RCTs the Jadad scale was used (Appendix D). This validated scale includes three items that assess the methods used to generate random assignments, double blinding, and a description of dropouts and withdrawals by intervention group.33 Scores range from one to five, with higher scores indicating higher quality. In addition, allocation concealment (i.e., keeping the randomization blind until the point of allocating participants to an intervention group) was assessed as adequate, inadequate or unclear (Appendix D).34 An a priori threshold scheme was used for sensitivity analysis: a Jadad total score of ≤ 2 indicates low quality, with scores > 2 indicating higher quality; for allocation concealment, adequate = 1, inadequate = 2, and unclear = 3.
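The a priori threshold scheme can be encoded in a few lines. The following is a minimal illustration only (the function and mapping names are ours, not from the report's analysis code):

```python
# Illustrative encoding of the report's a priori sensitivity-analysis scheme:
# a Jadad total score of <= 2 is classed as low quality, > 2 as higher quality;
# allocation concealment is coded adequate = 1, inadequate = 2, unclear = 3.

def jadad_quality(total_score: int) -> str:
    """Classify an RCT by its Jadad total score (one to five in this report)."""
    if not 1 <= total_score <= 5:
        raise ValueError("Jadad total score must be between 1 and 5")
    return "low" if total_score <= 2 else "higher"

# Numeric codes used for allocation concealment in the sensitivity analysis.
CONCEALMENT_CODE = {"adequate": 1, "inadequate": 2, "unclear": 3}
```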

Cohort and case-control study reports were assessed using the Newcastle-Ottawa scale (NOS). The NOS is the product of an ongoing collaboration between the Universities of Newcastle, Australia, and Ottawa, Canada. It was developed to provide an easy and convenient tool for assessing the quality of nonrandomised studies in systematic reviews, with its design, content, and ease of use directed to incorporating quality assessments into the interpretation of meta-analytic results. A "star system" is used in which a study is judged on three broad perspectives: the selection of the study groups; the comparability of the groups; and the ascertainment of either the exposure (for case-control studies) or the outcome of interest (for cohort studies).

The inter- and intra-rater reliability of the NOS have been established. The face validity of the NOS (i.e., the extent to which the instrument appears reasonable on superficial inspection) has been reviewed through a critical appraisal of the items by several experts in the field, who evaluated its clarity and completeness for the specific task of assessing the quality of studies to be used in a meta-analysis. Further, its criterion validity has been established through comparisons with more comprehensive but cumbersome scales. The NOS developers continue to refine the instrument's measurement properties.27 The quality of non-comparative case series reports was assessed using a 19-item instrument adapted from Ophthalmology (Appendix D).2 We did not conduct any sensitivity analysis of quality assessments on the observational studies, as there is little guidance to suggest what score would indicate a poor-quality study on these instruments.

Qualitative Data Synthesis

A qualitative synthesis was completed for all studies included in the evidence report. A description is provided of the progress of each citation through the review process, including information pertaining to each report, such as its sample size. The qualitative synthesis was performed on a question-specific basis, with studies grouped according to research design (e.g., RCTs, observational studies). Each synthesis includes a narrative summary of the key defining features, where stated, of the study report (e.g., a priori description of inclusion/exclusion criteria), population (e.g., diagnosis-related), intervention/exposure (e.g., use of Viagra®), outcomes, study quality, applicability, and individual study results. A brief study-by-study overview typically precedes a qualitative synthesis.

Quantitative Data Synthesis

For several of the questions investigated in this evidence report, quantitative data synthesis was deemed appropriate. However, most of the studies were non-comparative case series, and outcomes were in the form of single proportions (e.g., proportion of couples achieving at least one pregnancy). Current meta-analytic methodology generally focuses on data from studies that include a control group, such as randomized controlled trials. From a meta-analytic perspective, one of the strengths of studies that include control groups is that even if there is some degree of heterogeneity in characteristics such as population or intervention across studies, there may be little statistical heterogeneity in the contrast between outcomes in the treatment and control groups across studies. This protection against heterogeneity is not available in studies without a control group. Judicious selection of comparable studies for inclusion in a meta-analysis of single proportions therefore becomes especially crucial. Random effects techniques for pooling results attempt to adjust for the presence of statistical heterogeneity, but necessarily provide weaker inferences, and do not obviate the need for careful investigation of sources of statistical heterogeneity.

In the present work, heterogeneity of single proportions was assessed using Pearson's chi-square test. P-values less than 0.10 were taken to indicate statistically significant heterogeneity. Forest plots were constructed using Wilson score confidence intervals around individual study proportions.35 Pooled estimates and their confidence intervals were obtained using the random effects estimator of Laird & Mosteller.36
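As a sketch of these calculations, the following code computes a Wilson score interval for a single proportion, the Pearson chi-square heterogeneity statistic for a set of proportions, and a method-of-moments random-effects pooled estimate in the spirit of Laird & Mosteller. It is a minimal illustration under simplifying assumptions (normal-approximation study variances, 0 < x < n for each study), not the analysis code used for this report:

```python
import math

Z = 1.96  # two-sided 95% confidence level


def wilson_ci(x, n, z=Z):
    """Wilson score confidence interval for a single proportion x/n."""
    p = x / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half


def pearson_chi2(counts):
    """Pearson chi-square heterogeneity statistic (df = k - 1) for
    k proportions, given as a list of (successes, n) pairs."""
    total_x = sum(x for x, _ in counts)
    total_n = sum(n for _, n in counts)
    p = total_x / total_n  # common proportion under homogeneity
    chi2 = sum(
        (x - n * p) ** 2 / (n * p)
        + ((n - x) - n * (1 - p)) ** 2 / (n * (1 - p))
        for x, n in counts
    )
    return chi2, len(counts) - 1


def pooled_proportion(counts, z=Z):
    """Method-of-moments (DerSimonian-Laird-style) random-effects pooling
    of single proportions. Assumes 0 < x < n in every study."""
    p = [x / n for x, n in counts]
    v = [pi * (1 - pi) / n for pi, (_, n) in zip(p, counts)]  # within-study variances
    w = [1 / vi for vi in v]
    sw = sum(w)
    p_fixed = sum(wi * pi for wi, pi in zip(w, p)) / sw
    q = sum(wi * (pi - p_fixed) ** 2 for wi, pi in zip(w, p))  # Cochran's Q
    k = len(counts)
    tau2 = max(0.0, (q - (k - 1)) / (sw - sum(wi * wi for wi in w) / sw))
    w_star = [1 / (vi + tau2) for vi in v]  # random-effects weights
    pooled = sum(wi * pi for wi, pi in zip(w_star, p)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, (pooled - z * se, pooled + z * se)
```

For example, `wilson_ci(5, 10)` gives an interval of roughly (0.24, 0.76), noticeably wider than the naive normal-approximation interval at this sample size, which is why Wilson intervals are preferred for the forest plots.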
