NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Helfand M, Mahon S, Eden K. Screening for Skin Cancer [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2001 Apr. (Systematic Evidence Reviews, No. 2.)

  • This publication is provided for historical reference only and the information may be out of date.


2. Methods

Literature Search Strategy

To find relevant articles on screening for skin cancer, we searched the MEDLINE database for papers published from January 1994 to June 1999, using search terms for screening, physical examination, morbidity, and skin neoplasms. For information on accuracy of screening tests, we used the search term “sensitivity and specificity.” Additional search terms were added to locate articles for background on skin cancer morbidity and mortality and on epidemiology. The search was updated monthly during the course of the project. We also used reference lists and expert recommendations to locate additional articles published after 1994. (See Appendix 1: Strategy for Skin Cancer Search Terms.)

Two reviewers independently reviewed a subset of 500 abstracts. Once consistency was established, 1 reviewer reviewed the remainder. We included studies if they contained data on yield of screening, screening tests, risk factors, risk assessment, effectiveness of early detection, or cost effectiveness (CE). Of 54 included studies, 5 contained data on accuracy of screening tests, 24 contained data on yield of screening, 7 contained data on stage or thickness of lesions found through screening, 11 addressed risk assessment, and 7 addressed the effectiveness of early detection (some studies addressed more than one topic). (See Appendix 2: Inclusion Criteria for Evidence Tables.) We retrieved the full text of these articles and abstracted the data as described below. In addition, we retrieved the full text of 47 studies of various risk factors for skin cancer. We read these articles but did not systematically abstract them.

We identified the most important studies from before 1994 from the Guide to Clinical Preventive Services, second edition;27 from high-quality reviews published in 1994 and 1996; from reference lists of recent studies; and from experts.

Literature Synthesis and Preparation of Systematic Evidence Review

We abstracted the following descriptive information from full-text, published studies of screening and recorded it in an electronic database: study type (mass screening, population based, casefinding, other), study design (prospective, case control, retrospective, observational, other), setting (hospital, community, specialty clinic, primary care, other), population (percent white, age), recruitment (volunteers, invitation, random sampling), screening test (total-body skin examination, partial skin examination, lesion-specific examination, other), examiner (dermatologist, primary care physician, other), advertising targeted at high-risk groups or not targeted, reported risk factors of participants, and procedure for referring patients found to have a positive screen.

We also abstracted the number and the probability of the following events from each study: referrals for skin examination; compliance with referral; suspected basal cell cancers, squamous cell cancers, actinic keratoses, and melanoma; confirmed melanoma and melanoma in situ; negative screening examinations; biopsies performed; the persons who had confirmed melanoma, suspected melanoma, or both; and the persons who had confirmed melanoma, the number of all suspicious lesions, or both. When available, the type, stage, or thickness of lesions found through screening was also recorded.

For studies that reported test performance, we also recorded the definition of a suspicious lesion, the gold standard used to determine disease status, and the number of true-positive, false-positive, true-negative, and false-negative test results. To analyze data from these studies, we defined sensitivity as the proportion of people with a histologic diagnosis of skin cancer whose examination found a suspicious lesion (a positive test result). Specificity was defined as the proportion of people without skin cancer whose examination found no suspicious lesions.
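As a concrete illustration of these two definitions, here is a minimal sketch in Python; the function names and the 2x2 counts are hypothetical, not taken from any study in this review:

```python
def sensitivity(true_pos, false_neg):
    # Proportion of people with histologically confirmed skin cancer
    # whose examination found a suspicious lesion (a positive test).
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    # Proportion of people without skin cancer whose examination
    # found no suspicious lesion (a negative test).
    return true_neg / (true_neg + false_pos)

# Hypothetical counts for illustration only
print(sensitivity(true_pos=18, false_neg=2))     # 0.9
print(specificity(true_neg=900, false_pos=100))  # 0.9
```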

The positive predictive value (PV+) was computed in 2 ways to account for noncompliance in studies. We computed the lower bound PV+ (Low PV+) by dividing the number of patients who had confirmed skin cancer by the number of patients who were diagnosed with a suspicious lesion, and we computed the upper bound PV+ (High PV+) by dividing the number of patients who had confirmed skin cancer by the number of patients who had biopsies. If the study provided sufficient detail, we calculated the PV+ of examination for each type of skin cancer. Most studies, however, did not report results in sufficient detail; for these, we combined the results for different types of skin cancer.
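The two bounds can be sketched as follows; the counts below are hypothetical and the function names are ours, not the review's:

```python
def low_pv_pos(confirmed_cancers, suspicious_lesions):
    # Lower-bound PV+: confirmed cancers over all patients found to have a
    # suspicious lesion, so patients who skipped biopsy count as non-cancers.
    return confirmed_cancers / suspicious_lesions

def high_pv_pos(confirmed_cancers, biopsies):
    # Upper-bound PV+: confirmed cancers over only the patients biopsied.
    return confirmed_cancers / biopsies

# Hypothetical screen: 120 suspicious lesions, 80 biopsied, 10 cancers confirmed
print(low_pv_pos(10, 120))   # ≈ 0.083
print(high_pv_pos(10, 80))   # 0.125
```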

We calculated likelihood ratios (LRs) for each study. The LR for a positive test was calculated with the formula

LR = [High PV+/(1 - High PV+)]/[p(cancer)/(1 - p(cancer))],

where p(cancer) is the observed prevalence of disease, estimated as p(cancer) = (number of true positives + number of false negatives)/(number of patients screened).29
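Under these definitions, the LR for a positive test can be computed as below; a sketch with hypothetical inputs:

```python
def positive_lr(high_pv, prevalence):
    # Odds-ratio form: (posttest odds of cancer given a positive test)
    # divided by (pretest odds of cancer, i.e., the prevalence odds).
    posttest_odds = high_pv / (1 - high_pv)
    pretest_odds = prevalence / (1 - prevalence)
    return posttest_odds / pretest_odds

# Hypothetical values: High PV+ of 0.125 at an observed prevalence of 1%
print(positive_lr(0.125, 0.01))  # ≈ 14.14
```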

This formula was derived from the odds ratio (OR) form of Bayes' theorem.30 The advantage of using the OR form is that LR can be computed in studies that reported findings only for patients who had positive tests. The computation is based on the following:

Posttest odds = pretest odds × LR.

In computing the LRs, we used the High PV+, which included only those patients who went on to biopsy. We assumed that the High PV+ was more representative of screening in a primary care setting, because patients in that setting would be more likely to follow through with a biopsy than those attending a mass screening.
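Rearranged, the odds form of Bayes' theorem lets one recover a posttest probability from a prevalence and an LR. A hypothetical sketch (our function name and numbers, for illustration only):

```python
def posttest_probability(prevalence, lr):
    # Posttest odds = pretest odds x LR, then convert odds back to a probability.
    pretest_odds = prevalence / (1 - prevalence)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# Hypothetical: a positive screen with LR 14 at 1% prevalence
print(posttest_probability(0.01, 14))  # ≈ 0.124
```

An LR of 1 leaves the pretest probability unchanged, which is a quick consistency check on the formula.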

In studies that did not measure the false-negative rate, we assumed that there were no false-negative results. We calculated the observed prevalence by dividing the number of patients who had true-positive results by the number of screened patients. If these studies did, in fact, include patients with false-negative results, the observed prevalence was underestimated. Because the computation of the LR depends on an accurate measure of prevalence, LRs computed with an underestimated prevalence are inflated. We performed a sensitivity analysis to assess the effect on the estimated LR when the number of false negatives and the prevalence were varied over a reasonable range of values.
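Our sketch of such a sensitivity analysis, using hypothetical counts and an assumed range of false negatives (not figures from the review):

```python
def lr_given_false_negatives(confirmed_cancers, biopsies, n_screened, false_negs):
    # Recompute the positive LR after adding an assumed number of false
    # negatives back into the prevalence estimate; High PV+ is unchanged.
    high_pv = confirmed_cancers / biopsies
    prevalence = (confirmed_cancers + false_negs) / n_screened
    return (high_pv / (1 - high_pv)) / (prevalence / (1 - prevalence))

# Hypothetical study: 10 confirmed cancers, 80 biopsies, 1,000 screened.
# As the assumed false negatives rise, the prevalence rises and the LR falls,
# illustrating how assuming zero false negatives inflates the LR.
for fn in (0, 5, 10, 20):
    print(fn, round(lr_given_false_negatives(10, 80, 1000, fn), 1))
```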
