Appendix F. Criteria for Assessing the Quality of Individual Studies

Publication Details

Assessment of Internal Validity

To assess the internal validity of individual studies, the EPC adopted criteria from the U.S. Preventive Services Task Force and the NHS Centre for Reviews and Dissemination. To assess the quality of observational studies, we used criteria outlined by Deeks et al., 2003.1

For Controlled Trials

Assessment of Internal Validity

  1. Was the assignment to the treatment groups really random?
    • Adequate approaches to sequence generation:
      • Computer-generated random numbers
      • Random numbers tables
    • Inferior approaches to sequence generation:
      • Use of alternation, case record numbers, birth dates, or days of the week
    • Not reported
  2. Was the treatment allocation concealed?
    • Adequate approaches to concealment of randomization:
      • Centralized or pharmacy-controlled randomization
      • Serially-numbered identical containers
      • On-site computer-based system with a randomization sequence that is not readable until allocation
      • Other approaches that prevent foreknowledge of the allocation sequence by clinicians and patients
    • Inferior approaches to concealment of randomization:
      • Use of alternation, case record numbers, birth dates, or days of the week
      • Open random numbers lists
      • Serially numbered envelopes (even sealed opaque envelopes can be subject to manipulation)
    • Not reported
  3. Were the groups similar at baseline in terms of prognostic factors?
  4. Were the eligibility criteria specified?
  5. Were outcome assessors blinded to the treatment allocation?
  6. Was the care provider blinded?
  7. Was the patient kept unaware of the treatment received?
  8. Did the article include an intention-to-treat analysis or provide the data needed to calculate it (i.e., number assigned to each group, number of subjects who finished in each group, and their results)?
  9. Did the study maintain comparable groups?
  10. Did the article report attrition, crossovers, adherence, and contamination?
  11. Is there important differential loss to followup or overall high loss to followup? (Give numbers in each group.)

Assessment of External Validity (Generalizability)

  1. How similar is the study population to the population to which the intervention would be applied?
  2. How many patients were recruited?
  3. What were the exclusion criteria for recruitment? (Give numbers excluded at each step.)
  4. What was the funding source and role of funder in the study?
  5. Did the control group receive the standard of care?
  6. What was the length of followup? (Give numbers at each stage of attrition.)

For Observational Studies

Assessment of Internal Validity

  1. Were both groups selected from the same source population?
  2. Did both groups have the same risk of having the outcome of interest at baseline?
  3. Were subjects in both groups recruited over the same time period?
  4. Was there any obvious selection bias?
  5. Were ascertainment methods adequate and equally applied to both groups?
  6. Was an attempt made to blind the outcome assessors?
  7. Was the time of followup equal in both groups?
  8. Was overall attrition high (≥ 20%)?
  9. Was differential attrition high (≥ 15%)?
  10. Did the statistical analysis consider potential confounders or adjust for different lengths of followup?
  11. Was the length of followup adequate to assess the outcome of interest?

For Systematic Reviews and Meta-analyses

  1. Is the review based on a focused question of interest?
  2. Did the search strategy employ a comprehensive, systematic literature search?
  3. Are eligibility criteria for studies clearly described?
  4. Did at least two persons independently review studies?
  5. Did authors use a standard method of critical appraisal before including studies?
  6. Was publication bias assessed?
  7. Was heterogeneity assessed and addressed?
  8. Did statistical analysis maintain trials as the unit of analysis?

Reference

  1. Deeks JJ, Dinnes J, D'Amico R, Sowden AJ, Sakarovitch C, Song F, et al. Evaluating non-randomised intervention studies. Health Technol Assess. 2003;7(27):iii–x, 1–173.