NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Haney EM, O'Neil ME, Carson S, et al. Suicide Risk Factors and Risk Assessment Tools: A Systematic Review [Internet]. Washington (DC): Department of Veterans Affairs (US); 2012 Mar.



Criteria | Operationalization of Criteria a
1. Were the search methods reported?
Were the search methods used to find evidence (original research) on the primary questions stated?
“Yes” if the review states the databases used, date of most recent searches, and some mention of search terms.
a General instructions: The purpose of this index is to evaluate the scientific quality (i.e., adherence to scientific principles) of research overviews (review articles) published in the medical literature. It is not intended to measure literary quality, importance, relevance, originality, or other attributes of overviews.
The index is for assessing overviews of primary (“original”) research on pragmatic questions regarding causation, diagnosis, prognosis, therapy, or prevention. A research overview is a survey of research. The same principles that apply to epidemiological surveys apply to overviews: a question must be clearly specified, a target population identified and accessed, appropriate information obtained from that population in an unbiased fashion, and conclusions derived, sometimes with the help of formal statistical analysis, as is done in “meta-analyses.” The fundamental difference between overviews and epidemiological studies is the unit of analysis, not the scientific issues that the questions in this index address.
Since most published overviews do not include a methods section, it is difficult to answer some of the questions in the index. Base your answers, as much as possible, on the information provided in the overview. If the methods that were used are reported incompletely relative to a specific question, score it as “Can't tell,” unless there is information in the overview to suggest either that the criterion was or was not met.
2. Was the search comprehensive?
Was the search for evidence reasonably comprehensive?
“Yes” if the review searches at least 2 databases and looks at other sources (such as reference lists, hand searches, and queries to experts).
3. Were the inclusion criteria reported?
Were the criteria used for deciding which studies to include in the overview reported?
4. Was selection bias avoided?
Was bias in the selection of studies avoided?
“Yes” if the review reports how many studies were identified by searches, numbers excluded, and gives appropriate reasons for excluding them (usually because of pre-defined inclusion/exclusion criteria).
5. Were the validity criteria reported?
Were the criteria used for assessing the validity of the included studies reported?
6. Was validity assessed appropriately?
Was the validity of all the studies referred to in the text assessed using appropriate criteria (either in selecting studies for inclusion or in analyzing the studies that are cited)?
“Yes” if the review reports a validity assessment and did some type of analysis with it (e.g., sensitivity analysis of results according to quality ratings, exclusion of low-quality studies, etc.)
7. Were the methods used to combine studies reported?
Were the methods used to combine the findings of the relevant studies (to reach a conclusion) reported?
“Yes” for reviews that did a qualitative analysis if there is some mention that quantitative analysis was not possible, with reasons it could not be done, or if a “best evidence” or some other evidence-grading scheme was used.
For Question 8, if no attempt has been made to combine findings, and no statement is made regarding the inappropriateness of combining findings, check “No.” If a summary (general) estimate is given anywhere in the abstract, the discussion, or the summary section of the paper, and it is not reported how that estimate was derived, mark “No” even if there is a statement regarding the limitations of combining the findings of the studies reviewed. If in doubt, mark “Can't tell.”
For an overview to be scored as “Yes” in Question 9, data (not just citations) must be reported that support the main conclusions regarding the primary question(s) that the overview addresses.

The score for Question 10, the overall scientific quality, should be based on your answers to the first nine questions. The following guidelines can be used to assist with deriving a summary score: If the “Can't tell” option is used one or more times on the preceding questions, a review is likely to have minor flaws at best and it is difficult to rule out major flaws (i.e., a score of 4 or lower). If the “No” option is used on Question 2, 4, 6 or 8, the review is likely to have major flaws (i.e., a score of 3 or less, depending on the number and degree of the flaws).
8. Were the findings combined appropriately?
Were the findings of the relevant studies combined appropriately relative to the primary question the overview addresses?
“Yes” if the review performs a test for heterogeneity before pooling, does appropriate subgroup testing, appropriate sensitivity analysis, or other such analysis.
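The pre-pooling heterogeneity test mentioned above is commonly Cochran's Q, often reported alongside the I² statistic. As an illustrative sketch only (not part of the index), with hypothetical study effects and variances:

```python
def cochran_q(effects, variances):
    """Cochran's Q: weighted sum of squared deviations of each study's
    effect from the fixed-effect pooled estimate (weights = 1/variance)."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    return q, pooled

def i_squared(q, num_studies):
    """Higgins' I^2: fraction of total variation across studies that is
    due to heterogeneity rather than chance (0 when Q <= df)."""
    df = num_studies - 1
    return max(0.0, (q - df) / q) if q > 0 else 0.0

# Hypothetical per-study effect estimates (e.g., log odds ratios) and variances.
effects = [0.20, 0.35, 0.15, 0.60]
variances = [0.04, 0.09, 0.05, 0.16]
q, pooled = cochran_q(effects, variances)
print(f"Q={q:.3f}, pooled={pooled:.3f}, I2={i_squared(q, len(effects)):.3f}")
```

A high Q relative to its degrees of freedom (or a large I²) signals that pooling a single summary estimate may be inappropriate, which is exactly what this criterion asks reviewers to check before combining.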
9. Were the conclusions supported by the reported data?
Were the conclusions made by the author(s) supported by the data and/or analysis reported in the overview?
10. What was the overall scientific quality of the overview?
How would you rate the scientific quality of this overview?
Scoring: Each question is scored as Yes, Partially/Unclear, or No.
Overall quality (Question 10) ranges from Extensive Flaws, through Major Flaws and Minor Flaws, to Minimal Flaws.
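The guidelines for deriving the Question 10 summary score amount to a small decision rule. As a minimal sketch, assuming the 7-point Oxman-Guyatt overall-quality scale (7 = minimal flaws; the cutoffs of 4 and 3 come from the guidelines above), the ceiling those guidelines imply could be computed as:

```python
def overall_score_ceiling(answers):
    """Upper bound on the overall quality score (Question 10) implied by the
    guidelines in the text; the final score remains the assessor's judgment.

    answers: dict mapping question number (1-9) to "yes", "no", or "cant_tell".
    Assumes the 7-point Oxman-Guyatt scale, where 7 = minimal flaws.
    """
    ceiling = 7
    # Any "Can't tell": minor flaws at best, major flaws hard to rule out.
    if any(v == "cant_tell" for v in answers.values()):
        ceiling = min(ceiling, 4)
    # A "No" on Question 2, 4, 6, or 8: likely major flaws.
    if any(answers.get(q) == "no" for q in (2, 4, 6, 8)):
        ceiling = min(ceiling, 3)
    return ceiling

print(overall_score_ceiling({q: "yes" for q in range(1, 10)}))     # -> 7
print(overall_score_ceiling({1: "yes", 2: "no", 3: "cant_tell"}))  # -> 3
```

Note this computes only a ceiling: the guidelines say how low a score should go when flaws are flagged, while the final rating within that bound is left to the reviewer's judgment.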

Table created using information from Oxman & Guyatt, J Clin Epidemiol. 1991;44(11):1271-8 and Furlan, Clarke, et al., Spine. 2001 Apr 1;26(7):E155-62.

