As noted above, moving from systematic review to study design considerations starts with the identification of evidence gaps and their prioritization into research needs. At this point, the research needs can be further analyzed and characterized to become “operationalized.” A framework for doing so is presented in “Frameworks for Determining Research Gaps During Systematic Reviews.”5 The process of characterizing gaps may reveal important considerations for study design. The framework has two steps.
The first step is to assess why existing evidence is inadequate, using criteria based on those used for assessing SOE.6
- Insufficient or imprecise information (corresponding to simple lack of studies or the GRADE* concept of precision)
- Biased information (corresponding to bias)
- Inconsistent or unknown consistency results (corresponding to consistency)
- Not the right information (corresponding to directness or applicability)
The second step is to apply PICOTS criteria to the identified evidence gaps, specifying the population of interest, the proposed intervention, the comparator (treatment, test, or policy), the outcomes to be assessed, the timeframe for the study, and the setting of the research. Most research needs can be fully characterized with this framework, although research needs that are primarily methods enhancements (such as development of outcome measures or a statistical technique) may not fit such a schema.
Once the research needs have been fully characterized, it is possible to make suggestions about study designs. For example, if the problem is a simple lack of data, then a wide range of trial designs may be able to add to the picture. However, if the problem is lack of precision, statistical power may be an important factor in determining what kind of study will be able to answer the question. If bias is the problem, studies that replicate the same methodological problems are unlikely to resolve the question; the bias may stem from poorly designed or executed studies, or from the type of study design itself. For example, if the existing studies on a topic are mainly observational, there may be concern about unmeasured confounding. Inconsistency may be due to unidentified heterogeneity in the population or intervention, or to a lack of consensus on what the study measures should be. In the first case, it will be important to identify possible causes of heterogeneity and incorporate them into the new study design (for example, using inclusion criteria to create homogeneity, or through stratification). If the problem is “not the right information,” additional considerations apply. If a question cannot be answered because all existing studies measured only surrogate markers, additional studies of surrogate markers are unlikely to move the field forward. Similarly, if studies have been carried out only in academic settings but the intervention will be implemented in primary care, then a focus on the use of primary care networks would be useful. Study designs other than trials may be appropriate when methods enhancements are needed or when very long-term outcomes must be assessed in highly generalizable populations.
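The two-step framework above (classify why the evidence is inadequate, then characterize the need with PICOTS, then reason toward design considerations) can be sketched as a simple data structure. This is an illustrative sketch only: the field names, the placeholder PICOTS values, and the mapping from gap reason to design hint are assumptions made for illustration, not part of the published framework.

```python
from dataclasses import dataclass

# Step 1: reasons the existing evidence may be inadequate,
# mirroring the SOE-based criteria listed in the text.
GAP_REASONS = {
    "insufficient_or_imprecise",  # lack of studies / precision
    "biased",                     # risk of bias
    "inconsistent",               # consistency
    "not_right_information",      # directness / applicability
}

# Hypothetical mapping from gap reason to broad design considerations,
# paraphrasing the examples in the text.
DESIGN_HINTS = {
    "insufficient_or_imprecise": "Wide range of trial designs; attend to statistical power.",
    "biased": "Avoid replicating prior methodological problems; consider designs that limit confounding.",
    "inconsistent": "Identify sources of heterogeneity; use inclusion criteria or stratification.",
    "not_right_information": "Measure outcomes and settings of interest directly, not surrogates.",
}

@dataclass
class ResearchNeed:
    """Step 2: a research need characterized by gap reason plus PICOTS."""
    gap_reason: str      # one of GAP_REASONS
    population: str      # P
    intervention: str    # I
    comparator: str      # C
    outcomes: str        # O
    timeframe: str       # T
    setting: str         # S

    def design_hint(self) -> str:
        # Methods-enhancement needs may not fit the schema; fall through.
        return DESIGN_HINTS.get(self.gap_reason, "Does not fit schema; further characterization needed.")

# Placeholder example values for the PICOTS elements.
need = ResearchNeed(
    gap_reason="inconsistent",
    population="adults with condition X",
    intervention="intervention A",
    comparator="usual care",
    outcomes="patient-centered outcome Y",
    timeframe="12 months",
    setting="primary care",
)
print(need.design_hint())
```

A structure like this makes explicit that a design suggestion follows from the gap classification, while the PICOTS elements specify what the new study must cover.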
Throughout this and other FRN documents, we propose common terminology across EPCs and topic areas to aid in communication across disciplines.
*The GRADE (Grading of Recommendations Assessment, Development and Evaluation) working group has developed an approach to grading quality of evidence and strength of recommendations in health care. Its Web site is at www
Carey TS, Sanders GD, Viswanathan M, et al. Framework for Considering Study Designs for Future Research Needs [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2012 Mar. (Methods Future Research Needs Reports, No. 8.) From Systematic Review to Study Design Considerations.