Robinson KA, Saldanha IJ, Mckoy NA. Frameworks for Determining Research Gaps During Systematic Reviews [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2011 Jun. (Methods Future Research Needs Reports, No. 2.)
We used multiple resources and perspectives, including a literature review, contact with other EPCs and organizations involved in evidence synthesis, and consultation with experts at our institution, to develop a framework for identifying and characterizing research gaps. This framework involves two main components: explicitly identifying why the research gap exists and characterizing the gap using widely accepted key elements. The framework thus facilitates the use of a systematic method to identify research gaps.
Strengths
There are important strengths to the process we used to achieve our objective. First, our process drew on multiple resources and perspectives: a focused literature review and consultation with twelve other EPCs, thirty-seven organizations from around the world that are involved in evidence synthesis, and two technical experts from our institution. Second, we pilot tested the framework on two randomly selected AHRQ evidence reports. This pilot test did not identify any major problems with the framework, but it did identify the need for consistency and a priori decision making on the part of investigative team members.
There are also several strengths to the framework itself. First, it is based on widely accepted key elements (PICOS) of a well-designed research question. AHRQ also recommends that EPCs use the PICO elements during the topic refinement process. Second, the use of these elements will potentially make the identification of research gaps more systematic and therefore more useful. Third, for each underlying reason for a research gap, we have provided the corresponding domain/element in three common evidence grading systems (the EPC SOE system, the GRADE system, and the USPSTF grading system). We anticipate that this will enhance the use of the framework by leveraging work already being completed. Finally, in addition to indicating where the current evidence falls short, the framework indicates why the evidence falls short (the reasons for the existence of research gaps). Knowing where the gaps are and the reason(s) underlying their existence could help in designing the appropriate research to fill them.
The worksheet is simple to use and facilitates the presentation of research gaps. It is transparent and reproducible. A proposed format for presenting research gaps is provided below.
Proposed Format for Presenting Research Gaps and Research Questions
We did not find consistency in how research gaps were presented during our audit of the evidence reports. Some reports embedded them in text, while others used bullet-point lists, numbered lists, or tables.
We propose that, when writing the future research needs sections of evidence reports, investigative teams provide adequate detail about research gaps and translate them into research questions. When translating gaps into research questions, teams should incorporate all relevant PICOS elements. This would ensure that such questions are stand-alone and can be used more effectively by those designing research agendas. We propose that EPCs use the following format for presenting research gaps in evidence reports:
- Key Question Number and Key Question Topic
- Research Gap Number
  - Reason for Gap
  - Population (P)
  - Intervention (I)
  - Comparison (C)
  - Outcomes (O)
  - Setting (S)
- Research Question
Evidence reports often identify research gaps that do not relate to any specific key question. Such research gaps could be presented at the end of the future research section. We suggest using the same format as above, with Key Question Number and Key Question Topic replaced by “Other Research Gaps”.
An example of presenting two research gaps and the translated research questions is provided below, followed by an illustrative sketch of how such entries might be recorded in a structured form:
- Key Question I – What are the risks and benefits of oral diabetes agents (e.g., second-generation sulfonylureas and metformin) as compared to all types of insulin in women with gestational diabetes?
- Research Gap Number 1
  - Reason for Gap – biased information (randomized trials not identified)
  - Population (P) – women with gestational diabetes
  - Intervention (I) – metformin
  - Comparison (C) – any insulin
  - Outcomes (O) – neonatal hypoglycemia and NICU admissions
  - Setting (S) – any setting
- Research Question Number 1: What is the effectiveness of metformin compared to any insulin in reducing neonatal hypoglycemia and NICU admissions in women with gestational diabetes?
- Research Gap Number 2
  - Reason for Gap – insufficient information (sample sizes in studies too small)
  - Population (P) – women with insulin-requiring (type A2) gestational diabetes at 40 weeks of gestation
  - Intervention (I) – elective labor induction
  - Comparison (C) – expectant management
  - Outcomes (O) – emergency cesarean delivery (maternal) and macrosomia (neonatal)
  - Setting (S) – any setting
- Research Question Number 2: What is the effectiveness of elective labor induction compared to expectant management in preventing emergency cesarean delivery and neonatal macrosomia in women with insulin-requiring (type A2) gestational diabetes at 40 weeks of gestation?
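To illustrate how entries in the proposed format might be captured in a structured, machine-readable form, the sketch below represents a single worksheet entry as a Python record and assembles a draft research question from its PICOS elements. The `ResearchGap` class, its field names, and the `to_research_question` helper are illustrative assumptions rather than part of the report; the elements and example values are taken from Research Gap Number 1 above, and the generated wording would still need editing by the investigative team.

```python
from dataclasses import dataclass


@dataclass
class ResearchGap:
    """One research-gap worksheet entry, following the proposed format
    (reason for the gap plus the PICOS elements). Hypothetical structure."""
    key_question: str      # Key Question number/topic, or "Other Research Gaps"
    gap_number: int
    reason_for_gap: str    # e.g., "insufficient information (sample sizes too small)"
    population: str        # P
    intervention: str      # I
    comparison: str        # C
    outcomes: str          # O
    setting: str           # S

    def to_research_question(self) -> str:
        """Draft a stand-alone research question that incorporates the PICOS
        elements; investigative teams would refine the wording."""
        return (
            f"What is the effectiveness of {self.intervention} compared to "
            f"{self.comparison} in terms of {self.outcomes} in {self.population}?"
        )


# Example populated from Research Gap Number 1 above.
gap1 = ResearchGap(
    key_question="Key Question I: oral diabetes agents vs. all types of insulin",
    gap_number=1,
    reason_for_gap="biased information (randomized trials not identified)",
    population="women with gestational diabetes",
    intervention="metformin",
    comparison="any insulin",
    outcomes="neonatal hypoglycemia and NICU admissions",
    setting="any setting",
)
print(gap1.to_research_question())
```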
Limitations and Future Research
We identified limited use of formal processes, including frameworks, for identifying research gaps. This prevented us from answering subquestions 1.b. and 1.c., as we were unable to compare existing methods for identifying and presenting research gaps. Further refinement of the framework we propose, and the development of other frameworks, would allow future research to assess the relative usefulness of different frameworks.
A limitation of the framework we have developed is that it does not explicitly account for the specificity of research gaps. Team members could differ in the number of research gaps abstracted, depending on whether gaps are abstracted at the level of the key question or the subquestion. We therefore suggest that a priori decisions be made about the level of specificity to be used and that investigative teams apply it consistently. This decision would likely depend on the topic of interest as well as the specific intervention and comparison. The benefit of being specific needs to be weighed against the time required and the need to identify each specific intervention and comparison of interest.
In identifying research gaps, we suggest that investigative teams be consistent and decide a priori about the specificity of the research gaps to be identified and presented. It is also important to be consistent and decide a priori which reason will be selected when a gap arises because only one study is identified (i.e., insufficient information or unknown consistency). When identifying the reasons why a research gap exists, team members must remember to pick the main reason(s) that prevent conclusions from being made and to be as specific as possible. This would help in designing the appropriate research to fill the gap.
Our framework calls for identifying the most important reason(s) for existence of research gaps (i.e., reasons that most preclude conclusions from being made). However, there may often be more than one main reason why a research gap exists. Team members could differ on the relative importance of these reasons. Decisions on the relative importance of these reasons are often arbitrary. More research is needed to determine if a hierarchy or ranking system can be established to aid these decisions.
Applying the framework to identify research gaps was quite challenging for our investigative team. Much of this difficulty was due to our team being unfamiliar with the evidence reports and trying to apply the framework retrospectively. We suggest that the same investigative team that synthesizes the evidence apply the framework while writing the results. We also suggest that investigative teams working on evidence reports use the EPC SOE system for grading the evidence. If this is done, teams can leverage the work done in preparing the SOE tables to identify research gaps.
Our pilot test relied on applying the worksheet retrospectively to existing evidence reports. Further evaluation is needed to see how the framework performs with other types of reports or questions. Evaluation is also needed to determine whether the gaps identified using the framework differ from those identified using current methods. This could be assessed by examining the number of gaps identified and the perceived usefulness of those gaps, as judged by potential stakeholders (usefulness could be further defined in terms of actionable gaps, important gaps, etc.). Future research could have other EPCs use the worksheet during the drafting of an evidence report. Another evaluation could have some members of an EPC team use the worksheet, and others not, to compare the process and the outcome (i.e., the future research section). Further, the format for presenting research gaps could be evaluated for clarity and ease of use by other EPCs as well as by other relevant stakeholders, including researchers and funders.