NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Robinson KA, Akinyede O, Dutta T, et al. Framework for Determining Research Gaps During Systematic Review: Evaluation [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2013 Feb.



We completed four steps as outlined below.

Review and Revise Framework and Develop Detailed Instructions

The framework and instructions were reviewed by team members, some of whom were not involved in the initial project. The framework and instructions were modified based on discussion.

Test Framework and Instructions Through Application to Existing Systematic Reviews

We tested the application of the revised framework and instructions with a sample of 50 systematic reviews of randomized controlled trials of clinical topics.

Identification and Selection of Systematic Reviews

We applied the framework to all eligible EPC reports from 2009 to 2011. (Reports from 2007 to 2008 were included in the audit conducted in our prior report.) We searched the AHRQ Web site for reports posted from January 1, 2009, to December 12, 2011. We retrieved reports for consideration by selecting the heading “Search for Guides, Reviews, and Reports,” then selecting “Research Reviews” under Report Types and “Final” under Project Status.

We also applied the framework to a random sample of Cochrane systematic reviews from 2009 to 2011. We searched The Cochrane Database of Systematic Reviews for reviews published from January 1, 2009, to December 12, 2011. The search was completed by selecting the date range 2009-2011, all issues, and restricting to “reviews.”

Search results for the EPC reports and Cochrane reviews were screened serially by two team members using title and abstract to identify systematic reviews that:

  • were published or completed within the time range of interest
  • represented final or complete reviews
  • addressed a clinical topic
  • addressed questions about effectiveness or comparative effectiveness of therapies
  • included randomized controlled trials

All eligible EPC reports were included. All Cochrane reviews were entered, each with a corresponding autogenerated reference number, into a spreadsheet for random selection. Randomly selected Cochrane reviews were then screened using the criteria and process described above. We selected the number of Cochrane reviews that, when added to the included EPC reports, would equal a combined total of 50 systematic reviews.
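The sampling step above can be sketched as follows. This is an illustrative reconstruction, not the authors' actual procedure: the identifiers, list sizes, and the fixed seed are all assumptions made for the example.

```python
import random

def sample_reviews(epc_reports, cochrane_reviews, target_total=50, seed=0):
    """Randomly select enough Cochrane reviews so that, combined with
    all eligible EPC reports, the total equals target_total.

    Hypothetical sketch: argument names and the seed are illustrative,
    not taken from the report.
    """
    n_needed = target_total - len(epc_reports)
    rng = random.Random(seed)  # fixed seed for a reproducible draw
    return rng.sample(cochrane_reviews, n_needed)  # without replacement

# Illustrative data: 20 EPC reports and a pool of 500 Cochrane reviews
# (reference numbers here stand in for the autogenerated spreadsheet IDs).
epc = [f"EPC-{i:02d}" for i in range(1, 21)]
cochrane = [f"CD{n:06d}" for n in range(1, 501)]

selected = sample_reviews(epc, cochrane)
print(len(epc) + len(selected))  # 50
```

Sampling without replacement ensures no Cochrane review is selected twice; each selected review would then still need to pass the title-and-abstract screen described above.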

Application of Framework to Systematic Reviews

Four team members applied the framework to the 50 systematic reviews, working as pairs of independent reviewers for each systematic review. Each reviewer had a background in epidemiology and was specifically trained in the use of the framework. To track progress and maintain the results, the framework worksheet was translated into forms on DistillerSR (Evidence Partners, Ottawa, ON, Canada), and full-text articles of all eligible systematic reviews were uploaded. Pilot testing of the revised framework (from Review and Revise Framework and Develop Detailed Instructions above) was conducted in October and November 2011. A training session on the use of the framework as translated into online forms was held December 9, 2011. Pilot testing of the system in DistillerSR was completed at the end of December 2011, with abstraction starting December 22, 2011. Abstraction was completed by April 1, 2012.

Reviewers were asked to track and share any issues encountered in applying the framework. A comparison of the information abstracted by each reviewer was also completed to highlight any discrepancies that might indicate issues to address in the framework or instructions. A third team member reviewed all abstractions and brought forward to the team any apparent discrepancies or issues in the characterization of gaps or the reasons for gaps. These were discussed, common issues were identified, and responses were determined (i.e., revisions to the framework or instructions).
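The dual-reviewer comparison described above can be sketched as a field-by-field diff of the two independent abstractions. This is a hypothetical illustration: the field names and recorded values are invented for the example and do not come from the actual framework worksheet.

```python
def find_discrepancies(abstraction_a, abstraction_b):
    """Return the fields where two independent reviewers' abstractions
    disagree, mapped to the pair of conflicting values.

    Hypothetical sketch: each abstraction is a dict of framework fields
    to recorded values; field names are illustrative only.
    """
    fields = set(abstraction_a) | set(abstraction_b)
    return {
        f: (abstraction_a.get(f), abstraction_b.get(f))
        for f in fields
        if abstraction_a.get(f) != abstraction_b.get(f)
    }

# Example: two reviewers agree on the population but differ on the
# reason assigned to a research gap.
reviewer_1 = {"gap_reason": "insufficient information", "population": "adults"}
reviewer_2 = {"gap_reason": "biased information", "population": "adults"}

print(find_discrepancies(reviewer_1, reviewer_2))
# {'gap_reason': ('insufficient information', 'biased information')}
```

Flagged fields like `gap_reason` here would correspond to the discrepancies brought forward to the team for adjudication and, where a pattern emerged, to revisions of the framework or instructions.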

Evaluate Implementation of Framework

We issued multiple invitations to the 14 EPCs to apply the framework to identify gaps in one or more of their projects. An invitation was issued during presentations at both the spring and fall 2011 EPC Directors' meetings, as well as via email (January 2012). EPCs were informed that any costs for participation in this, as for other methods projects or workgroups, could be covered under a general task order through the EPC program. EPCs agreeing to participate were sent reminders in May, June, and July 2012.

An evaluation form was developed (Appendix B) to solicit structured feedback from the EPCs. Open-ended questions requested feedback on specific advantages and challenges encountered in applying the framework. There was no restriction, in terms of question(s) or study design, on the type of review or future research needs (FRN) project to which an EPC could apply the framework. (FRNs are projects within the EPC program that engage various stakeholders to develop and prioritize future research needs identified from EPC evidence reviews.) EPCs were asked to submit a completed evaluation form after use of the framework. EPCs were not asked to submit completed framework worksheets.

Revise and Finalize Framework and Instructions

Based on results of the evaluations, our team revised the framework and instructions.

Peer Review and Public Commentary

A draft of this report was reviewed by AHRQ representatives and peer reviewers, and was posted for public view and comment. Comments received were reviewed and a report of comments and their disposition was prepared and submitted with the revised report.

