Identification of Evidence Gaps

We searched Medline from September 16, 2009, through August 11, 2011, for RCTs and observational studies with comparison groups to determine whether publications subsequent to the 2010 Alberta EPC systematic review had partially filled the previously identified knowledge gaps. We replicated the search strategy used in the original report. The search included a list of terms intended to identify all research publications on rotator cuff tears in adults. We further limited searches for trials with terms identifying the intervention types of interest, the presence of a comparison group, or the term “minimally important clinical difference.” Full search algorithms are available in Appendix A. Publications were assessed for relevance at the title and abstract level; applicable studies were retrieved, reviewed, and matched to one or more existing rotator cuff research gaps.

We searched for relevant ongoing studies using simple search terms such as “rotator cuff tear.” Trial records were reviewed for relevance based on the patient population of adults with acute or chronic partial- or full-thickness rotator cuff tears. Applicable studies were reviewed and matched to one or more existing research gap areas.

Health services and clinical outcomes researchers and orthopedic surgeons of the Minnesota EPC augmented the list of research gaps from the Rotator Cuff systematic review with additional scientific or methodological knowledge gaps identified before or during the update of the literature and review of ongoing studies. Examples of items added for stakeholder discussion included the impact of provider experience, degree of provider specialization, and surgical quality on outcomes, as well as the need for a clear definition of “rotator cuff integrity” for both nonoperative and postoperative patients.

Criteria for Prioritization

Stakeholders received the Agency for Healthcare Research and Quality’s (AHRQ’s) Selection Criteria for research topics prior to the conference calls. During each call, we asked stakeholders to identify the criteria most salient to rotator cuff research. No weights were assigned to the criteria; stakeholder responses were instead summarized inductively. A comments area was made available for stakeholders to note whether particular criteria weighed more heavily in their decisions.

Engagement of Stakeholders

This section provides a brief overview of the methods used for this project. Please refer to Appendixes B through G for more detailed project methods.

We formed a 12-member stakeholder group with broad representation from orthopedic surgeon-researchers, nonoperative health care providers and nonoperative clinician-researchers, professional organizations, federal research funders, payers, and consumers. Because many research gaps were methodological in nature, we particularly sought stakeholders who were familiar with current rotator cuff research practices and knowledgeable about research design.

Between September and November 2011, stakeholders participated in at least one of four conference calls, during which they discussed the state of rotator cuff research and provided feedback on the initial list of research gaps. Consumers were convened on a separate call to ensure adequate time for discussion of research and care issues in less-technical language than was standard on the professional stakeholder calls. All other calls involved stakeholders according to their scheduling convenience; stakeholders were not segregated by profession or perspective. Prior to the conference calls, all stakeholders were provided a written memo on the background and purpose of the project, the original report’s Executive Summary, the initial list of research gaps, a list of new publications and ongoing studies since the original report, the Effective Health Care Selection Criteria, and an agenda.

Email was used for all other stakeholder contact. All 12 stakeholders received a summary of the conference calls and a revised research gap list based on stakeholder input, and they were asked to provide further comment or clarification if warranted. We conducted a prioritization activity with 10 of the stakeholders using web-based ranking software developed by the Research Triangle Institute/University of North Carolina EPC.

Handling Conflicts of Interest

Forms for disclosure of conflicts of interest were collected from all stakeholders. No one was prohibited from participating on the basis of disclosures; however, the forms would have allowed us to temper any stakeholder’s contributions had a conversation topic warranted it.

Because the conference call discussions focused on how the field should address the existing research gaps, rather than on developing specific research questions, it was unlikely that researchers would gain an unfair advantage for future research proposals. Stakeholders ranked specific scientific topics using web-based software during the prioritization exercise, so researchers and funders were blinded to one another’s stated opinions.

Prioritizing Research

Ten stakeholders (nine nonfederal employees, including one consumer, and one federal employee) were asked to prioritize methods-related issues separately from scientific research topics. All 10 stakeholders were provided a limited number of stars with which to indicate their high-priority issues or topics: six stars for the 17 methods issues and nine stars for the 27 scientific questions. A stakeholder could assign up to three stars to an issue or topic deemed highly important.

Priority scores were calculated by summing the stars assigned by all stakeholders to a methods issue or scientific topic; items in the top quartile of scores were designated high priority. Unweighted scores were also calculated based on the number of stakeholders voting for an issue or topic, by collapsing multiple star assignments by any one stakeholder into a count of one for that item.
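The scoring procedure above can be illustrated with a short sketch (hypothetical data; the topic names and the tie-handling at the quartile boundary are assumptions, as the report does not specify them):

```python
# Sketch of the star-based scoring: each stakeholder assigns 0-3 stars per
# topic. Priority score = total stars across stakeholders; unweighted score =
# number of stakeholders assigning at least one star to the topic.

def score_topics(star_assignments):
    """star_assignments maps topic -> list of stars (one entry per stakeholder).
    Returns (priority, unweighted) score dictionaries."""
    priority = {t: sum(stars) for t, stars in star_assignments.items()}
    unweighted = {t: sum(1 for s in stars if s > 0)
                  for t, stars in star_assignments.items()}
    return priority, unweighted

def top_quartile(priority):
    """Topics whose priority score falls in the top quartile of all scores
    (a simple rank-based cutoff; assumed, since ties are not addressed)."""
    scores = sorted(priority.values(), reverse=True)
    cutoff = scores[max(0, len(scores) // 4 - 1)]
    return {t for t, s in priority.items() if s >= cutoff}

# Hypothetical example: four topics, three stakeholders.
stars = {"Topic A": [3, 2, 0], "Topic B": [1, 0, 0],
         "Topic C": [0, 1, 1], "Topic D": [0, 0, 0]}
priority, unweighted = score_topics(stars)
```

Here "Topic A" receives a priority score of 5 but an unweighted score of 2, since only two stakeholders voted for it; the unweighted score guards against a single stakeholder's multiple stars dominating the ranking.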