J Med Internet Res. 2017 Oct 6;19(10):e341. doi: 10.2196/jmir.8821.

The Promise and Pitfalls of Using Crowdsourcing in Research Prioritization for Back Pain: Cross-Sectional Surveys.

Author information

Surgical Outcomes Research Center, Department of Surgery, University of Washington, Seattle, WA, United States.
Comparative Effectiveness, Cost and Outcomes Research Center, Department of Radiology, University of Washington, Seattle, WA, United States.
Center for Biomedical Statistics, Department of Biostatistics, University of Washington, Seattle, WA, United States.
Department of Health Services, School of Public Health, University of Washington, Seattle, WA, United States.



BACKGROUND: Involving patients in research better aligns evidence generation with the gaps patients themselves face when making decisions about health care. However, obtaining patients' perspectives is challenging. Amazon's Mechanical Turk (MTurk) has gained popularity over the past decade as a crowdsourcing platform for reaching large numbers of individuals who perform tasks for a small reward, at low cost to the investigator. The appropriateness of such crowdsourcing methods in medical research has yet to be clarified.


OBJECTIVE: The goals of this study were to (1) understand how those on MTurk who screen positive for back pain prioritize research topics compared with those who screen negative, and (2) determine the qualitative differences in open-ended comments between the groups.


METHODS: We conducted cross-sectional surveys on MTurk to assess participants' back pain and allow them to prioritize research topics. We paid respondents US $0.10 to complete the 24-point Roland Morris Disability Questionnaire (RMDQ) to categorize participants as those "with back pain" and those "without back pain," then offered both those with (RMDQ score ≥7) and those without back pain (RMDQ <7) an opportunity to rank their top 5 (of 18) research topics for an additional US $0.75. We compared demographic information and research priorities between the 2 groups and performed qualitative analyses on free-text commentary that participants provided.


RESULTS: We conducted 2 screening waves. We first screened 2189 individuals for back pain over 33 days and invited 480 (21.93%) who screened positive to complete the prioritization, of whom 350 (72.9% of eligible) did. We later screened 664 individuals over 7 days and invited 474 (71.4%) without back pain to complete the prioritization, of whom 397 (83.7% of eligible) did. Those with back pain who completed the prioritization were comparable with those without in terms of age, education, marital status, and employment. The group with back pain had a higher proportion of women (234, 67.2% vs 229, 57.8%, P=.02). The groups' rank lists of research priorities were highly correlated: the Spearman correlation coefficient was .88 when considering topics ranked in the top 5. The 2 groups agreed on 4 of the top 5 and 9 of the top 10 research priorities.


CONCLUSIONS: Crowdsourcing platforms such as MTurk support efforts to efficiently reach large groups of individuals to obtain input on research activities. In the context of back pain, a prevalent and easily understood condition, the rank list of those with back pain was highly correlated with that of those without back pain. However, subtle differences in the content and quality of free-text comments suggest supplemental efforts may be needed to augment the reach of crowdsourcing in obtaining perspectives from patients, especially from specific populations.


KEYWORDS: Amazon Mechanical Turk; MTurk; back pain; comparative effectiveness research; crowdsourcing; low back pain; patient engagement; patient participation; research prioritization; stakeholder engagement

[Indexed for MEDLINE]
Free PMC Article
