J Med Internet Res. 2004 Jul-Sep; 6(3): e34.
Published online Sep 29, 2004. doi: 10.2196/jmir.6.3.e34
PMCID: PMC1550605

Improving the Quality of Web Surveys: The Checklist for Reporting Results of Internet E-Surveys (CHERRIES)

Gunther Eysenbach, MD, MPH, corresponding author
Gunther Eysenbach, University of Toronto, University Health Network, 190 Elizabeth Street, Toronto ON M5G 2C4, Canada. Phone: +1 416 340 4800 ext 6427; Fax: +1 416 340 3595; Email: geysenba@uhnres.utoronto.ca
Reviewed by M Schonlau

Abstract

Analogous to checklists of recommendations such as the CONSORT statement (for randomized trials) or the QUOROM statement (for systematic reviews), which are designed to ensure the quality of reports in the medical literature, the Journal of Medical Internet Research (JMIR) presents a checklist of recommendations for authors in an effort to ensure complete descriptions of Web-based surveys. Papers on Web-based surveys reported according to the CHERRIES statement will give readers a better understanding of the sample (self-)selection and how it may differ from a “representative” sample. It is hoped that author adherence to the checklist will increase the usefulness of such reports.

Introduction

The Internet is increasingly used for online surveys and Web-based research. In this issue of the Journal of Medical Internet Research we publish two methodological studies exploring the characteristics of Web-based surveys compared to mail-based surveys [1,2]. In previous issues we have published Web-based research such as a survey among physicians conducted on a Web site [3].

As explained in an accompanying editorial [4] as well as in a previous review [5], such surveys can be subject to considerable bias. In particular, bias can result from (1) the non-representative nature of the Internet population and (2) the self-selection of participants (volunteer effect). Online surveys often have a very low response rate (if the number of visitors is used as the denominator), so there is considerable debate about their validity. The editor and peer reviewers of this journal are frequently faced with the question of whether to accept for publication studies reporting results from Web surveys (or email surveys). There is no easy answer to this question; often it “just depends”. It depends on the reasons for the survey in the first place, its execution, and the authors' conclusions. Conclusions drawn from a convenience sample are limited and need to be qualified in the discussion section of a paper. On the other hand, unlike many other journals, we will not routinely reject reports of Web surveys, even surveys with the very small response rates typical of electronic surveys; rather, we decide on a case-by-case basis whether the conclusions drawn from a Web survey are valid and useful for readers. Web surveys may be useful for generating hypotheses that need to be confirmed in a more controlled environment, for pilot testing a questionnaire, or for conducting a Web-based experiment. Statistical methods such as propensity scores may be used to adjust results [4]. Again, it all depends on why and how the survey was done.
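
As a rough illustration of the propensity-score idea discussed in [4], and not a method prescribed by this journal, one can model each respondent's probability of belonging to the Web sample rather than to a conventional probability-based reference sample, and then reweight the Web respondents accordingly. The sketch below assumes such a reference sample exists and shares demographic covariates with the Web sample; all names are illustrative.

    # Minimal sketch of propensity-score weighting for a Web survey.
    # Assumes a probability-based reference sample with the same
    # demographic covariates is available; not taken from the paper.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def propensity_weights(web_X, ref_X):
        """Weight Web respondents so their covariate mix resembles
        the reference sample's. Inputs are 2D covariate arrays."""
        X = np.vstack([web_X, ref_X])
        # 1 = Web respondent, 0 = reference-sample respondent
        y = np.concatenate([np.ones(len(web_X)), np.zeros(len(ref_X))])
        p = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(web_X)[:, 1]
        w = (1 - p) / p        # odds of reference-sample membership given covariates
        return w / w.mean()    # normalize weights to mean 1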

Every biased sample is an unbiased sample of another target population; it is sometimes just a question of defining for which subset of a population the conclusions drawn are assumed to be valid. For example, the polling results on the CNN Web site are certainly highly biased and not representative of the US population, but it is legitimate to assume that they are “representative” of visitors to the CNN Web site who choose to participate in the online survey.

This illustrates the critical importance of carefully describing how and in what context the survey was done, how the responding sample is constituted, and how it might differ from a representative population-based sample. For example, it is very important to describe the content and nature of the Web site where the survey was posted in order to characterize the population of respondents. A survey on an anti-vaccination Web site run by concerned parents will have a different visitor structure than, for example, a vaccination clinic site. It is also important to describe in sufficient detail exactly how the questionnaire was administered: for example, was completing it mandatory for every visitor who wanted to enter the Web site, or were incentives offered? A mandatory survey is likely to reduce volunteer bias.

Analogous to checklists of recommendations such as the CONSORT statement (for randomized trials) or the QUOROM statement (for systematic reviews), which are designed to ensure the quality of reports in the medical literature, JMIR presents a checklist of recommendations for authors in an effort to ensure complete descriptions of e-survey methodology. Papers reported according to the CHERRIES statement will give peer reviewers and readers a better understanding of the sample selection and its possible differences from a “representative” sample.

The CHERRIES Checklist

We define an e-survey as an electronic questionnaire administered on the Internet or an Intranet. Although many of the CHERRIES items are also valid for surveys administered via e-mail, the checklist focuses on Web-based surveys.

While most items on the checklist are self-explanatory, a few comments about the “response rate” are in order. In traditional surveys, investigators usually report a response rate (the number of people who completed the questionnaire divided by the number of people who were presented with it) to allow some estimation of the degree of representativeness and bias. Surveys with response rates lower than about 70% (an arbitrary cut-off point!) are usually viewed with skepticism.

In online surveys, there is no single response rate. Rather, there are multiple potential ways to calculate one, depending on what is chosen as the numerator and denominator. As there is no standard methodology, we suggest avoiding the term “response rate” and have defined how, at least in this journal, the response metrics we call the view rate, participation rate, and completion rate should be calculated, as sketched below.
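
To make the distinction concrete, the following sketch computes the three metrics as successive funnel fractions. The authoritative numerator and denominator definitions are those given in Table 1; the counts used here (unique site visitors, unique visitors to the survey page, and so on) are illustrative.

    # Minimal sketch of the three funnel metrics named above; the
    # authoritative definitions are those in Table 1 (CHERRIES).
    def view_rate(survey_page_visitors: int, site_visitors: int) -> float:
        """Unique visitors who saw the survey page / unique site visitors."""
        return survey_page_visitors / site_visitors

    def participation_rate(agreed: int, survey_page_visitors: int) -> float:
        """Unique visitors who agreed to participate / unique survey-page visitors."""
        return agreed / survey_page_visitors

    def completion_rate(completed: int, agreed: int) -> float:
        """Users who completed the survey / users who agreed to participate."""
        return completed / agreed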

A common concern with online surveys is that a single user may fill in the same questionnaire multiple times; some users like to return to the survey and experiment with modified entries. Several methods are available to prevent this, or at least to minimize the chance of it happening (eg, cookies or log-file/IP address analysis).
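
As one minimal sketch of the log-file approach, assuming each submission record carries an IP address and a user-agent string (the field names here are illustrative), duplicates can be collapsed to the first entry per apparent user. Cookies or login tokens are stronger identifiers, since several users may share one IP address behind a proxy.

    # Minimal sketch: keep only the first submission per (IP, user agent)
    # pair. Field names are illustrative; cookie- or login-based
    # identifiers are more reliable than IP addresses alone.
    def deduplicate(submissions):
        """submissions: iterable of dicts with 'ip' and 'user_agent' keys."""
        seen = set()
        unique = []
        for s in submissions:
            key = (s["ip"], s["user_agent"])
            if key not in seen:
                seen.add(key)
                unique.append(s)
        return unique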

Investigators should also state whether the completion or internal consistency of certain (or all) items was enforced using JavaScript (ie, displaying an alert before the questionnaire can be submitted) or server-side techniques (ie, after submission, re-displaying the questionnaire and highlighting mandatory but unanswered items or items answered inconsistently).
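
The server-side variant might look like the sketch below, which checks submitted answers against a list of mandatory items so that the questionnaire can be re-displayed with the unanswered ones highlighted. The item names are illustrative, not taken from any actual questionnaire.

    # Minimal sketch of a server-side completeness check; the item
    # names are illustrative.
    MANDATORY_ITEMS = ["age", "sex", "uses_internet_daily"]

    def missing_mandatory(answers: dict) -> list:
        """Return mandatory items that are absent or left blank."""
        return [item for item in MANDATORY_ITEMS
                if not str(answers.get(item, "")).strip()]

    # Example: re-display the form, highlighting whatever is returned.
    answers = {"age": "34", "sex": ""}
    print(missing_mandatory(answers))  # -> ['sex', 'uses_internet_daily']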

The hope is that the CHERRIES checklist provides a useful starting point for investigators reporting results of Web surveys. The editor and peer reviewers of this journal ask authors to ensure that they report the methodology fully and according to the CHERRIES checklist before submitting manuscripts.

Table 1. Checklist for Reporting Results of Internet E-Surveys (CHERRIES)

References

1. Ritter Philip, Lorig Kate, Laurent Diana, Matthews Katy. Internet versus mailed questionnaires: a randomized comparison. J Med Internet Res. 2004 Sep 15;6(3):e29. doi: 10.2196/jmir.6.3.e29. http://www.jmir.org/2004/3/e29/
2. Leece Pam, Bhandari Mohit, Sprague Sheila, Swiontkowski Marc F, Schemitsch Emil H, Tornetta Paul, Devereaux PJ, Guyatt Gordon H. Internet versus mailed questionnaires: a randomized comparison (2). J Med Internet Res. 2004 Sep 24;6(3):e30. doi: 10.2196/jmir.6.3.e30. http://www.jmir.org/2004/3/e30/
3. Potts Henry WW, Wyatt Jeremy C. Survey of doctors' experience of patients using the Internet. J Med Internet Res. 2002 Mar 31;4(1):e5. doi: 10.2196/jmir.4.1.e5. http://www.jmir.org/2002/1/e5/
4. Schonlau Matthias. Will web surveys ever become part of mainstream research? J Med Internet Res. 2004 Sep 23;6(3):e31. doi: 10.2196/jmir.6.3.e31. http://www.jmir.org/2004/3/e31/
5. Eysenbach Gunther, Wyatt Jeremy. Using the Internet for surveys and health research. J Med Internet Res. 2002 Nov 22;4(2):e13. doi: 10.2196/jmir.4.2.e13. http://www.jmir.org/2002/2/e13/
