J Med Internet Res. 2007 Jul-Sep; 9(3): e25.
Published online Sep 30, 2007. doi: 10.2196/jmir.9.3.e25
PMCID: PMC2047288

Response Rate and Completeness of Questionnaires: A Randomized Study of Internet Versus Paper-and-Pencil Versions

Sissel Marie Kongsved, MS,1 Maja Basnov, MS,1 Kurt Holm-Christensen, MD,2 and Niels Henrik Hjollund, MD, PhDcorresponding author1,3
1Department of Clinical Social Medicine, Centre of Public Health, Central Denmark Region, Aarhus, Denmark
2Department of Radiology, Randers Regional Hospital, Randers, Denmark
3Department of Clinical Social Medicine, Institute of Public Health, Aarhus University, Aarhus, Denmark
Niels Henrik Hjollund, Department of Clinical Social Medicine, Institute of Public Health, Aarhus University, Olof Palmes Allé 15, 8200 Aarhus N, Denmark, Phone: +45 87 28 47 40, Fax: +45 87 28 47 02, nhh@folkesundhed.au.dk.
Reviewed by Jørn Olsen

Abstract

Background

Research in quality of life traditionally relies on paper-and-pencil questionnaires. Easy access to the Internet has inspired a number of studies that use the Internet to collect questionnaire data. However, Internet-based data collection may differ from traditional methods with respect to response rate and data quality as well as the validity and reliability of the involved scales.

Objective

We used a randomized design to compare a paper-and-pencil questionnaire with an Internet version of the same questionnaire with respect to differences in response rate and completeness of data.

Methods

Women referred for mammography at a Danish public hospital from September 2004 to April 2005, aged less than 67 years and without a history of breast cancer, were eligible for the study. The women received the invitation to participate along with the usual letter from the Department of Radiology. A total of 533 women were invited to participate. They were randomized to receive either a paper questionnaire, with a prepaid return envelope, or a guideline on how to fill in the Internet-based version online. The questionnaire consisted of 17 pages with a total of 119 items, including the Short Form-36, Multidimensional Fatigue Inventory-20, Hospital Anxiety and Depression Scale, and questions regarding social status, education level, occupation, and access to the Internet. Nonrespondents received a postal reminder giving them the option of filling out the other version of the questionnaire.

Results

The response rate before the reminder was 17.9% for the Internet group compared to 73.2% for the paper-and-pencil group (risk difference 55.3%, P < .001). After the reminder, when the participant could choose between versions of the questionnaire, the total response rate for the Internet and paper-and-pencil group was 64.2% and 76.5%, respectively (risk difference 12.2%, P = .002). For the Internet version, 97.8% filled in a complete questionnaire without missing data, while 63.4% filled in a complete questionnaire for the paper-and-pencil version (risk difference 34.5%, P < .001).

Conclusions

The Internet version of the questionnaire was superior with respect to completeness of data, but the response rate in this population of unselected patients was low. The general population must become more familiar with the Internet before an online survey can be researchers' first choice, although it is worth considering within selected patient populations, as it saves resources and yields more complete answers. An Internet version may be combined with the traditional version of a questionnaire, and in follow-up studies of patients it may be more feasible to offer Internet versions.

Keywords: Questionnaire design, random allocation, Internet, postal service, evaluation, data collection methodology

Introduction

Research in quality of life traditionally relies on paper-and-pencil questionnaires. Internet surveys may have advantages compared to traditional paper-and-pencil surveys with respect to turnaround time, expenses, and data management [1]. However, Internet-based data collection may differ from traditional methods with respect to response rate and data quality as well as the validity and reliability of the involved scales. Only a few studies have systematically evaluated Internet-based survey methods. The main questions have addressed validity [2-7], response rate, response speed, and completeness of data [1,6-14].

Most studies report small differences in answers obtained in Internet and paper-and-pencil versions of questionnaires [2-7]. Pealer et al found no significant difference in response rates, the Internet version having a response rate of 62% compared to 58% for the paper-and-pencil version [12]. Ritter et al found a high response rate in both groups of a study population recruited on the Internet: 87% in the Internet group and 83% in the paper-and-pencil group [9]. These studies either recruited their participants on the Internet or invited only participants with a known active email account, and, as a consequence, their results are not valid for a general population of patients. A Swedish study conducted in a general population sample obtained a response rate of 50% in the Internet group and 64% in the paper-and-pencil group; the method included two reminders, one of which was by telephone [6]. In contrast, a workplace health survey that did not include a reminder procedure observed a poor response rate in the Internet group (19%) compared to the paper-and-pencil group (72%) [1].

Overall, the results with respect to differences in response rate are inconsistent, which may reflect differences in methodology and populations. We have not identified any randomized studies comparing Internet and paper-and-pencil questionnaires in patient populations unselected with respect to Internet access. Therefore, we aimed to evaluate an Internet survey method in comparison to paper-and-pencil with respect to response rate and completeness of data in a randomized controlled design among women referred for mammography.

Methods

Participants were women referred for mammography from September 2004 to April 2005 in the Department of Radiology at the public hospital, Randers Regional Hospital. The municipality of Randers has around 62,000 inhabitants. Patients were referred by their family doctor. A consultant at the Department of Radiology assigned the referred patients to one of three categories: acute, subacute, or nonacute. Subsequently, a letter was sent to the woman informing her about the date, location, and other details of the mammography. The women were randomized to be invited to answer either an Internet version or a paper-and-pencil version of a questionnaire. We only invited women up to retirement age (67 years in Denmark) who did not have a history of breast cancer. Patients from all categories (acute, subacute, or nonacute) were invited until February 2005, after which only patients in the acute and subacute groups were invited to participate.

Nonrespondents in both groups were mailed a reminder after 10 days, provided the date of their mammography had not yet been reached. The reminder informed the woman that she was free to answer the opposite version of the originally requested questionnaire if she so desired. Only questionnaires filled in before the date of the mammography were included in the analysis. The procedure is outlined in Figure 1. There were no incentives to promote survey response.

Figure 1
Flow of participants through the randomized trial

The letter to women randomized to answer the paper-and-pencil version included a paper questionnaire and a prepaid return envelope, while the letter to women randomized to answer the Internet version included a guideline on how to answer the Web-based version. Access to the Internet questionnaire required entry of a unique five-letter username. No password was needed since the first letter in the username was a redundancy code. The layout of the Internet version was as close to the paper version as possible (see Figure 2 and Figure 3). In the Internet version, the participants were reminded of missing answers if they tried to leave a page incomplete. However, after pressing an “OK” button, they were allowed to continue even if there were still missing answers [15]. The questionnaire consisted of 17 pages and 119 items and included the Short Form-36 [16], the Multidimensional Fatigue Inventory-20 [17], and the Hospital Anxiety and Depression Scale [18]. Questions regarding social status, education level, occupation, and access to the Internet were also asked.
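A username whose first letter is a redundancy code can be validated without a password lookup: the check letter is recomputed from the remaining letters and compared. The paper does not describe the actual algorithm used, so the sketch below is a hypothetical mod-26 checksum scheme purely for illustration.

```python
# Hypothetical sketch of a five-letter access code whose first letter is a
# check letter derived from the remaining four, so mistyped codes can be
# rejected client- or server-side without a password. The mod-26 checksum
# shown here is an assumption; the study's actual scheme is not published.
import string

ALPHABET = string.ascii_uppercase

def check_letter(body: str) -> str:
    """Compute a check letter for a four-letter code body."""
    total = sum(ALPHABET.index(c) for c in body)
    return ALPHABET[total % 26]

def make_username(body: str) -> str:
    """Prefix the four-letter body with its check letter."""
    return check_letter(body) + body

def is_valid(username: str) -> bool:
    """Accept only five uppercase letters whose first letter checks out."""
    return (len(username) == 5
            and all(c in ALPHABET for c in username)
            and username[0] == check_letter(username[1:]))
```

A scheme like this rejects most single-character typos immediately, which matters when the code is typed from a paper letter rather than clicked from an email link.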

Figure 2
Screenshot of the Internet version of the questionnaire
Figure 3
Photograph of the paper-and-pencil version of the questionnaire

All respondents were interviewed by telephone 1 month after they had their mammogram. They were invited to join a follow-up study and were asked to select the version of questionnaire they preferred.

The sample size was calculated to provide a statistical power of at least 90% to detect a true difference in response rate of 15%. The actual power was 93.8%. Women had an equal probability of assignment to the two groups. The randomization code was developed using a computer random number generator. We tested the significance of categorical variables with the chi-square test and compared proportions using risk differences with 95% confidence intervals. Homogeneity across strata was tested with the Mantel-Haenszel test.
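The two main analyses described here, a risk difference with a Wald 95% confidence interval and a chi-square test on a 2x2 table, can be sketched in a few lines of pure Python. The counts below are hypothetical round numbers for illustration, not the study's data.

```python
# Sketch of the comparison of two response proportions: risk difference with
# a 95% Wald confidence interval, and a Pearson chi-square statistic for the
# corresponding 2x2 table. Counts are illustrative, not the study's data.
import math

def risk_difference_ci(r1, n1, r2, n2, z=1.96):
    """Risk difference p1 - p2 with a Wald 95% confidence interval."""
    p1, p2 = r1 / n1, r2 / n2
    rd = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return rd, rd - z * se, rd + z * se

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical example: 190/260 responders in one arm vs 47/260 in the other
rd, ci_lo, ci_hi = risk_difference_ci(190, 260, 47, 260)  # rd = 0.55
chi2 = chi_square_2x2(190, 70, 47, 213)
```

The chi-square statistic would then be referred to a chi-square distribution with one degree of freedom to obtain the P value.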

Results

The characteristics of the invited women are shown in Table 1. Approximately 80% of the women were between 30 and 59 years old. The distributions within the two randomized groups were similar with respect to age, place of residence, and category of referral.

Table 1
Characteristics of patients, by randomization group

The response rate before the reminder was 17.9% in the Internet group compared to 73.2% in the paper-and-pencil group, corresponding to a 55.3% difference in response rate in favor of the paper-and-pencil version (Table 2). The same tendency was found in all strata with respect to age, place of residence, and category of referral (see Table 2).

Table 2
Response rate before reminder, by randomization group

After the reminder, the response rate improved distinctly in the group originally randomized to the Internet (Table 3). Among the women assigned to the nonacute group, who had the longest interval before their mammogram, the response rate was even higher in the group randomized to the Internet version.

Table 3
Response rate after reminder, by randomization group

The completeness of answers in the two versions is summarized in Table 4. The Internet version produced significantly more complete questionnaires than the paper-and-pencil version. For the paper-and-pencil version, there was a tendency toward more incomplete scales the longer the scales were.

Table 4
Completeness of the three scales, by version

The scores for the eight subscales of Short Form-36 are displayed in Table 5. There were no statistically significant differences between the two versions.

Table 5
Scores for subscales of Short Form-36, by randomization group

During the telephone interview with the respondents 1 month after they had their mammogram, they were invited to participate in the follow-up part of the study. They were asked to select the version of future questionnaires they preferred. The majority (55.4%) preferred the paper-and-pencil version, while 32.4% preferred the Internet version and 17.1% declined further participation. Among the 46 respondents from the Internet group, 73.2% preferred to continue on the Internet compared to 17.1% who preferred to change to a paper-and-pencil version.

Access to the Internet, estimated from answers in the paper-and-pencil group, is displayed in Table 6.

Table 6
Internet access among the paper-and-pencil group

Discussion

We found an initial response rate of only 17.9% in the Internet group compared to 73.2% in the paper-and-pencil group. However, after a reminder, when the participants were free to choose between versions, the total response rate was similar in the two randomized groups. The quality of data regarding completeness was superior in the Internet version for all the involved scales. We did not identify any differences in Short Form-36 subscales. However, even in a randomized study, caution should be exercised when comparing the distribution of answers between the two groups since the distributions depend on differences in the two methods as well as selection bias, especially when the response rate in one of the groups is very low.

The population was unselected with respect to Internet access and experience. According to the 2005 Statistics Denmark survey, 77% of Danish women had access to the Internet [19]. Based on answers from the paper-and-pencil group, we estimate that 70% of the women in the present study had access to the Internet at home. Access was closely associated with level of education. The geographic area surrounding the public hospital includes rural locations as well as the fifth largest city in Denmark. We consider our sample representative of female patients in Denmark.

The most prominent weakness of the Internet version was a low response rate, and we could not identify any single determining factor. However, as expected, the response rate was highest in the age group with the greatest access to the Internet. After a reminder letter, which stated that participants were free to fill out their preferred version of the questionnaire, the total response rates were nearly the same. However, women in the acute and subacute groups had less time to complete the questionnaire before their mammogram, which in some cases precluded sending a reminder.

Response rates to Internet questionnaires reported in the literature vary considerably between studies [1,6-14]. Studies conducted in populations with known access to the Internet can be expected to achieve higher response rates than studies of populations without known access, such as the present study. However, differences in response rate may also be attributed to methodology and other characteristics of the population. A Swedish study compared the same paper-and-pencil questionnaire in two different versions with respect to ordering of questions and level of difficulty and found that the proportion of completers varied significantly [20]. It is plausible that populations of patients and general population samples may react differently to an invitation to complete a Web questionnaire about health-related issues.

The fact that only 17.1% of respondents in the Internet group preferred to shift to the paper-and-pencil version when asked to join the follow-up study indicates that Internet versions may be more feasible in follow-up studies. One advantage of the Internet version is a high degree of completeness, and the design of Internet questionnaires allows the researcher to compensate for human error among participants who enter inconsistent answers or accidentally skip an item or even a page.

At present, Internet questionnaires can hardly stand alone as the method of data collection in studies of patients. Access to the Internet still depends on socioeconomic factors, and results obtained solely from Internet users may be biased. The general population must become more familiar with the Internet before an online survey can be the first choice of researchers, although it is worthwhile considering within selected populations of patients as it saves resources and provides more complete answers. An Internet version may be combined with a traditional version, and it may be more feasible to offer Internet versions in follow-up studies.

Acknowledgments

This work was supported by a grant from the Danish Cancer Society.

Footnotes

Conflicts of Interest:

None declared.

References

1. Jones R, Pitt N. Health surveys in the workplace: comparison of postal, email and World Wide Web methods. Occup Med (Lond) 1999 Nov;49(8):556–8. doi: 10.1093/occmed/49.8.556. http://occmed.oxfordjournals.org/cgi/pmidlookup?view=long&pmid=10658310. [PubMed] [Cross Ref]
2. Fouladi Rachel T, Mccarthy Christopher J, Moller Naomi P. Paper-and-pencil or online? Evaluating mode effects on measures of emotional functioning and attachment. Assessment. 2002 Jun;9(2):204–15. doi: 10.1177/10791102009002011. [PubMed] [Cross Ref]
3. Bliven B D, Kaufman S E, Spertus J A. Electronic collection of health-related quality of life data: validity, time benefits, and patient preference. Qual Life Res. 2001;10(1):15–22. doi: 10.1023/A:1016740312904. [PubMed] [Cross Ref]
4. Davis R N. Web-based administration of a personality questionnaire: comparison with traditional methods. Behav Res Methods Instrum Comput. 1999 Nov;31(4):572–7. [PubMed]
5. Buchanan T, Smith J L. Using the Internet for psychological research: personality testing on the World Wide Web. Br J Psychol. 1999 Feb;90(Pt 1):125–44. doi: 10.1348/000712699161189. [PubMed] [Cross Ref]
6. Bälter Katarina Augustsson, Bälter Olle, Fondell Elinor, Lagerros Ylva Trolle. Web-based and mailed questionnaires: a comparison of response rates and compliance. Epidemiology. 2005 Jul;16(4):577–9. doi: 10.1097/01.ede.0000164553.16591.4b. [PubMed] [Cross Ref]
7. Ekman Alexandra, Dickman Paul W, Klint Asa, Weiderpass Elisabete, Litton Jan-Eric. Feasibility of using web-based questionnaires in large population-based epidemiological studies. Eur J Epidemiol. 2006;21(2):103–11. doi: 10.1007/s10654-005-6030-4. [PubMed] [Cross Ref]
8. Leece Pam, Bhandari Mohit, Sprague Sheila, Swiontkowski Marc F, Schemitsch Emil H, Tornetta Paul, Devereaux P J, Guyatt Gordon H. Internet versus mailed questionnaires: a randomized comparison (2) J Med Internet Res. 2004 Sep 24;6(3):e30. doi: 10.2196/jmir.6.3.e30. http://www.jmir.org/2004/3/e30/ [PMC free article] [PubMed] [Cross Ref]
9. Ritter Philip, Lorig Kate, Laurent Diana, Matthews Katy. Internet versus mailed questionnaires: a randomized comparison. J Med Internet Res. 2004 Sep 15;6(3):e29. doi: 10.2196/jmir.6.3.e29. http://www.jmir.org/2004/3/e29/ [PMC free article] [PubMed] [Cross Ref]
10. Boeckner Linda S, Pullen Carol H, Walker Susan Noble, Abbott Gerald W, Block Torin. Use and reliability of the World Wide Web version of the Block Health Habits and History Questionnaire with older rural women. J Nutr Educ Behav. 2002 Mar;34(Suppl 1):S20–4. doi: 10.1016/S1499-4046(06)60307-2. [PubMed] [Cross Ref]
11. Harewood G C, Yacavone R F, Locke G R, Wiersema M J. Prospective comparison of endoscopy patient satisfaction surveys: e-mail versus standard mail versus telephone. Am J Gastroenterol. 2001 Dec;96(12):3312–7. doi: 10.1111/j.1572-0241.2001.05331.x. [PubMed] [Cross Ref]
12. Pealer L N, Weiler R M, Pigg R M, Miller D, Dorman S M. The feasibility of a web-based surveillance system to collect health risk behavior data from college students. Health Educ Behav. 2001 Oct;28(5):547–59. doi: 10.1177/109019810102800503. [PubMed] [Cross Ref]
13. Paolo A M, Bonaminio G A, Gibson C, Partridge T, Kallail K. Response rate comparisons of e-mail- and mail-distributed student evaluations. Teach Learn Med. 2000;12(2):81–4. doi: 10.1207/S15328015TLM1202_4. [PubMed] [Cross Ref]
14. Truell Allen D, Bartlett James E, Alexander Melody W. Response rate, speed, and completeness: a comparison of Internet-based and mail surveys. Behav Res Methods Instrum Comput. 2002 Feb;34(1):46–9. [PubMed]
15. Example of the Web-based questionnaire. HvordanHarDuDet. [2007 Jun 1]. http://hvordanhardudet.dk/PHPscripts/testspm.htm.
16. Ware J E, Sherbourne C D. The MOS 36-item short-form health survey (SF-36). I. Conceptual framework and item selection. Med Care. 1992 Jun;30(6):473–83. doi: 10.1097/00005650-199206000-00002. [PubMed] [Cross Ref]
17. Smets E M, Garssen B, Bonke B, De Haes J C. The Multidimensional Fatigue Inventory (MFI) psychometric qualities of an instrument to assess fatigue. J Psychosom Res. 1995 Apr;39(3):315–25. doi: 10.1016/0022-3999(94)00125-O. [PubMed] [Cross Ref]
18. Zigmond A S, Snaith R P. The hospital anxiety and depression scale. Acta Psychiatr Scand. 1983 Jun;67(6):361–70. doi: 10.1111/j.1600-0447.1983.tb09716.x. [PubMed] [Cross Ref]
19. Access to the Internet 2005 [Befolkningens brug af Internet 2005]. Statistics Denmark. [2007 Jun 1]. http://www.dst.dk/upload/befolkningens_brug_af_internet_2005_002.pdf.
20. Ekman Alexandra, Klint Asa, Dickman Paul W, Adami Hans-Olov, Litton Jan-Eric. Optimizing the design of web-based questionnaires--experience from a population-based study among 50,000 women. Eur J Epidemiol. 2007;22(5):293–300. doi: 10.1007/s10654-006-9091-0. [PubMed] [Cross Ref]

Articles from Journal of Medical Internet Research are provided here courtesy of Gunther Eysenbach
