J Med Internet Res. 2002 Oct-Dec; 4(2): e13.
Published online Nov 22, 2002. doi: 10.2196/jmir.4.2.e13
PMCID: PMC1761932

Using the Internet for Surveys and Health Research

Reviewed by Bruce McKenzie

Abstract

This paper concerns the use of the Internet in the research process, from identifying research issues through qualitative research, through using the Web for surveys and clinical trials, to pre-publishing and publishing research results. Material published on the Internet may be a valuable resource for researchers desiring to understand people and the social and cultural contexts within which they live outside of experimental settings, with due emphasis on the interpretations, experiences, and views of `real world' people. Reviews of information posted by consumers on the Internet may help to identify health beliefs, common topics, motives, information, and emotional needs of patients, and point to areas where research is needed. The Internet can further be used for survey research. Internet-based surveys may be conducted by means of interactive interviews or by questionnaires designed for self-completion. Electronic one-to-one interviews can be conducted via e-mail or using chat rooms. Questionnaires can be administered by e-mail (e.g. using mailing lists), by posting to newsgroups, and on the Web using fill-in forms. In `open' Web-based surveys, selection bias occurs due to the non-representative nature of the Internet population, and (more importantly) through self-selection of participants, i.e. the non-representative nature of respondents, also called the `volunteer effect'. A synopsis of important techniques and tips for implementing Web-based surveys is given. Ethical issues involved in any type of online research are discussed. Internet addresses for finding methods and protocols are provided. The Web is also being used to assist in the identification and conduct of clinical trials. For example, the Web can be used by researchers doing a systematic review who are looking for unpublished trials. Finally, the Web is used for two distinct types of electronic publication. Type 1 publication is unrefereed publication of protocols or work in progress (a `post-publication' peer review process may take place), whereas Type 2 publication is peer-reviewed and will ordinarily take place in online journals.

Keywords: Clinical Trials, Confidentiality, Data Collection, Ethics, Research, Evaluation Studies, Informed Consent, Internet, Patient Selection, Qualitative Research, Research Design, Selection bias, Survey research, Research Subjects

Identifying issues for qualitative research

As the most comprehensive archive of written material representing our world and people's opinions, concerns, and desires (in industrialized countries), the Internet can be used to identify `issues' for qualitative (descriptive) research and to generate hypotheses. Material published on the Internet may be a valuable resource for researchers desiring to understand people and the social and cultural contexts within which they live--outside of experimental settings--with due emphasis on the interpretations, experiences, and views of `real world' people. Reviews of information posted by consumers on the Internet may help to identify health beliefs, common topics, motives, information, and emotional needs of patients, and point to areas where research is needed. Comparing recommendations found on the Web against evidence-based guidelines is one way to identify areas where there is a gap between opinion and evidence, or where there is a need for clinical innovation.

The accessibility of information for analysis and the anonymity of the Internet allow researchers to analyse text and narratives on Web sites, to use newsgroups as global focus groups, and to conduct interviews and surveys via e-mail, chat rooms, Web sites, or newsgroups. Topics suited to qualitative research include:

  • Analysis of interactive communications (e.g. e-mail).
  • Study of online communities (virtual self-help groups, newsgroups, mailing lists).
  • Investigation of communication processes between patients and professionals.
  • Study of consumer preferences, patient concerns, and information needs.
  • Exploration of the `epidemiology of health information' on the Web [1-2].

The Internet population is unrepresentative of the general population, restricting the use of the Internet for quantitative studies (i.e. studies focusing on measurement). Qualitative studies, however, do not require representative samples: `In qualitative research we are not interested in an average view of a patient population, but want to gain an in-depth understanding of the experience of particular individuals or groups; we should therefore deliberately seek out individuals or groups who fit the bill' [3]. Three different research methodologies for qualitative research on the Internet may be distinguished:

  • Passive analysis: For example, studying information on Web sites or interactions in newsgroups, mailing lists, and chat rooms--without researchers actively involving themselves.
  • Active analysis: Also called participant observation; the researcher participates in the communication process, often without disclosing his or her identity as a researcher. For example, he or she may ask questions in a patient discussion group, implying that he or she is a fellow patient. Such studies often involve elements of deception, unless the researcher is a sufferer him- or herself.
  • Interviews and surveys: See below.

Examples of these three types of qualitative research on the Internet are available elsewhere [1].

Using the Internet for surveys

Using the Internet for surveys requires an awareness of methodologies, selection bias, and technical issues.

Methodological issues

Internet-based surveys may be conducted by means of interactive interviews or by questionnaires designed for self-completion. Electronic one-to-one interviews can be conducted via e-mail or using chat rooms. Questionnaires can be administered by e-mail (e.g. using mailing lists), by posting to newsgroups, and on the Web using fill-in forms.

When e-mail is used to administer questionnaires, messages are usually sent to a selected group with a known number of participants, thus allowing calculation of the response rate. Surveys posted to newsgroups may request that the completed questionnaire is posted back to the researcher, but it is impossible to know who and how many people read the questionnaire. If Web-based forms are used, questionnaires can be placed in a password-protected area of a Web site (i.e. participation by invitation or registration only), or alternatively they may be open to the public (i.e. any site visitor can complete the survey). The latter option makes calculation of a response rate more difficult but not impossible: the number of people who access (without necessarily completing) the questionnaire is counted and used as the denominator. Web-based surveys have the advantage that the respondent can remain anonymous (as opposed to e-mail surveys, where the e-mail address of the responder is revealed). Furthermore, they are very convenient for the researcher, as responses can be directly stored in a database where they are immediately accessible for analysis.
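The response-rate calculation described above can be sketched in a few lines of Python. This is an illustrative example only: the log format and field names are invented, and a real survey would read these from server access logs. Unique visitors who accessed the form serve as the denominator, and unique visitors who submitted it as the numerator.

```python
# Hypothetical sketch: estimating the response rate of an `open' Web survey.
# The log entries and field names are invented for illustration.

def response_rate(log):
    """Unique visitors who viewed the questionnaire form the denominator;
    unique visitors who submitted it form the numerator."""
    viewers = {entry["visitor"] for entry in log}
    completers = {entry["visitor"] for entry in log if entry["submitted"]}
    return len(completers) / len(viewers)

log = [
    {"visitor": "a", "submitted": True},
    {"visitor": "b", "submitted": False},
    {"visitor": "c", "submitted": True},
    {"visitor": "c", "submitted": True},   # repeat visit counted once
    {"visitor": "d", "submitted": False},
]
print(response_rate(log))  # 2 completers / 4 viewers = 0.5
```

Counting unique visitors rather than raw page hits matters here: repeat views by the same person would otherwise inflate the denominator.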

Electronic interviews and surveys (`e-surveys') are emerging scientific research methodologies, pioneered by communication scientists, sociologists, and psychologists, although their use for health-related research is still in its infancy [4-10]. Examples of health-related research include:

  • A Web-based survey on the effects of ulcerative colitis on quality of life [11].
  • Collection of clinical data from atopy patients [12].
  • A Web-based survey looking at complementary and alternative medicine use by patients with inflammatory bowel disease and Internet access [13].
  • A survey of dentists regarding the usefulness of the Internet in supporting patient care [14,15].

E-surveys may be part of a qualitative research process, but results can be analysed quantitatively as long as researchers are aware of potential bias (see below). In addition to gathering data, the Internet may also be used in the course of developing questionnaires, as it allows rapid prototyping and pilot testing of instruments, e.g. to evaluate the effect of framing the questions differently [16].

Several studies have checked the validity of Web-based surveys by comparing the results of studies conducted on the Web with identical studies in the real world. These seem to suggest that the validity and reliability of data obtained online are comparable to those obtained by classical methods [4,5,17-19]. However, issues of generalizability (mainly due to selection bias, discussed in detail below) remain important considerations, and the researcher should select his or her research question and interpret the results with care. The benefits and problems of Web-based surveys have been summarized by Wyatt, who suggests guidelines for when they may be appropriate (see Box 1) [20].

Guidelines for Web-based surveys

Scenarios that may be suitable for a Web-based survey

Respondent features:

  • Respondents are already avid Internet users; e-mail addresses known for reminder messages.
  • Respondents are enthusiastic form fillers; will not require monetary incentives.
  • Need for respondents covering a wide geographical area (e.g. rare clinical specialties, diseases).
  • Respondents are known to match non-respondents and even non-Internet users on key variables.

Survey features:

  • Need for complex branching, interactive questionnaire or multimedia as part of the survey instrument.
  • Survey content will evolve fast (e.g. Delphi method surveys use repeating rounds of revised questionnaires delivered over a short period, incorporating aggregate results from previous rounds until convergence is achieved).
  • Intent is to document bizarre, rare phenomena whose mere occurrence is of interest.
  • No need for representative results: collecting ideas vs. hypothesis testing.

Investigator features:

  • Limited budget for mailing and data processing, but good in-house Web skills.
  • Precautions can be taken against multiple responses by same individual, password sharing.
  • Web survey forms have been piloted with representative participants and demonstrate acceptable validity and reliability with most platform, browser, and Internet access provider combinations.
  • Data is required fast in a readily analysed form.

Scenarios that are unsuitable for a Web-based survey

Respondent features:

  • Target group is under-represented on Internet; e.g. the underprivileged, elderly people.
  • Target group is concerned, however unreasonably, about privacy aspects.
  • Target group requires substantial incentives to complete the survey.
  • Need for a representative sample.

Survey features:

  • Need for very accurate timing data on participants (inaccuracies in the range of seconds are added due to network transmission times, unless JavaScript or Java applets are used; see Glossary) or observational data on participants.
  • An existing paper instrument has been carefully validated on target group.
  • Need to capture qualitative data or observations about participants.
  • Wish to reach the same group of participants in the same way months or years later.

Investigator features:

  • Limited in-house Web or Java expertise but existing desktop publishing and mailing facility.

Selection bias

In `open' surveys conducted via the Internet where Web users, newsgroup readers, or mailing list subscribers are invited to participate by completing a questionnaire, selection bias is a major factor limiting the generalizability (external validity) of results. Selection bias occurs due to:

  • The non-representative nature of the Internet population.
  • The self-selection of participants, i.e. the non-representative nature of respondents, also called the `volunteer effect' [21].

The non-representative nature of Internet demographics was briefly considered earlier. Considering whether the topic chosen for study is suitable for the Internet population is the first and probably the most important step in minimizing bias, thus maximizing response rates and increasing the external validity of the results [20]. For example, targeting elderly homeless alcoholics is unsuitable for an Internet survey and the results are likely to be heavily skewed by hoax responses.

Self-selection bias originates from the fact that people are more likely to respond to questionnaires if they see items which interest them, e.g. because they are affected by the items asked about, or because they are attracted by the incentives offered for participating. As people who respond almost certainly have different characteristics than those who do not, the results are likely to be biased. This kind of selection bias is more serious than the bias arising from the non-representative nature of the population, because the researcher deals with a myriad of unknown factors and has little opportunity to interpret his or her results accordingly. Such bias may be exacerbated by loaded incentives (e.g. typical `male' incentives such as computer equipment). Evidence suggests women are generally more interested in health topics and exhibit more active information-seeking behaviour [22], so are more likely to volunteer participation in health questionnaires. For Web surveys, the potential for self-selection bias can be estimated by measuring the response rate, expressed as the number of people completing the questionnaire divided by those who viewed it (cf. the participation rate, expressed as the number of site visitors viewing the questionnaire divided by the total number of site visitors).
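The distinction between the two rates can be made concrete with a small numerical sketch (the visitor counts below are invented for illustration, not taken from any study):

```python
# Invented figures illustrating the two rates distinguished in the text.
site_visitors = 1000          # all visitors to the site
questionnaire_viewers = 250   # visitors who opened the questionnaire
completed = 100               # visitors who submitted it

# Participation rate: viewers of the questionnaire / all site visitors.
participation_rate = questionnaire_viewers / site_visitors   # 0.25

# Response rate: completers / viewers of the questionnaire.
response_rate = completed / questionnaire_viewers            # 0.4
```

A low response rate relative to the participation rate would suggest that many who saw the questionnaire chose not to answer, hinting at self-selection among respondents.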

Technical issues

Although a detailed analysis is beyond the scope of this chapter, a synopsis of important techniques and tips for implementing Web-based surveys provides some insight into the difficulties faced by survey designers (see Box 2).

Technical issues in implementing Web-based surveys

Use of `cookies'

Cookies can assign a unique identifier to every questionnaire viewer, useful for determining response and participation rates, and for filtering out multiple responses by the same person. As cookies may be regarded with suspicion, we recommend that researchers openly state that cookies will be sent (and the reasons for this); set the cookie to expire on the day that data collection ceases; and publish a privacy policy.
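A minimal sketch of such a cookie, using Python's standard http.cookies module, might look as follows. The cookie name, path, and expiry date are assumptions for illustration; the expiry is set to the (hypothetical) day data collection ceases, as recommended above.

```python
# Sketch: issuing a unique survey identifier as a cookie that expires
# when data collection ends. All names and dates are illustrative.
from http.cookies import SimpleCookie
import uuid

cookie = SimpleCookie()
cookie["survey_id"] = uuid.uuid4().hex  # unique identifier per questionnaire viewer
cookie["survey_id"]["expires"] = "Fri, 31 Dec 2002 23:59:59 GMT"  # end of data collection
cookie["survey_id"]["path"] = "/survey"

# The header a server-side script would emit with the questionnaire page:
print(cookie.output())
```

The identifier returned on subsequent requests lets the researcher link form views to submissions (for response rates) and detect repeat submissions from the same browser.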

Measuring response time

The time needed to complete a questionnaire can be readily calculated by subtracting the time a form was called up by the browser from the time it was submitted using an automatic time-stamp. The response time may be used to exclude respondents who fill in the questionnaire too quickly: this may identify hoax responses, where respondents don't read the questions.
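The time-stamp subtraction above can be sketched as a simple server-side check. The 30-second threshold is an arbitrary assumption for illustration; an appropriate minimum would depend on the length of the questionnaire.

```python
# Sketch: flagging suspiciously fast submissions by comparing the automatic
# time-stamps of form delivery and form submission (timestamps in seconds).
MIN_SECONDS = 30  # assumed minimum plausible completion time

def is_plausible(served_at, submitted_at, min_seconds=MIN_SECONDS):
    """Response time = submission time minus delivery time; responses
    faster than the threshold may be hoaxes where the respondent did
    not actually read the questions."""
    return (submitted_at - served_at) >= min_seconds
```

Under these assumptions a 5-second completion would be excluded, while a 2-minute one would be kept.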

Avoiding missing data

Forms can be configured to automatically reject incomplete questionnaires and point out missing or contradictory items. Checks can be made on the client (p. 9) prior to submission, or following submission to the server (where incomplete responses can also be analysed, e.g. during a questionnaire pilot).
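A server-side completeness check of the kind described might be sketched as follows (the field names are invented for the example; a real questionnaire would define its own required items):

```python
# Sketch: server-side check for missing required items after submission.
# Field names are hypothetical.
REQUIRED = ["age", "sex", "diagnosis"]

def missing_items(form):
    """Return the required items the respondent left blank, so the
    questionnaire can be rejected and the gaps pointed out."""
    return [field for field in REQUIRED if not form.get(field, "").strip()]

form = {"age": "34", "sex": "", "diagnosis": "colitis"}
print(missing_items(form))  # ['sex']
```

Running the same check on the server, rather than only on the client, also allows incomplete responses to be logged and analysed during a pilot.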

Maximizing response rate

The number of contacts, personalized contacts, and contact with participants before the actual survey are the factors most associated with higher response rates in Web surveys [23]. Incentives increase the risk of selection bias (see text), but less so if cash is offered. Perhaps the best incentive (and the easiest to deliver via the Internet) is the promise of survey results or personalized answers (e.g. a score). The option to complete questionnaires anonymously avoids wariness associated with requests for personal information (e.g. an e-mail address), but increases the risk of hoax responses. Researchers should be open about who is behind the study, what the aim is, and provide opportunities for feedback. Although postal surveys are superior to e-mail surveys with regard to response rate, online surveys are much cheaper [24,25]. Schleyer [15] estimated that the cost of their Web-based survey was 38 percent less than that of an equivalent mail survey and presented a general formula for calculating break-even points between electronic and hard-copy surveys. Jones gave figures of 92 p per reply for postal surveys, 35 p for e-mail, and 41 p for the Web [24].
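Schleyer's actual break-even formula is not reproduced in this chapter, but the underlying idea can be sketched generically: a Web survey typically has a higher fixed setup cost but a lower marginal cost per reply than a postal survey, so there is a number of replies beyond which it becomes cheaper. All cost figures below are invented (the per-reply costs loosely echo Jones's pence figures).

```python
# Generic break-even sketch; all figures are illustrative assumptions.
web_setup = 1500.00     # assumed fixed cost of building the Web survey
web_per_reply = 0.41    # assumed marginal cost per Web reply
post_per_reply = 0.92   # assumed cost per postal reply

# Break-even number of replies n satisfies:
#   web_setup + n * web_per_reply = n * post_per_reply
n_break_even = web_setup / (post_per_reply - web_per_reply)
print(round(n_break_even))  # ~2941 replies under these assumptions
```

Below that volume the postal survey would be cheaper; above it the Web survey wins, with the advantage growing as the sample size increases.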

Randomizing items

Scripting languages may be used to build dynamic questionnaires (as opposed to static forms) that look different for certain user groups or which randomize certain aspects of the questionnaire (e.g. the order of the items). This can be useful to exclude possible systematic influences of the order of the items upon responses.
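As a minimal sketch of per-respondent item randomization (in Python rather than a browser scripting language; item names and the respondent identifier are invented), seeding the random generator with the respondent's identifier makes the order reproducible if the same person reloads the page:

```python
# Sketch: randomizing item order per respondent, reproducibly.
import random

ITEMS = ["item_1", "item_2", "item_3", "item_4"]  # placeholder questions

def randomized_order(respondent_id):
    """Return the questionnaire items in an order that is random across
    respondents but stable for any one respondent."""
    order = ITEMS[:]
    random.Random(respondent_id).shuffle(order)  # deterministic per respondent
    return order

print(randomized_order("respondent-42"))
```

Averaged over many respondents, any systematic effect of item position on responses is thereby spread evenly across the items.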

Ethical issues

The ethical issues involved in any type of online research should not be forgotten [1,26-31]. These include informed consent as a basic ethical tenet of scientific research on human populations [32], protection of privacy, and avoiding psychological harm.

In qualitative research on the Web, informed consent is required when:

  • Data are collected from research participants through any form of communication, interaction, or intervention.
  • Behaviour of research participants occurs in a private context where an individual can reasonably expect that no observation or reporting is taking place, except when researchers do research `in public places or use publicly available information about individuals (e.g. naturalistic observations in public places, analysis of public records, or archival research)' [33].

The question therefore arises of whether researchers analysing newsgroup postings enter a `public place', or whether the space they invade is perceived as private. In the context of research, the expectation of the individual (whether he/she can reasonably expect that no observation is taking place) is crucial. Different Internet services have different levels of perceived privacy (in decreasing order of privacy: private e-mail; chat rooms; mailing lists; newsgroups; Web sites). The perceived level of privacy is a function of the number of participants, but also depends on other factors such as group norms established by the community to be studied. For example, in a controversial paper, Finn studied a virtual self-support group where the moderator was actively discouraging interested professionals who were not sexual abuse survivors from joining the group [34]. In those cases, obtaining informed consent (or seeking an ethical waiver, if the research could not practicably be carried out were informed consent to be required) is mandatory.

In practice, obtaining informed consent is difficult, especially for passive research methods. Researchers usually cannot post an announcement to a mailing list or newsgroup saying that it will be monitored and analysed for the next few months: such an announcement may greatly influence or even spoil the results, and the mere posting of the request may disrupt the community and therefore itself be considered unethical. Researchers should therefore first obtain consent from a group moderator, in order to explore whether even a request for permission would be felt to be disruptive to the group process. If the moderator or person responsible for the list has no objections, one may then post a message to the newsgroup or mailing list explaining the purpose of the research and that one will observe the community, assuring all participants of anonymity, and giving them the opportunity to withdraw from the newsgroup or mailing list or to exclude themselves from the study by writing to the researcher. The fundamental problem is that this may influence the communication process and may even destroy the community; moreover, participants who join the group later need to receive the same information. An alternative is to analyse the communication retrospectively and to write individual e-mails to all participants whose comments are to be analysed or quoted, asking for permission to use them; this technique has been used by Sharf [35].

In any case, researchers should make themselves familiar with the virtual community they are approaching; i.e. read the messages in a newsgroup for some time (`lurking'). Under no circumstances should researchers blindly spam (p. 31) or cross-post requests for research participation to various newsgroups.

Informed consent may also play a role when researchers report aggregate (collated and hence anonymous) data on usage patterns, such as a log-file analysis (reporting data on what Web sites have been accessed by a population). Crucial here is an appropriate privacy statement stating that these data may be analysed and reported in aggregate [28]. Note that aggregate data are exempt from the registration requirements of the UK's Data Protection Act of 1998.

In conducting surveys researchers may obtain informed consent by declaring the purpose of the study; disclosing which institutions are behind the study; explaining how privacy will be assured; and detailing with whom data will be shared and how it will be reported, before participants complete the questionnaire.

When reporting results, it is obvious that the total anonymity of research participants needs to be maintained. Researchers have to keep in mind that, by the very process of quoting the exact words of a newsgroup or mailing list participant, the confidentiality of the participant may already be broken as Internet search engines may be able to retrieve the original message, including the e-mail address of the sender. It is essential, therefore, to ask participants whether they agree to be quoted whenever there may be a retrievable archive, pointing out the risk that they may be identifiable. Problems can also potentially arise from just citing the name of the community (e.g. of a newsgroup), which may damage the community being studied.

Finding methods, protocols, and instruments

For laboratory `bench work', researchers often need a protocol for a specific assay method. In addition to the possibility of searching literature databases, there are also specialized services on the Web that can assist in this research, such as MethodsFinder and the `Technical tips online' database at BioMedNet:

Sometimes asking a specific question on the right newsgroup or mailing list is also very effective. Clinical researchers may be more interested in instruments to measure patient outcomes. An excellent guide to selecting quality-of-life instruments is the Quality of Life Instruments Database at the Mapi Research Institute: http://www.qolid.org/

Online statistical analysis tools are available at the Simple Interactive Statistical Analysis (SISA) Web site, while background information is available within the online book Statistics at square one:

Protocols of clinical trials, which may be useful for researchers developing their own protocols, can be found in some of the clinical trial databases available on the Web, as described below.

Clinical trials and the Web

The Web is being used to assist in the identification and conduct of clinical trials.

Identifying trials

To prevent unintended duplication of clinical research, detect underreporting of research, and ease the work of systematic reviewing, it has been suggested that we should prospectively register clinical trials [36-39]. It is, however, unlikely that there will ever be one complete centralized multinational database. Instead, multiple resources set up by numerous different organizations will exist [40]. Internet technology will play a central role in linking these databases and making this information available to researchers and patients. Some scenarios in which a search of trial databases may be useful:

  • A researcher wants to conduct a randomized controlled trial and wants to know whether anyone else is already running one on the same topic.
  • A physician has a patient who is asking about available trials.
  • A patient is looking for ongoing trials.
  • A researcher is looking for possible participants for his or her trial.
  • A researcher doing a systematic review is looking for unpublished trials.

Information about ongoing and completed clinical trials is increasingly being published on the Internet, and searches on the Web may be a useful means of complementing traditional bibliographic searches if authors of systematic reviews wish to find ongoing or unpublished trials [41].

Researchers use their personal or department home pages to announce their interest in a certain research area or to recruit patients [42]. Journals like The Lancet have begun to publish research protocols on their Web site [43], and more and more researchers will also publish `pre-prints' (p. 239) of their findings on the Web [44].

Consumers and patient organizations also have an interest in disseminating information about ongoing trials; e.g. the National Alliance of Breast Cancer Organizations: http://www.nabco.org/

Government and funding agencies react to this need by establishing trial databases for consumers; e.g. the US National Institutes of Health searchable database [45]: http://ClinicalTrials.gov

Commercial enterprises also help researchers to recruit patients, or help patients to find clinical trials. For example:

Pharmaceutical companies and industry associations have likewise begun to recognize that openness and access to information on clinical trials and new drug developments can improve patient care and are part of social responsibility [46]. For example:

Finally, information or databases on ongoing clinical trials can often also be found on disease-specific sites. For example:

Conducting trials on the Web

The Web is increasingly being used in the course of conducting large-scale multi-centre clinical trials (e.g. for remote randomization and data entry), and in the distribution of information on trial progress or protocols [47,48]. Trial centres may enter patient data using Java applets (see Glossary) that encrypt data and send it to the data centre via the Internet [49-52], where the data are stored and randomized, returning for example a study number and randomization information.
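What such a remote-randomization service returns might be sketched as follows. This is an illustrative model only (permuted-block randomization with two arms; all names are invented), not a description of any of the cited systems.

```python
# Sketch: a central randomization service allocating patients to two arms
# using permuted blocks, returning a study number and arm assignment.
import random

def make_block(block_size=4):
    """One permuted block containing equal numbers of each arm."""
    block = ["A", "B"] * (block_size // 2)
    random.shuffle(block)
    return block

class Randomizer:
    def __init__(self):
        self.block = []
        self.counter = 0

    def enrol(self):
        """Register one patient; reply with study number and allocation."""
        if not self.block:            # start a new permuted block when empty
            self.block = make_block()
        self.counter += 1
        return {"study_number": self.counter, "arm": self.block.pop()}
```

Keeping the allocation sequence on a central server (rather than at each trial centre) is what preserves allocation concealment in a multi-centre trial.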

Pre-publishing and publishing research

Traditional publication is a well-defined event, whereas `publication' in the electronic age is much more of a continuum [53], reflecting and occurring during the entire research process from hypothesis formulation to data gathering, interpretation, and the presentation and discussion of the final results. In order to distinguish online collaborative `work in progress' from `final' peer-reviewed publication we may term the former `Type 1' and the latter `Type 2' electronic publication [54]. Here, peer review is not the distinguishing characteristic: in Type 1 publication a `post-publication' peer review process takes place. Type 2 publication will ordinarily take place in online journals. The following scenarios illustrate how researchers might use Type 1 electronic publication on the Internet:

  • Sending and discussing preliminary results on mailing lists.
  • Publishing drafts of scientific papers on pre-print/e-print sites (p. 239) in order to solicit comments and to improve the manuscript.
  • Publishing data and information in databases; e.g. nucleotide sequences in the EMBL/Genbank databases.
  • Publishing clinical trial protocols and raw data in a `trial bank' [55].

Current awareness services

Electronic editions of paper journals and `stand alone' e-journals typically offer subscriptions to `TOC alerts', where users receive a table of contents by e-mail as soon as a new issue appears. The more sophisticated systems allow users to specify their interests using a controlled vocabulary, enabling the system to screen each newly published article for certain keywords or citations. Examples of current awareness services include:

Acknowledgments

This paper was originally published as a book chapter, in: Bruce C. McKenzie (ed.). Medicine and the Internet, Third Edition. Oxford: Oxford University Press, 2002. URL: http://www.oup.co.uk/isbn/0-19-851063-2 Reprinted with kind permission of the publisher.

Footnotes

Conflicts of Interest:

None declared.

References

1. Eysenbach G, Till J E. Ethical issues in qualitative research on internet communities. BMJ. 2001 Nov 10;323(7321):1103–5. http://bmj.com/cgi/pmidlookup?view=long&pmid=11701577. [PMC free article] [PubMed]
2. Eysenbach Gunther, Powell John, Kuss Oliver, Sa Eun-Ryoung. Empirical studies assessing the quality of health information for consumers on the world wide web: a systematic review. JAMA. 2002 May 22;287(20):2691–700. doi: 10.1001/jama.287.20.2691.jrv10005 [PubMed] [Cross Ref]
3. Greenhalgh T, Taylor R. Papers that go beyond numbers (qualitative research) BMJ. 1997 Sep 20;315(7110):740–3. http://bmj.com/cgi/pmidlookup?view=long&pmid=9314762. [PMC free article] [PubMed]
4. Buchanan T, Smith J L. Using the Internet for psychological research: personality testing on the World Wide Web. Br J Psychol. 1999 Feb;90 ( Pt 1)(1):125–44. doi: 10.1348/000712699161189. [PubMed] [Cross Ref]
5. Buchanan T, Smith JL. Research on the Internet: validation of a World-Wide Web mediated personality scale. Behavior Research Methods, Instruments, & Computers. 1999;31:565–71. [PubMed]
6. Schmidt WC. World-Wide Web survey research: benefits, potential problems, and solutions. Behavior Research Methods, Instruments, & Computers. 1997;29:274–9.
7. Pealer LN, Weiler RM. Web-based health survey research: a primer. American Journal of Health Behavior. 2000;24:69–72.
8. Zhang Y. Using the Internet for survey research: a case study. Journal of the American Society for Information Science. 2000;51(1):57–68. doi: 10.1002/(SICI)1097-4571(2000)51:1<57::AID-ASI9>3.0.CO;2-W. [Cross Ref]
9. Lazar J, Preece J. Designing and implementing Web-based surveys. Journal of Computer Information Systems. 1999;39:63–7.
10. Kaye BK, Johnson TJ. Research methodology: Taming the cyber frontier: techniques for improving online surveys. Social Science Computer Review. 1999;17:323–37.
11. Soetikno R M, Mrad R, Pao V, Lenert L A. Quality-of-life research on the Internet: feasibility and potential biases in patients with ulcerative colitis. J Am Med Inform Assoc. 1997;4(6):426–35. [PMC free article] [PubMed]
12. Eysenbach G, Diepgen T L. Epidemiological data can be gathered with world wide web. BMJ. 1998 Jan 3;316(7124):72. http://bmj.com/cgi/pmidlookup?view=long&pmid=9451290. [PMC free article] [PubMed]
13. Hilsden R J, Meddings J B, Verhoef M J. Complementary and alternative medicine use by patients with inflammatory bowel disease: An Internet survey. Can J Gastroenterol. 1999 May;13(4):327–32. [PubMed]
14. Schleyer TK, Forrest JL, Kenney R, Dodell DS, Dovgy NA. Is the Internet useful for clinical practice? Journal of the American Dental Association. 1999;130:1501–11. [PubMed]
15. Schleyer T K, Forrest J L. Methods for the design and administration of web-based surveys. J Am Med Inform Assoc. 2000;7(4):416–25. [PMC free article] [PubMed]
16. Suchard MA, Adamson S, Kennedy S. Netpoints: piloting patient attitudinal surveys on the web. British Medical Journal. 1997;315:529.
17. Nathanson AT, Reinert SE. Windsurfing injuries: results of a paper- and Internet-based survey. Wilderness & Environmental Medicine. 1999;10:218–25. [PubMed]
18. Senior C, Phillips ML, Barnes J, David AS. An investigation into the perception of dominance from schematic faces: a study using the World-Wide Web. Behavior Research Methods, Instruments, & Computers. 1999;31:341–6. [PubMed]
19. Krantz JH, Ballard J, Scher J. Comparing the results of laboratory and World-Wide Web samples on the determinants of female attractiveness. Behavior Research Methods, Instruments, & Computers. 1997;29:264–9.
20. Wyatt JC. When to use web-based surveys. J Am Med Inform Assoc. 2000;7(4):426–9.
21. Friedman CP, Wyatt JC, Smith AC, Kaplan B. Evaluation Methods in Medical Informatics. New York: Springer; 1997.
22. Fox S, Rainie L. The online health care revolution: how the Web helps Americans take better care of themselves. Washington: The Pew Internet & American Life Project; 2000. [2001 Sep 20]. http://www.pewinternet.org/reports/toc.asp?Report=26.
23. Cook C, Heath F, Thompson RL. A meta-analysis of response rates in Web- or Internet-based surveys. Educational and Psychological Measurement. 2000;60(6):821–36. doi: 10.1177/00131640021970934.
24. Jones R, Pitt N. Health surveys in the workplace: comparison of postal, e-mail and World Wide Web methods. Occupational Medicine (London). 1999;49:556–8.
25. Mavis BE, Brocato JJ. Postal surveys versus electronic mail surveys: the tortoise and the hare revisited. Evaluation & the Health Professions. 1998;21:395–408.
26. Polzer JC. Using the Internet to conduct qualitative health research: methodological and ethical issues [dissertation]. University of Toronto; 1998.
27. Cho H, LaRose R. Privacy issues in Internet surveys. Social Science Computer Review. 1999;17:421–34.
28. Thomas J. The ethics of Carnegie Mellon's 'cyber-porn' study. 1995. [2001 Jan 12]. http://sun.soci.niu.edu/~jthomas/ethics.cmu.
29. Till JE. Research ethics: Internet-based research. Part 1: on-line survey research. 1997. [2001 Jan 12]. http://members.tripod.com/~ca916/index-3.html.
30. King SA. Researching Internet communities: proposed ethical guidelines for the reporting of results. The Information Society. 1996;12:119–28.
31. Karlinsky H. Internet survey research and consent. M.D. Computing. 1998;15:285.
32. World Medical Association. Declaration of Helsinki: ethical principles for medical research involving human subjects (as amended 2000 Oct). 2000. [2001 Jan 12]. http://www.wma.net/e/policy/17-c_e.html.
33. American Sociological Association. Code of Ethics. 1997. [2001 Jan 12]. http://www.asanet.org/members/ecoderev.html.
34. Finn J. An exploration of helping processes in an online self-help group focusing on issues of disability. Health and Social Work. 1999;24:220–31.
35. Sharf BF. Communicating breast cancer on-line: support and empowerment on the Internet. Women and Health. 1997;26:65–84.
36. Simes RJ. Publication bias: the case for an international registry of clinical trials. J Clin Oncol. 1986 Oct;4(10):1529–41.
37. Chalmers I, Dickersin K. Getting to grips with Archie Cochrane's agenda. BMJ. 1992 Oct 3;305(6857):786–8.
38. Chalmers I, Gray M, Sheldon T. Handling scientific fraud: prospective registration of health care research would help. BMJ. 1995 Jul 22;311(6999):262.
39. Horton R, Smith R. Time to register randomised trials: the case is now unanswerable. BMJ. 1999 Oct 2;319(7214):865–6. http://bmj.com/cgi/pmidlookup?view=long&pmid=10506022.
40. Tonks A. Registering clinical trials. BMJ. 1999 Dec 11;319(7224):1565–8. http://bmj.com/cgi/pmidlookup?view=long&pmid=10591727.
41. Eysenbach G, Tuische J, Diepgen TL. Evaluation of the usefulness of Internet searches to identify unpublished clinical trials for systematic reviews. Med Inform Internet Med. 2001;26(3):203–18. doi: 10.1080/14639230110075459.
42. Wilmoth MC. Computer networks as a source of research subjects. West J Nurs Res. 1995 Jun;17(3):335–8.
43. Chalmers I, Altman DG. How can medical journals help prevent poor medical research? Some opportunities presented by electronic publishing. Lancet. 1999 Feb 6;353(9151):490–3. doi: 10.1016/S0140-6736(98)07618-1.
44. Delamothe T, Smith R, Keller MA, Sack J, Witscher B. Netprints: the next phase in the evolution of biomedical publishing. BMJ. 1999 Dec 11;319(7224):1515–6. http://bmj.com/cgi/pmidlookup?view=long&pmid=10591693.
45. McCray AT, Ide NC. Design and implementation of a national clinical trials registry. J Am Med Inform Assoc. 2000;7(3):313–23.
46. Sykes R. Being a modern pharmaceutical company: involves making information available on clinical trial programmes. BMJ. 1998 Oct 31;317(7167):1172. http://bmj.com/cgi/pmidlookup?view=long&pmid=9794848.
47. Santoro E, Nicolis E, Franzosi MG, Tognoni G. Internet for clinical trials: past, present, and future. Control Clin Trials. 1999 Apr;20(2):194–201. doi: 10.1016/S0197-2456(98)00060-9.
48. Kelly MA, Oldham J. The Internet and randomised controlled trials. International Journal of Medical Informatics. 1999;47(1-2):91–9. doi: 10.1016/S1386-5056(97)00091-9.
49. Sippel H, Eich HP, Ohmann C. Data collection in multi-center clinical trials via Internet: a generic system in Java. Medinfo. 1998;9 Pt 1:93–7.
50. Sippel H, Ohmann C. A web-based data collection system for clinical studies using Java. Medical Informatics (London). 1998;23:223–9.
51. Eich HP, Ohmann C. Generalisation and extension of a web-based data collection system for clinical studies using Java and CORBA. Studies in Health Technology and Informatics. 1999;68:568–72.
52. Keim E, Sippel H, Eich HP, Ohmann C. Collection of data in clinical studies via Internet. Studies in Health Technology and Informatics. 1997;43(A):57–60.
53. Smith R. What is publication? BMJ. 1999 Jan 16;318(7177):142. http://bmj.com/cgi/pmidlookup?view=long&pmid=9888887.
54. Eysenbach G. Challenges and changing roles for medical journals in the cyberspace age: electronic pre-prints and e-papers. J Med Internet Res. 1999 Dec;1(2):e9. doi: 10.2196/jmir.1.2.e9. http://www.jmir.org/1999/2/e9/
55. Sim I, Owens DK, Lavori PW, Rennels GD. Electronic trial banks: a complementary method for reporting randomized trials. Med Decis Making. 2000;20(4):440–50.
