BMJ. Mar 9, 2002; 324(7337): 569–573.
PMCID: PMC78993

Examination of instruments used to rate quality of health information on the internet: chronicle of a voyage with an unclear destination

Anna Gagliardi, senior research associatea and Alejandro R Jadad, professorb

Abstract

Objective

This study updates work published in 1998, which found that of 47 rating instruments appearing on websites offering health information, 14 described how they were developed, five provided instructions for use, and none reported the interobserver reliability and construct validity of the measurements.

Design

All rating instrument sites noted in the original study were visited to ascertain whether they were still operating. New rating instruments were identified by duplicating and enhancing the comprehensive search of the internet and the medical and information science literature used in the previous study. Eligible instruments were evaluated as in the original study.

Results

98 instruments used to assess the quality of websites in the past five years were identified. Many of the rating instruments identified in the original study were no longer available. Of 51 newly identified rating instruments, only five provided some information by which they could be evaluated. As with the six sites identified in the original study that remained available, none of these five instruments seemed to have been validated.

Conclusions

Many incompletely developed rating instruments continue to appear on websites providing health information, even when the organisations that gave rise to those instruments no longer exist. Many researchers, organisations, and website developers are exploring alternative ways of helping people to find and use high quality information available on the internet. Whether they are needed or sustainable and whether they make a difference remain to be shown.

What is already known on this topic

The rapid growth of healthcare websites in the 1990s was accompanied by initiatives to rate their quality, including award-like symbols on websites

A systematic review of the reliability and validity of such rating instruments, published in 1998, showed that they were incompletely developed

What this study adds

Few of the rating instruments identified in 1998 remain functional; 51 new instruments were identified

Of the 51 newly identified instruments, 11 were not functional, 35 were available but provided no information, and five provided information but were not validated

Many researchers, organisations, and website developers are exploring alternative ways of helping people to find high quality information on the internet

Introduction

The quality of health information on the internet became a subject of interest to healthcare professionals, information specialists, and consumers of health care in the mid-1990s. Along with the rapid growth of healthcare websites came a number of initiatives, both academic and commercial, that generated criteria by which to ensure, judge, or denote the quality of websites offering health information. Some of these rating instruments took the form of logos resembling “awards” or “seals of approval” and appeared prominently on the websites on which they were bestowed.

In 1997 we undertook a review of “award-like” internet rating instruments in an effort to assess their reliability and validity.1 We hypothesised that if the rating instruments were flawed they might influence healthcare providers or consumers relying on them as indicators of accurate information. Instruments were eligible for review if they had been used at least once to categorise a website offering health information and revealed the rating criteria by which they did so. The rating instruments were evaluated according to, firstly, a system for judging the rigour of the development of tools to assess the quality of randomised controlled trials2 and, secondly, whether their criteria included three indicators suggested as appropriate for judging the quality of website content.3,4 These indicators were authorship (information about authors and their contributions, affiliations, and relevant credentials), attribution (listing of references or sources of content), and disclosure (a description of website ownership, sponsorship, underwriting, commercial funding arrangements, or potential conflicts of interest). These criteria were selected for use in the original study because they could be rated objectively.

Our original study found that of 47 rating instruments identified, 14 described how they were developed, five provided instructions for use, and none reported the interobserver reliability and construct validity of the measurements. The review showed that many incompletely developed instruments were being used to evaluate or draw attention to health information on the internet.

The purpose of this study is to update the previous review of award-like rating instruments for the evaluation of websites providing health information and to describe any changes that may have taken place in the development of websites offering health information to practitioners and consumers with respect to the quality of their content.

Methods

We visited the websites describing each of the rating instruments noted in the original study to ascertain whether they were still operating. If internet service was disrupted for technical reasons or if sites were not available on first visit, we attempted a connection on one further occasion.

The search strategies, inclusion and exclusion criteria, and techniques for data extraction were similar to those used in the original review.1 We used the following sources to identify new rating instruments:

  • A search to 7 September 2001 of Medline, CINAHL, and HealthSTAR (from December 1997) using [(top or rat: or rank: or best) and (internet or web) and (quality or reliab: or valid:)]
  • A search of the databases Information Science Abstracts, Library and Information Science Abstracts (1995 to September 2001), and Library Literature (1996 to September 2001) using [(rat: or rank: or top or best) and (internet or web or site) and (health:)]
  • A search of the American Medical Informatics Association's 1998, 1999, 2000, and 2001 annual symposium programmes (www.amia.org) for mention of health information on the internet
  • A search of the Journal of Medical Internet Research (September 1999 to September 2001) for mention of evaluations of the quality of health information on the internet (www.jmir.org)
  • A search of the online archive of the magazine Internet World (www.internetworld.com) (January 2000 to September 2001) for mention of health information on the internet.

We also reviewed relevant articles referenced in identified studies and links available on identified websites. We did not search the discussion list Public Communication of Science and Technology, which was consulted in the original study.

We stopped searching for rating instruments on 22 September 2001. Rating instruments were eligible for inclusion in the review if it was possible to link from their award-like symbol to an available website describing the criteria used by an individual or organisation to judge the quality of websites on which the award was bestowed. We excluded rating instruments from review if they were used only to rate sites offering non-health information or did not provide any description of their rating criteria. In contrast to the initial study, we did not contact the developers of rating instruments to request information about their criteria if it was not publicly available on their website.

We identified the website, group, or organisation that developed each eligible rating instrument, along with its web address. The two authors independently evaluated each rating instrument according to its validity (number of items in the instrument, availability of rating instructions, information on the development of rating criteria, and evaluation of interobserver reliability) and incorporation of the proposed criteria for evaluation of internet sites: authorship, attribution, and disclosure.2–4

Results

Fourteen rating instruments identified in the original study provided a description of their rating criteria and were therefore eligible for review. Six of these continued to function. Of the remaining eight instruments, four were no longer in operation and four had converted to a directory format. Table 1 summarises the review of the six functioning instruments. Our evaluation of one of these instruments, OncoLink's editors' choice awards, differed from that in the original study because the organisation does not provide information about the instrument on its website.

Table 1
Summary of criteria for rating instruments

Of the 33 rating instruments identified in the original study that were not eligible for review, three continued to function. These were Best Medical Resources on the Web (priory.com/other.htm), Dr Webster's website of the day (drWebster.com), and HealthSeek quality site award (healthseek.com). None of these rating instruments revealed its rating criteria, and they therefore remained ineligible for review. Of the remaining rating instrument websites, 10 were no longer in operation, five had been subsumed by or merged with another organisation and had a different name or purpose, and 15 still offered a website but did not function as a rating instrument.

We newly identified 51 rating instruments. Eleven of these were identified as award-like symbols on a website offering health information, but the website of the organisation from which they originated was no longer operating (table 2). Of the remaining 40 rating instruments, 35 were associated with an active website but did not reveal the criteria by which they judged websites and were ineligible for evaluation (table 3). Five award sites discussed their evaluation criteria and were assessed (table 1). Although three of these five rating instruments exhibited one or more of the characteristics of authorship, attribution, and disclosure, none reported on the reliability and validity of the measurements or provided instructions on how to obtain the ratings.

Table 2
Newly identified award sites not available
Table 3
Newly identified available award sites not eligible for review

Discussion

During the past five years, we have identified a total of 98 different rating instruments that have been used to assess the quality of websites. Many of the rating instruments identified in the original study were no longer available. Fifty-one additional rating instruments have been developed since 1997, and many of these had also stopped functioning. Of the 51 newly identified rating instruments, only five provided some information by which they could be evaluated. As with the six rating instrument sites identified in the original study that remained available, none of these seems to have been validated. Many incompletely developed rating instruments continue to appear on websites providing health information, even when the organisations that gave rise to them no longer exist. Surprisingly, many of these rating instruments, of questionable utility and without association to an operable entity, are featured on the US Department of Health and Human Services Healthfinder website (www.healthfinder.gov/aboutus/awards.htm), which uses a detailed and rigorous selection process for the development of its own content.

Our initial questions remain unanswered. Is it desirable or necessary to assess the quality of health information on the internet? If so, is it an achievable goal given that quality is a construct for which we have no gold standard? Some effort has been made to identify whether the presence of rating instrument awards influences consumers of health information,5 but whether validated rating instruments would have an impact on the competence, performance, behaviour, and health outcomes of those who use them remains unclear.

Our search of the literature and the internet revealed that a large number of researchers, organisations, and website developers are exploring alternative ways to help people find and use high quality information available on the internet. Many reviews of healthcare information on the internet have been conducted, overall and for specific diseases or conditions.6–12 An examination of over 90 reviews concluded that the validity of health information available on websites is highly variable across different diseases and populations, and is in many cases potentially misleading or harmful (G Eysenbach, personal communication, 2001). Several organisations, including government and non-profit entities, have developed criteria by which to organise and identify valid health information (table 4). Other groups, such as the OMNI Advisory Group for Evaluation Criteria (omni.ac.uk) and the Collaboration for Critical Appraisal of Information on the Net (www.medcertain.org), are refining technical mechanisms by which users of the internet can easily locate quality health information in a transparent manner based on evaluative meta-information labelling and indexing.13–15 The impact of these efforts remains unclear.

Table 4
Initiatives to organise and identify valid health information on the internet

More recently, a European project recommended the accreditation of healthcare related software, telemedicine, and internet sites.16 For software, they suggested a marking mechanism similar to that used for electrical goods; for telemedicine, that national regulatory bodies should be identified; and for websites, that a European certification of integrity scheme should be developed. Citing the many impediments to voluntary quality assurance for websites, the authors suggest the development of criteria, modifiable according to the needs of special interest groups, that would be used by accredited agencies to self-label conforming websites (not only those offering health information) with a EuroSeal. Integrity would be monitored on an ongoing basis through cryptographic techniques.
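The cryptographic monitoring envisaged for such a seal can be illustrated with a minimal content-hashing sketch. This is an assumption-laden illustration, not the project's actual mechanism: the proposal cited above does not specify an algorithm, and the SHA-256 choice and function names here are hypothetical.

```python
import hashlib


def content_digest(page_bytes: bytes) -> str:
    """Return a SHA-256 hex digest of a page's raw content.

    Illustrative only: the accreditation proposal does not name an
    algorithm; SHA-256 is an assumption for this sketch.
    """
    return hashlib.sha256(page_bytes).hexdigest()


def integrity_unchanged(page_bytes: bytes, registered_digest: str) -> bool:
    """Compare a freshly computed digest against the digest recorded
    when the seal was granted; any edit to the page changes the digest,
    so a mismatch flags the content for re-review."""
    return content_digest(page_bytes) == registered_digest


# Hypothetical workflow: the accrediting agency records a digest at
# approval time, then periodically re-fetches the page and compares.
original = b"<html>...approved health advice...</html>"
registered = content_digest(original)

assert integrity_unchanged(original, registered)
assert not integrity_unchanged(original + b" silently edited", registered)
```

A scheme like this detects that content changed, but not whether the change degraded quality; that judgment would still require the human re-evaluation the accredited agencies are meant to provide.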

In conclusion, our updated study shows that award systems based on non-validated rating instruments continue to be produced but that most stop functioning soon after their release. Alternative strategies are now flourishing, and whether they are valid, needed, or sustainable and whether they make a difference is the subject of further research.

Footnotes

Funding: ARJ was supported by funds from the University Health Network, the Rose Family Chair in Supportive Care, and a Premier's Research Excellence Award from the Ministry of Energy, Science and Technology of Ontario.

Competing interests: None declared.

References

1. Jadad AR, Gagliardi A. Rating health information on the internet: navigating to knowledge or to Babel? JAMA. 1998;279:611–614.
2. Moher D, Jadad AR, Nichol G, Penman M, Tugwell P, Walsh S. Assessing the quality of randomized controlled trials: an annotated bibliography. Control Clin Trials. 1995;16:62–73.
3. Silberg WM, Lundberg GD, Musacchio RA. Assessing, controlling and assuring the quality of medical information on the internet. JAMA. 1997;277:1244–1245.
4. Wyatt JC. Measuring quality and impact of the world wide web [commentary]. BMJ. 1997;314:1879–1881.
5. Shon J, Marshall J, Musen MA. The impact of displayed awards on the credibility and retention of web site information. Proc AMIA Symp. 2000:794–798.
6. Berland GK, Elliott MN, Morales LS, Algazy JI, Kravitz RL, Broder MS, et al. Health information on the internet: accessibility, quality, and readability in English and Spanish. JAMA. 2001;285:2612–2621.
7. Li L, Irvin E, Guzman J, Bombardier C. Surfing for back pain patients: the nature and quality of back pain information on the internet. Spine. 2001;26:545–547.
8. Suarez-Almazor ME, Kendall CJ, Dorgan M. Surfing the net—information on the world wide web for persons with arthritis: patient empowerment or patient deceit? J Rheumatol. 2001;28:185–191.
9. Impiccatore P, Pandolfini C, Casella N, Bonati M. Reliability of health information for the public on the world wide web: systematic survey of advice on managing fever in children at home. BMJ. 1997;314:1875–1879.
10. Griffiths KM, Christensen H. Quality of web based information on treatment of depression: cross sectional survey. BMJ. 2000;321:1511–1515.
11. Abbott VP. Web page quality: can we measure it and what do we find? A report of exploratory findings. J Public Health Med. 2000;22:191–197.
12. Tamm EP, Raval BK, Huynh PT. Evaluation of the quality of self-education mammography material available for patients on the internet. Acad Radiol. 2000;7:137–141.
13. Eysenbach G, Diepgen TL. Towards quality management of medical information on the internet: evaluation, labelling, and filtering of information. BMJ. 1998;317:1496–1500.
14. Eysenbach G, Diepgen TL. Labeling and filtering of medical information on the internet. Methods Inf Med. 1999;38:80–88.
15. Price SL, Hersh WR. Filtering web pages for quality indicators: an empirical approach to finding high quality consumer health information on the world wide web. Proc AMIA Symp. 1999:911–915.
16. Rigby M, Forsstrom J, Roberts R, Wyatt J. Verifying quality and safety in health informatics services. BMJ. 2001;323:552–556.

Articles from BMJ : British Medical Journal are provided here courtesy of BMJ Group