Usability of quality measures for online health information: Can commonly used technical quality criteria be reliably assessed?

Int J Med Inform. 2005 Aug;74(7-8):675-83. doi: 10.1016/j.ijmedinf.2005.02.002. Epub 2005 Mar 24.

Abstract

Purpose: Many criteria have been developed to rate the quality of online health information. To effectively evaluate quality, consumers must use quality criteria that can be reliably assessed. However, few instruments have been validated for inter-rater agreement. Therefore, we assessed the degree to which two raters could reliably assess 22 popularly cited quality criteria on a sample of 42 complementary and alternative medicine Web sites.

Methods: We determined the degree of inter-rater agreement by calculating the percentage agreement, Cohen's kappa, and prevalence- and bias-adjusted kappa (PABAK).
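To make the reported statistics concrete, the following is a minimal Python sketch (not the authors' actual analysis code) of how percentage agreement, Cohen's kappa, and PABAK could be computed for two raters scoring one criterion across a set of Web sites; the ratings, criterion name, and function are illustrative assumptions.

```python
from collections import Counter

def agreement_stats(rater_a, rater_b):
    """Return (percent agreement, Cohen's kappa, PABAK) for two
    equal-length lists of categorical ratings."""
    assert rater_a and len(rater_a) == len(rater_b)
    n = len(rater_a)

    # Observed agreement: proportion of sites rated identically by both raters.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Expected chance agreement from each rater's marginal distribution.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)

    # Cohen's kappa corrects observed agreement for chance agreement.
    kappa = (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

    # PABAK replaces the chance term with the value expected when all k
    # categories are equally likely (1/k), giving (k*p_o - 1) / (k - 1);
    # for two categories this is the familiar 2*p_o - 1.
    k = max(len(categories), 2)
    pabak = (k * p_o - 1) / (k - 1)

    return p_o, kappa, pabak

# Hypothetical yes/no ratings of one criterion (e.g. "authorship disclosed")
# on ten Web sites:
a = ["yes", "yes", "no", "yes", "yes", "yes", "no", "yes", "yes", "yes"]
b = ["yes", "yes", "no", "yes", "no",  "yes", "no", "yes", "yes", "yes"]
print(agreement_stats(a, b))  # (0.9, 0.736..., 0.8)
```

Because PABAK ignores the raters' marginal prevalences, it can differ noticeably from Cohen's kappa when one rating category dominates, which is why both are often reported together.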

Results: Our uncalibrated analysis showed poor inter-rater agreement on eight of the 22 quality criteria. We therefore created operational definitions for each criterion, reduced the number of assessment choices, and specified where to look for the information. As a result, 18 of the 22 quality criteria could be reliably assessed (inter-rater agreement ≥ 0.6).

Conclusions: We conclude that inter-rater agreement can be improved with precise operational definitions, but that even with such definitions some commonly used quality criteria cannot be reliably assessed.

Publication types

  • Comparative Study
  • Evaluation Study
  • Research Support, N.I.H., Extramural
  • Research Support, Non-U.S. Gov't
  • Research Support, U.S. Gov't, P.H.S.

MeSH terms

  • Complementary Therapies
  • Humans
  • Internet*
  • Medical Informatics*
  • Quality Control*