J Am Med Inform Assoc. 2017 May 1;24(3):481-487. doi: 10.1093/jamia/ocw140.

Toward automated assessment of health Web page quality using the DISCERN instrument.

Author information

1. Department of Pathology, Yale University School of Medicine, New Haven, CT, USA.
2. Institute of Communication and Health, Faculty of Communication Sciences, University of Lugano (Università della Svizzera Italiana), Lugano, Switzerland.
3. Program for Computational Biology and Bioinformatics, Department of Pathology, Yale University School of Medicine, New Haven, CT, USA.

Abstract

Background:

As the Internet becomes the number one destination for obtaining health-related information, there is an increasing need to identify health Web pages that convey an accurate and current view of medical knowledge. In response, the research community has created multicriteria instruments for reliably assessing online medical information quality. One such instrument is DISCERN, which measures health Web page quality by assessing an array of features. In order to scale up use of the instrument, there is interest in automating the quality evaluation process by building machine learning (ML)-based DISCERN Web page classifiers.

Objective:

The paper addresses 2 key issues that are essential before constructing automated DISCERN classifiers: (1) generation of a robust DISCERN training corpus useful for training classification algorithms, and (2) assessment of the usefulness of the current DISCERN scoring schema as a metric for evaluating the performance of these algorithms.

Methods:

Using DISCERN, 272 Web pages discussing treatment options in breast cancer, arthritis, and depression were evaluated and rated by trained coders. First, different consensus models were compared to obtain a robust aggregated rating among the coders, suitable for a DISCERN ML training corpus. Second, a new DISCERN scoring criterion was proposed (features-based score) as an ML performance metric that is more reflective of the score distribution across different DISCERN quality criteria.
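To illustrate the distinction between the two scoring criteria, the following sketch contrasts an overall-score metric with a features-based one. This is an illustrative assumption, not the paper's exact definitions: it treats a page's DISCERN rating as a vector of per-criterion scores, with the function names `overall_score_error` and `features_based_error` invented for the example.

```python
import numpy as np

def overall_score_error(true_ratings, pred_ratings):
    """Error computed on summed DISCERN scores per page.

    Each ratings array has shape (n_pages, n_criteria). Summing the
    criteria first can mask offsetting per-criterion mistakes.
    """
    return np.abs(true_ratings.sum(axis=1) - pred_ratings.sum(axis=1)).mean()

def features_based_error(true_ratings, pred_ratings):
    """Mean absolute error computed criterion by criterion."""
    return np.abs(true_ratings - pred_ratings).mean()

# A prediction that overshoots one criterion and undershoots another
# has zero overall-score error but nonzero features-based error.
true = np.array([[3.0, 3.0]])
pred = np.array([[2.0, 4.0]])
print(overall_score_error(true, pred))   # 0.0
print(features_based_error(true, pred))  # 1.0
```

The example shows why a metric sensitive to the score distribution across criteria can expose classifier errors that an aggregate score hides.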

Results:

First, we found that a probabilistic consensus model applied to the DISCERN instrument was robust against noise (random ratings) and superior to other approaches for building a training corpus. Second, we found that the established DISCERN scoring schema (overall score) is ill-suited to measure ML performance for automated classifiers.

Conclusion:

Use of a probabilistic consensus model is advantageous for building a training corpus for the DISCERN instrument, and use of a features-based score is an appropriate ML metric for automated DISCERN classifiers.

Availability:

The code for the probabilistic consensus model is available at https://bitbucket.org/A_2/em_dawid/.
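The repository name suggests a Dawid-Skene-style EM model for aggregating coder ratings. The sketch below is a minimal, generic implementation of that approach for reference, not the authors' code; the function name `dawid_skene` and the data layout (one integer rating per item per coder, -1 for missing) are assumptions for illustration.

```python
import numpy as np

def dawid_skene(ratings, n_classes, n_iter=50):
    """EM estimation of consensus labels from multiple coders' ratings.

    ratings: int array of shape (n_items, n_coders); -1 marks a missing
    rating. Returns posterior class probabilities, shape (n_items, n_classes).
    """
    n_items, n_coders = ratings.shape
    mask = ratings >= 0

    # Initialize item posteriors from per-item vote proportions.
    T = np.zeros((n_items, n_classes))
    for i in range(n_items):
        for j in range(n_coders):
            if mask[i, j]:
                T[i, ratings[i, j]] += 1
    T /= T.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # M-step: class priors and a confusion matrix per coder
        # (row = true class, column = observed rating), smoothed slightly.
        priors = T.mean(axis=0)
        conf = np.full((n_coders, n_classes, n_classes), 1e-6)
        for j in range(n_coders):
            for i in range(n_items):
                if mask[i, j]:
                    conf[j, :, ratings[i, j]] += T[i]
        conf /= conf.sum(axis=2, keepdims=True)

        # E-step: recompute posteriors from priors and coder reliabilities.
        logT = np.log(priors)[None, :].repeat(n_items, axis=0)
        for i in range(n_items):
            for j in range(n_coders):
                if mask[i, j]:
                    logT[i] += np.log(conf[j, :, ratings[i, j]])
        T = np.exp(logT - logT.max(axis=1, keepdims=True))
        T /= T.sum(axis=1, keepdims=True)
    return T

# Three coders rating four items on a binary criterion; the model
# down-weights the coder who disagrees with the reliable majority.
R = np.array([[0, 0, 1], [1, 1, 1], [0, 0, 0], [1, 1, 0]])
print(dawid_skene(R, 2).argmax(axis=1))
```

Unlike majority voting, this model learns a per-coder confusion matrix, so systematically unreliable coders contribute less to the aggregated rating — the property that makes the consensus robust to noise.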

KEYWORDS:

DISCERN; consensus model; health information quality; multicriteria instrument

PMID: 27707819
DOI: 10.1093/jamia/ocw140
