Drug Saf. 2019 Jun;42(6):721-725. doi: 10.1007/s40264-018-00794-y.

A Machine-Learning Algorithm to Optimise Automated Adverse Drug Reaction Detection from Clinical Coding.

Author information

1. Department of Clinical Pharmacology, Austin Health, Level 5, Lance Townsend Building, Studley Rd, Heidelberg, VIC, 3084, Australia. christopher.mcmaster@austin.org.au.
2. Department of Medicine, University of Melbourne, Parkville, VIC, Australia. christopher.mcmaster@austin.org.au.
3. Department of Clinical Pharmacology, Austin Health, Level 5, Lance Townsend Building, Studley Rd, Heidelberg, VIC, 3084, Australia.
4. Department of Medicine, University of Melbourne, Parkville, VIC, Australia.
5. Department of Pharmacy, Austin Health, Heidelberg, VIC, Australia.

Abstract

INTRODUCTION:

Adverse drug reaction (ADR) detection in hospitals is heavily reliant on spontaneous reporting by clinical staff, with studies in the literature pointing to high rates of underreporting [1]. International Classification of Diseases, 10th Revision (ICD-10) codes have been used in epidemiological studies of ADRs and offer the potential for automated ADR detection systems.

OBJECTIVE:

The aim of this study was to develop an automated ADR detection system based on ICD-10 codes, using machine-learning algorithms to improve accuracy and efficiency.

METHODS:

For a 12-month period from December 2016 to November 2017, every inpatient episode receiving an ICD-10 code in the range Y40.0-Y59.9 (ADR code) was flagged for review as a potential ADR. Each flagged admission was assessed by an expert pharmacist and, if needed, reviewed at regular ADR committee meetings. For each report, a determination was made about ADR probability and severity. The dataset was randomly split into training and test sets. A machine-learning model using the random forest algorithm was developed on the training set to discriminate between true and false ADR reports. The model was then applied to the test set to assess accuracy using the area under the receiver operating characteristic curve (AUC).
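The pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' actual implementation: the feature set, labels, and data here are entirely synthetic and hypothetical, standing in for whatever admission-level variables the study used. It shows the general shape of the method: a random train/test split, a random forest classifier fitted on the training set, and discrimination assessed on the test set by AUC.

```python
# Hedged sketch of the study's general approach (synthetic data, hypothetical features).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features for a flagged admission (illustrative only):
#   encoded Y40.0-Y59.9 code, patient age, length of stay, medication count
X = np.column_stack([
    rng.integers(0, 200, n),   # index of the applied ADR code
    rng.normal(65, 15, n),     # age (years)
    rng.exponential(5, n),     # length of stay (days)
    rng.poisson(8, n),         # number of concurrent medications
])

# Synthetic label: 1 = confirmed ADR, 0 = false positive, weakly tied to features
y = (X[:, 3] + rng.normal(0, 2, n) > 8).astype(int)

# Random split into training and test sets, as in the study design
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Fit a random forest on the training set
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)

# Evaluate discrimination on the held-out test set via AUC
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"Test-set AUC: {auc:.3f}")
```

With real data the features would be the coded and clinical variables available for each flagged admission, and the labels the pharmacist/committee determinations of true versus false ADR reports.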

RESULTS:

In the study period, 2917 Y40.0-Y59.9 codes were applied to admissions, resulting in 245 ADR reports after review. These 245 reports accounted for 44.5% of all ADR reporting in our hospital in the study period. A random forest model built on the training set was able to discriminate between true and false reports on the test set with an AUC of 0.803.

CONCLUSIONS:

Automated screening of ICD-10 codes significantly increased ADR detection in the study period, and applying a machine-learning model improved discrimination between true and false reports.

PMID: 30725336
DOI: 10.1007/s40264-018-00794-y
