Appl Clin Inform. 2015 May 20;6(2):334-44. doi: 10.4338/ACI-2015-01-RA-0010. eCollection 2015.

Validation of a Crowdsourcing Methodology for Developing a Knowledge Base of Related Problem-Medication Pairs.

Author information

1. Center for Applied Health Services Research, Ochsner Health System, New Orleans, LA; Department of Biostatistics and Bioinformatics, Tulane University School of Public Health and Tropical Medicine, New Orleans, LA.
2. Department of Medicine, Brigham and Women's Hospital, Boston, MA; Department of Clinical Informatics Research and Development, Partners HealthCare, Boston, MA; Harvard Medical School, Boston, MA.
3. Center for Applied Health Services Research, Ochsner Health System, New Orleans, LA; Department of Medicine, Tulane University School of Medicine, New Orleans, LA; Department of Epidemiology, Tulane University School of Public Health and Tropical Medicine, New Orleans, LA.
4. Department of Internal Medicine, The University of Texas Medical School at Houston, Houston, TX; The University of Texas at Houston-Memorial Hermann Center for Healthcare Quality and Safety, Houston, TX.
5. Department of Urology, Ochsner Health System, New Orleans, LA.
6. The University of Texas at Houston-Memorial Hermann Center for Healthcare Quality and Safety, Houston, TX; The University of Texas School of Biomedical Informatics at Houston, Houston, TX.

Abstract

BACKGROUND:

Clinical knowledge bases of problem-medication pairs are necessary for many informatics solutions that improve patient safety, such as clinical summarization. However, developing these knowledge bases can be challenging.

OBJECTIVE:

We sought to validate a previously developed crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large, non-university health care system with a widely used, commercially available electronic health record.

METHODS:

We first retrieved medications and problems entered in the electronic health record by clinicians during routine care over a six-month study period. Following the previously published approach, we calculated the link frequency and link ratio for each pair, then identified a threshold cutoff for estimated problem-medication pair appropriateness through clinician review; problem-medication pairs meeting the threshold were included in the resulting knowledge base. We selected 50 medications and their gold standard indications to compare the resulting knowledge base to the pilot knowledge base developed previously and to determine its recall and precision.
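The link frequency/link ratio computation described above can be sketched as follows. This is a minimal illustration only: the encounter records, field names, and the 0.6 threshold are hypothetical, not the study's actual data or clinician-derived cutoff.

```python
from collections import Counter
from itertools import product

# Hypothetical encounter records: problems and medications a clinician
# entered during one encounter (illustrative data, not from the study).
encounters = [
    {"problems": ["hypertension"], "medications": ["lisinopril", "aspirin"]},
    {"problems": ["hypertension", "diabetes"], "medications": ["lisinopril", "metformin"]},
    {"problems": ["diabetes"], "medications": ["metformin"]},
]

pair_counts = Counter()  # link frequency: co-occurrences of each problem-medication pair
med_counts = Counter()   # total occurrences of each medication

for enc in encounters:
    for med in enc["medications"]:
        med_counts[med] += 1
    for prob, med in product(enc["problems"], enc["medications"]):
        pair_counts[(prob, med)] += 1

# Link ratio: fraction of a medication's occurrences linked to a given problem.
link_ratio = {pair: n / med_counts[pair[1]] for pair, n in pair_counts.items()}

# Pairs meeting the clinician-reviewed threshold enter the knowledge base
# (0.6 is an illustrative cutoff, not the study's value).
THRESHOLD = 0.6
knowledge_base = {pair for pair, r in link_ratio.items() if r >= THRESHOLD}
```

Under this sketch, a pair such as (hypertension, lisinopril) that accounts for all of a medication's occurrences reaches a link ratio of 1.0 and is retained, while weakly associated pairs fall below the cutoff.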

RESULTS:

The resulting knowledge base contained 26,912 pairs, had a recall of 62.3% and a precision of 87.5%, and outperformed the pilot knowledge base containing 11,167 pairs from the previous study, which had a recall of 46.9% and a precision of 83.3%.
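For reference, the recall and precision figures reported above follow the standard set-based definitions, sketched here with illustrative pair sets (not the study's data):

```python
def recall_precision(predicted_pairs, gold_pairs):
    """Standard recall/precision of a predicted set against a gold standard."""
    true_positives = len(predicted_pairs & gold_pairs)
    recall = true_positives / len(gold_pairs)
    precision = true_positives / len(predicted_pairs)
    return recall, precision

# Illustrative gold-standard indications and knowledge-base pairs.
gold = {("hypertension", "lisinopril"), ("diabetes", "metformin"),
        ("asthma", "albuterol"), ("gout", "allopurinol")}
predicted = {("hypertension", "lisinopril"), ("diabetes", "metformin"),
             ("hypertension", "metformin")}

r, p = recall_precision(predicted, gold)  # r = 0.5, p = 2/3
```

Here two of the four gold pairs are recovered (recall 0.5) and two of the three predicted pairs are correct (precision 2/3), mirroring how the 62.3%/87.5% figures are derived.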

CONCLUSIONS:

We validated the crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large non-university health care system with a widely used, commercially available electronic health record, indicating that the approach may be generalizable across healthcare settings and clinical systems. Further research is necessary to better evaluate the knowledge base, to compare crowdsourcing with other approaches, and to determine whether incorporating the knowledge into electronic health records improves patient outcomes.

KEYWORDS:

Crowdsourcing; computer-assisted drug therapy; electronic health records; knowledge bases; problem-oriented medical records; validation studies

PMID:
26171079
PMCID:
PMC4493334
DOI:
10.4338/ACI-2015-01-RA-0010
[Indexed for MEDLINE]