J Biomed Inform. 2018 Nov;87:12-20. doi: 10.1016/j.jbi.2018.09.008. Epub 2018 Sep 12.

A comparison of word embeddings for the biomedical natural language processing.

Author information

1. Department of Health Sciences Research, Mayo Clinic, Rochester, USA. Electronic address: Wang.Yanshan@mayo.edu.
2. Department of Health Sciences Research, Mayo Clinic, Rochester, USA. Electronic address: Liu.Sijia@mayo.edu.
3. Department of Health Sciences Research, Mayo Clinic, Rochester, USA. Electronic address: Afzal.Naveed@mayo.edu.
4. Department of Health Sciences Research, Mayo Clinic, Rochester, USA. Electronic address: Mojarad.Majid@mayo.edu.
5. Department of Health Sciences Research, Mayo Clinic, Rochester, USA. Electronic address: Wang.Liwei@mayo.edu.
6. Department of Health Sciences Research, Mayo Clinic, Rochester, USA. Electronic address: Shen.Feichen@mayo.edu.
7. Department of Health Sciences Research, Mayo Clinic, Rochester, USA. Electronic address: Kingsbury.Paul1@mayo.edu.
8. Department of Health Sciences Research, Mayo Clinic, Rochester, USA. Electronic address: Liu.Hongfang@mayo.edu.

Abstract

BACKGROUND:

Word embeddings have been widely used in biomedical Natural Language Processing (NLP) applications because the vector representations capture useful semantic properties and linguistic relationships between words. Different textual resources (e.g., Wikipedia and biomedical literature corpora) have been used in biomedical NLP to train word embeddings, and these embeddings are commonly leveraged as feature input to downstream machine learning models. However, there has been little work on evaluating word embeddings trained from different textual resources.

METHODS:

In this study, we empirically evaluated word embeddings trained from four different corpora, namely clinical notes, biomedical publications, Wikipedia, and news. For the former two resources, we trained word embeddings using unstructured electronic health record (EHR) data available at Mayo Clinic and articles (MedLit) from PubMed Central, respectively. For the latter two resources, we used publicly available pre-trained word embeddings, GloVe and Google News. The evaluation was done both qualitatively and quantitatively. For the qualitative evaluation, we randomly selected medical terms from three categories (i.e., disorder, symptom, and drug) and manually inspected the five most similar words computed by each embedding for each term. We also analyzed the word embeddings through a two-dimensional visualization plot of 377 medical terms. For the quantitative evaluation, we conducted both intrinsic and extrinsic evaluation. For the intrinsic evaluation, we assessed the word embeddings' ability to capture medical semantics by measuring the semantic similarity between medical terms using four published datasets: Pedersen's dataset, Hliaoutakis's dataset, MayoSRS, and UMNSRS. For the extrinsic evaluation, we applied the word embeddings to multiple downstream biomedical NLP applications, including clinical information extraction (IE), biomedical information retrieval (IR), and relation extraction (RE), with data from shared tasks.
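As a sketch of the intrinsic and qualitative evaluations described above, the snippet below computes cosine similarity between word vectors and ranks a term's nearest neighbors. The vocabulary and vector values are toy illustrations, not the EHR, MedLit, GloVe, or Google News embeddings evaluated in the study.

```python
import math

# Toy embeddings standing in for trained word vectors (illustrative values,
# not the embeddings evaluated in the paper).
embeddings = {
    "diabetes":      [0.9, 0.1, 0.2],
    "hyperglycemia": [0.8, 0.2, 0.1],
    "aspirin":       [0.1, 0.9, 0.3],
    "headache":      [0.2, 0.8, 0.4],
}

def cosine(u, v):
    """Cosine similarity: dot product of u and v over the product of norms."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def most_similar(term, k=3):
    """Return the k vocabulary terms most similar to `term`, highest first."""
    query = embeddings[term]
    scores = [(other, cosine(query, embeddings[other]))
              for other in embeddings if other != term]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:k]

# Qualitative-style inspection: nearest neighbors of a disorder term.
print(most_similar("diabetes"))
```

For the intrinsic evaluation, scores like these would be correlated with human similarity judgments from benchmarks such as UMNSRS; in practice a library such as gensim computes the same nearest-neighbor ranking over full trained models.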

RESULTS:

The qualitative evaluation shows that the word embeddings trained from EHR and MedLit can find more similar medical terms than those trained from GloVe and Google News. The intrinsic quantitative evaluation verifies that the semantic similarity captured by the word embeddings trained from EHR is closer to human experts' judgments on all four tested datasets. The extrinsic quantitative evaluation shows that the word embeddings trained on EHR achieved the best F1 score of 0.900 for the clinical IE task; no word embeddings improved the performance for the biomedical IR task; and the word embeddings trained on Google News had the best overall F1 score of 0.790 for the RE task.

CONCLUSION:

Based on the evaluation results, we draw the following conclusions. First, the word embeddings trained from EHR and MedLit capture the semantics of medical terms better, and find semantically relevant medical terms closer to human experts' judgments, than those trained from GloVe and Google News. Second, there is no consistent global ranking of word embeddings across all downstream biomedical NLP applications; however, adding word embeddings as extra features improves results on most downstream tasks. Finally, word embeddings trained from biomedical-domain corpora do not necessarily outperform those trained from general-domain corpora on every downstream biomedical NLP task.

KEYWORDS:

Information extraction; Information retrieval; Machine learning; Natural language processing; Word embeddings

PMID: 30217670
PMCID: PMC6585427 [Available on 2019-11-01]
DOI: 10.1016/j.jbi.2018.09.008
