J Am Med Inform Assoc. 2020 Feb 11. pii: ocaa001. doi: 10.1093/jamia/ocaa001. [Epub ahead of print]

Does BERT need domain adaptation for clinical negation detection?

Author information

1. Computational Health Informatics Program, Boston Children's Hospital and Harvard Medical School, Boston, Massachusetts, USA.
2. School of Information, University of Arizona, Tucson, Arizona, USA.
3. Department of Computer Science, Loyola University Chicago, Chicago, Illinois, USA.
4. Department of Pediatrics, Harvard Medical School, Boston, Massachusetts, USA.

Abstract

INTRODUCTION:

Classifying whether concepts in an unstructured clinical text are negated is an important unsolved task. New domain adaptation and transfer learning methods can potentially address this issue.

OBJECTIVE:

We examine neural unsupervised domain adaptation methods, introducing a novel combination of domain adaptation with transformer-based transfer learning methods to improve negation detection. We also want to better understand the interaction between the widely used bidirectional encoder representations from transformers (BERT) system and domain adaptation methods.

MATERIALS AND METHODS:

We use 4 clinical text datasets that are annotated with negation status. We evaluate a neural unsupervised domain adaptation algorithm and BERT, a transformer-based model that is pretrained on massive general text datasets. We develop an extension to BERT that uses domain adversarial training, a neural domain adaptation method that adds an auxiliary objective to the negation task: the classifier should not be able to distinguish between instances from 2 different domains.
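As a rough illustration of the domain adversarial setup described above, the sketch below adds a gradient-reversal layer and a domain-classifier head on top of a BERT encoder (PyTorch/Hugging Face). The class and parameter names (NegationDANN, lambda_adv) are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of domain-adversarial training on a BERT encoder.
# Assumptions: PyTorch + Hugging Face transformers; names are illustrative.
import torch
import torch.nn as nn
from transformers import BertModel


class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -lambda on the backward pass."""

    @staticmethod
    def forward(ctx, x, lambda_adv):
        ctx.lambda_adv = lambda_adv
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient flows into the encoder; no gradient for lambda_adv.
        return -ctx.lambda_adv * grad_output, None


class NegationDANN(nn.Module):
    """BERT encoder with a negation head and an adversarial domain head."""

    def __init__(self, model_name="bert-base-uncased", lambda_adv=0.1):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.negation_head = nn.Linear(hidden, 2)  # negated vs. not negated
        self.domain_head = nn.Linear(hidden, 2)    # source vs. target corpus
        self.lambda_adv = lambda_adv

    def forward(self, input_ids, attention_mask):
        pooled = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).pooler_output
        negation_logits = self.negation_head(pooled)
        # Gradient reversal pushes the encoder toward domain-invariant features.
        reversed_pooled = GradientReversal.apply(pooled, self.lambda_adv)
        domain_logits = self.domain_head(reversed_pooled)
        return negation_logits, domain_logits
```

In a training loop under this setup, the negation loss would be computed on labeled source-domain instances, the domain loss on instances from both domains, and the two losses summed before backpropagation, so that the encoder is rewarded for negation accuracy while being penalized for encoding domain-identifying features.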

RESULTS:

The domain adaptation methods we describe show positive results, but, on average, the best performance is obtained by plain BERT (without the extension). We provide evidence that the gains from BERT are likely not additive with the gains from domain adaptation.

DISCUSSION:

Our results suggest that, at least for the task of clinical negation detection, BERT subsumes domain adaptation, implying that BERT is already learning very general representations of negation phenomena such that fine-tuning even on a specific corpus does not lead to much overfitting.

CONCLUSION:

Although models like BERT are pretrained on nonclinical text, their large training sets lead to large gains in performance on the clinical negation detection task.

KEYWORDS:

deep learning; domain adaptation; machine learning; natural language processing; negation

PMID: 32044989
DOI: 10.1093/jamia/ocaa001
