Int J Med Inform. 2014 Oct;83(10):750-67. doi: 10.1016/j.ijmedinf.2014.07.002. Epub 2014 Jul 24.

De-identification of clinical narratives through writing complexity measures.

Author information

1. Department of Electrical Engineering & Computer Science, Vanderbilt University, Nashville, TN, United States. Electronic address: muqun.li@vanderbilt.edu.
2. Group Health Research Institute, Seattle, WA, United States.
3. The MITRE Corporation, Bedford, MA, United States.
4. Department of Electrical Engineering & Computer Science, Vanderbilt University, Nashville, TN, United States; Department of Biomedical Informatics, Vanderbilt University, Nashville, TN, United States.

Abstract

PURPOSE:

Electronic health records contain a substantial quantity of clinical narrative, which is increasingly reused for research purposes. To share data on a large scale while respecting privacy, it is critical to remove patient identifiers. De-identification tools based on machine learning have been proposed; however, model training is usually based on either a random group of documents or a pre-existing document type designation (e.g., discharge summary). This work investigates whether inherent features, such as writing complexity, can identify document subsets that enhance de-identification performance.

METHODS:

We applied an unsupervised clustering method to group two corpora based on writing complexity measures: a collection of over 4500 documents of varying document types (e.g., discharge summaries, history and physical reports, and radiology reports) from Vanderbilt University Medical Center (VUMC) and the publicly available i2b2 corpus of 889 discharge summaries. We compared the performance (via recall, precision, and F-measure) of de-identification models trained on such clusters with models trained on documents grouped randomly or by VUMC document type.
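The clustering step described above can be sketched in a few lines. This is a minimal illustration, not the authors' pipeline: the two complexity features (average sentence length and average word length) and the plain k-means routine are illustrative assumptions; the paper does not specify its exact feature set or clustering algorithm here.

```python
import math
import random

def complexity_features(text):
    """Two illustrative writing-complexity measures (hypothetical choices):
    average sentence length in words and average word length in characters."""
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    words = text.split()
    avg_sentence_len = len(words) / max(len(sentences), 1)
    avg_word_len = sum(len(w) for w in words) / max(len(words), 1)
    return (avg_sentence_len, avg_word_len)

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means over feature tuples; returns one cluster label per point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = [min(range(k), key=lambda c: math.dist(p, centers[c])) for p in points]
        # Move each center to the mean of its assigned points.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centers[c] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return labels
```

Training subsets would then be drawn from documents sharing a cluster label, rather than at random or by document type.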

RESULTS:

For the Vanderbilt dataset, training and testing de-identification models on the same stylometric cluster (with an average F-measure of 0.917) tended to outperform models based on clusters of random documents (with an average F-measure of 0.881). It was further observed that increasing the size of a training subset sampled from a specific cluster could yield improved results (e.g., for subsets from a certain stylometric cluster, the F-measure rose from 0.743 to 0.841 when the training size increased from 10 to 50 documents, and reached 0.901 when the training subset reached 200 documents). For the i2b2 dataset, training and testing on the same clusters based on complexity measures (average F-score 0.966) did not significantly surpass randomly selected clusters (average F-score 0.965).
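The F-measure used throughout these results is the standard F1 score, the harmonic mean of precision and recall; a minimal sketch:

```python
def f_measure(precision, recall):
    """F1 score: harmonic mean of precision and recall.
    Returns 0.0 when both inputs are 0 to avoid division by zero."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Because the harmonic mean penalizes imbalance, a model must score well on both precision and recall to reach values like the 0.917 average reported above.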

CONCLUSIONS:

Our findings illustrate that, in environments consisting of a variety of clinical documentation, de-identification models trained on document subsets derived from writing complexity measures outperform models trained on random groups and, in many instances, on document types.

KEYWORDS:

Electronic medical records; Natural language processing; Privacy

PMID: 25106934
PMCID: PMC4215974
DOI: 10.1016/j.ijmedinf.2014.07.002
[Indexed for MEDLINE]