J Med Internet Res. 2018 Apr 25;20(4):e139. doi: 10.2196/jmir.9380.

ComprehENotes, an Instrument to Assess Patient Reading Comprehension of Electronic Health Record Notes: Development and Validation.

Lalor JP1, Wu H2, Chen L2, Mazor KM3, Yu H1,4,5,6.

Author information

1. College of Information and Computer Sciences, University of Massachusetts, Amherst, MA, United States.
2. Psychology Department, Boston College, Chestnut Hill, MA, United States.
3. Meyers Primary Care Institute, University of Massachusetts Medical School / Reliant Medical Group / Fallon Health, Worcester, MA, United States.
4. Department of Computer Science, University of Massachusetts, Lowell, MA, United States.
5. Department of Medicine, University of Massachusetts Medical School, Worcester, MA, United States.
6. Bedford Veterans Affairs Medical Center, Center for Healthcare Organization and Implementation Research, Bedford, MA, United States.

Abstract

BACKGROUND:

Patient portals are widely adopted in the United States and give millions of patients access to their electronic health records (EHRs), including their EHR clinical notes. A patient's ability to understand the information in the EHR depends on their overall health literacy. Although many tests of health literacy exist, none focuses specifically on EHR note comprehension.

OBJECTIVE:

The aim of this paper was to develop an instrument to assess patients' EHR note comprehension.

METHODS:

We identified 6 common diseases or conditions (heart failure, diabetes, cancer, hypertension, chronic obstructive pulmonary disease, and liver failure) and selected 5 representative EHR notes for each. One note that did not contain natural language text was removed. Questions were generated from the remaining notes using the Sentence Verification Technique and were analyzed using item response theory (IRT) to identify a set of questions that represent a good test of ability for EHR note comprehension.
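The abstract does not specify which IRT model was fit; for dichotomously scored questions like these, a common choice is the two-parameter logistic (2PL) model, in which each item has an estimated difficulty and discrimination, and items with poor discrimination can be dropped. A minimal sketch of the 2PL item characteristic curve (illustrative only; the model choice and parameter values are assumptions, not taken from the paper):

```python
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL IRT model: probability that a respondent with latent
    ability `theta` answers correctly an item with discrimination
    `a` and difficulty `b`."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A respondent whose ability equals the item difficulty has a 50% chance.
print(p_correct(theta=0.0, a=1.2, b=0.0))  # -> 0.5

# Higher discrimination (a) makes the item separate abilities more sharply,
# which is the property item selection typically screens for.
print(p_correct(theta=1.0, a=2.0, b=0.0) > p_correct(theta=1.0, a=0.5, b=0.0))
```

In practice, item parameters would be estimated from the crowdsourced response matrix (e.g., by marginal maximum likelihood) before selecting the retained items.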

RESULTS:

Using the Sentence Verification Technique, 154 questions were generated from the 29 EHR notes initially obtained. Of these, 83 were manually selected for inclusion in the Amazon Mechanical Turk crowdsourcing tasks, and 55 were ultimately retained following IRT analysis. A follow-up validation with a second Amazon Mechanical Turk task and IRT analysis confirmed that the 55 questions test a latent ability dimension for EHR note comprehension. A short-form test of 14 items was also created from the 55-item set.

CONCLUSIONS:

We developed ComprehENotes, an instrument for assessing EHR note comprehension: questions were generated from existing EHR notes, responses were gathered via crowdsourcing, and IRT analysis of those responses yielded the final question set. Crowdsourced responses from Amazon Mechanical Turk can be used to estimate item parameters and, via IRT, to select a subset of items for inclusion in the test set. The resulting set of questions is the first test of EHR note comprehension.

KEYWORDS:

crowdsourcing; electronic health records; health literacy; psychometrics

PMID:
29695372
PMCID:
PMC5943623
DOI:
10.2196/jmir.9380
[Indexed for MEDLINE]