J Gen Intern Med. Aug 2006; 21(8): 874–877.
PMCID: PMC1831582

BRIEF REPORT: Screening Items to Identify Patients with Limited Health Literacy Skills

Abstract

BACKGROUND

Patients with limited literacy skills are routinely encountered in clinical practice, but they are not always identified by clinicians.

OBJECTIVE

To evaluate 3 candidate questions to determine their accuracy in identifying patients with limited or marginal health literacy skills.

METHODS

We studied 305 English-speaking adults attending a university-based primary care clinic. Demographic items, health literacy screening questions, and the Rapid Estimate of Adult Literacy in Medicine (REALM) were administered to patients. To determine the accuracy of the candidate questions for identifying limited or marginal health literacy skills, we plotted receiver operating characteristic (ROC) curves and calculated the area under each curve (AUROC) for each item, using REALM scores as the reference standard.

RESULTS

The mean age of subjects was 49.5 years; 67.5% were female, 85.2% Caucasian, and 81.3% insured by TennCare and/or Medicare. Fifty-four (17.7%) had limited and 52 (17.0%) had marginal health literacy skills. One screening question, “How confident are you filling out medical forms by yourself?”, was accurate in detecting limited (AUROC of 0.82; 95% confidence interval [CI]=0.77 to 0.86) and limited/marginal (AUROC of 0.79; 95% CI=0.74 to 0.83) health literacy skills. This question had a significantly greater AUROC than either of the other questions (P<.01), and a greater AUROC than screening based on demographic characteristics.

CONCLUSIONS

One screening question may be sufficient for detecting limited and marginal health literacy skills in clinic populations.

Keywords: literacy, health literacy, screening

Nearly half of all adults in the United States have only very basic or below basic English-language literacy skills.1–3 The problems faced by these individuals when dealing with the health care system were recently summarized in a report by the Institute of Medicine.4

Clinicians are often unable to identify patients with limited literacy skills based on information gathered during routine clinical interactions,5,6 and patients with limited literacy skills are often reluctant to reveal this limitation.7–9 Furthermore, asking patients questions such as “Can you read?” or “How many years of school did you complete?” does not accurately predict a patient's literacy level.10,11 If a patient's literacy skills were identified as limited, the clinician would be alerted to the need for extra care and special approaches in communicating with that patient. Strategies to improve communication with such patients include using nonmedical (plain) language, drawing pictures, limiting the amount of information provided, and using a teach-back approach.12

Despite the availability of valid and reliable literacy assessment tools for use in health care settings, such as the Rapid Estimate of Adult Literacy in Medicine (REALM),13 the Test of Functional Health Literacy in Adults (TOFHLA),14 and the Newest Vital Sign,15 most clinicians do not screen for limited literacy because of time constraints and/or the potential of embarrassing patients.7 Hence, a need exists for screening items that can be easily implemented in busy clinical settings, can accurately estimate health literacy skills, and are nonthreatening to patients.

Recently, Chew et al.16 found 3 screening questions to be predictive of limited health literacy skills in a sample of men receiving medical care at a Veterans' Administration clinic. The purpose of our study was to evaluate these 3 screening questions in a patient population demographically different from the one in which they were originally developed and tested, and to do so with a different reference standard (REALM instead of TOFHLA).

METHODS

Study Design and Setting

With approval from the Institutional Review Board of the University of Tennessee Graduate School of Medicine-Knoxville, we recruited English-speaking patients (≥18 years of age) attending a university-based primary care clinic. A research assistant approached patients while they were waiting in an examination room to see a resident physician. The research assistant explained the purpose of the study, informed patients that their responses would be anonymous, and told them they would receive $10 as compensation for participating. Patients who appeared too ill to participate or who had poor visual acuity were excluded. Patients were recruited only when the research assistant was present and available. Approximately 90% of patients approached in the waiting area agreed to participate.

Interview Process

Patients completed a 3-minute oral interview. The interview began with collection of demographic information using 5 items (sex, age, race/ethnicity, educational attainment, health insurance coverage) from the Behavioral Risk Factor Surveillance Survey.17 Next, patients were asked Chew's 3 health literacy screening questions, each with 5 possible response options14: (1) “How often do you have problems learning about your medical condition because of difficulty understanding written information?” (always, often, sometimes, occasionally, or never); (2) “How often do you have someone help you read hospital materials?” (always, often, sometimes, occasionally, or never); and (3) “How confident are you filling out medical forms by yourself?” (extremely, quite a bit, somewhat, a little bit, or not at all).
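
For implementation purposes, the 3 items and their response scales can be captured in a small data structure. The sketch below is purely illustrative; the variable name and dictionary keys are our own and are not part of Chew et al.'s instrument.

```python
# Chew et al.'s 3 screening items with their 5-point response options,
# listed in the order given in the text above. The dictionary keys are an
# illustrative naming convention for this sketch, not part of the instrument.
SCREENING_ITEMS = {
    "understanding": {
        "text": ("How often do you have problems learning about your medical "
                 "condition because of difficulty understanding written information?"),
        "responses": ["always", "often", "sometimes", "occasionally", "never"],
    },
    "help_reading": {
        "text": "How often do you have someone help you read hospital materials?",
        "responses": ["always", "often", "sometimes", "occasionally", "never"],
    },
    "confidence_forms": {
        "text": "How confident are you filling out medical forms by yourself?",
        "responses": ["extremely", "quite a bit", "somewhat", "a little bit", "not at all"],
    },
}
```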

Lastly, patients' health literacy skills were measured with the REALM. Based on REALM scores, participants were classified as having limited (≤6th grade reading level; REALM=0 to 44), marginal (7th- to 8th-grade reading level; REALM=45 to 60), or adequate (≥9th grade reading level; REALM=61 to 66) health literacy skills. The REALM has high criterion validity and test-retest reliability.13
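
A minimal sketch of the classification logic just described, using the REALM cut points given above (the function name and error handling are our own, not part of the REALM instrument):

```python
def classify_realm(score: int) -> str:
    """Map a raw REALM score (0-66) to the health literacy category used in
    this study. Cut points follow the text above; this helper is illustrative
    only and is not part of the REALM instrument itself."""
    if not 0 <= score <= 66:
        raise ValueError("REALM scores range from 0 to 66")
    if score <= 44:       # <= 6th-grade reading level
        return "limited"
    if score <= 60:       # 7th- to 8th-grade reading level
        return "marginal"
    return "adequate"     # >= 9th-grade reading level (61 to 66)

# Example: a patient scoring 47 falls in the marginal range.
print(classify_realm(47))  # -> "marginal"
```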

Statistical Analyses

Descriptive analyses were performed using SPSS software (version 12.0). Receiver operating characteristic (ROC) curves were constructed with MedCalc software (version 8.0.0.1). An ROC curve plots a test's sensitivity against 1 − specificity; it is used to determine which test, or which cut point on a test, has the best balance of sensitivity and specificity in comparison with a reference standard. The closer the curve follows the left-hand and top borders of the plot (i.e., the larger the area under the ROC curve [AUROC]), the more accurate the test.
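
The study used MedCalc for these analyses; as a hedged illustration of the same idea, the snippet below computes an ROC curve and its AUROC in Python with scikit-learn (our choice of library, not the software used in the study) on made-up data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score  # scikit-learn is our assumption, not MedCalc

# Hypothetical data: 1 = limited health literacy per the reference standard, 0 = not limited.
y_true = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 0])
# Screening-item responses coded so that higher values suggest lower literacy
# (e.g., "extremely confident" = 0 ... "not at all confident" = 4).
item_score = np.array([3, 1, 0, 4, 2, 3, 1, 0, 2, 1])

fpr, tpr, thresholds = roc_curve(y_true, item_score)  # tpr = sensitivity, fpr = 1 - specificity
auroc = roc_auc_score(y_true, item_score)
print(f"AUROC = {auroc:.2f}")  # the larger the area, the more accurate the test
```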

Using REALM score as the reference standard, we calculated the AUROC for each of the 3 health literacy screening questions, both individually and in combination with one or both of the other questions, to determine their accuracy for identifying patients with limited and limited/marginal health literacy. We also calculated sensitivity, specificity, and positive and negative likelihood ratios for the various responses to the screening questions as predictors of limited or limited/marginal health literacy as determined by the REALM.
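
For readers less familiar with these measures, the sketch below shows how sensitivity, specificity, and likelihood ratios follow from a 2 × 2 table of screening result versus reference standard. The counts are invented for illustration; the study's actual values appear in Table 1.

```python
def screen_performance(tp: int, fp: int, fn: int, tn: int):
    """Sensitivity, specificity, and likelihood ratios from a 2x2 table
    (screening result vs. reference standard). Counts here are hypothetical."""
    sensitivity = tp / (tp + fn)                    # proportion of true positives detected
    specificity = tn / (tn + fp)                    # proportion of true negatives screened out
    lr_positive = sensitivity / (1 - specificity)   # positive likelihood ratio
    lr_negative = (1 - sensitivity) / specificity   # negative likelihood ratio
    return sensitivity, specificity, lr_positive, lr_negative

# Invented counts: 40 true positives, 30 false positives, 10 false negatives, 120 true negatives.
sens, spec, lr_pos, lr_neg = screen_performance(tp=40, fp=30, fn=10, tn=120)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, LR+={lr_pos:.1f}, LR-={lr_neg:.2f}")
```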

Finally, we calculated the AUROC for 3 demographic characteristics (age [<65 vs ≥65 years], race/ethnicity [Caucasian vs non-Caucasian], and educational attainment [<high school vs ≥high school]) for identifying patients with limited or limited/marginal health literacy. We compared these AUROCs with those for the 3 screening questions to determine which was the best predictor of limited or limited/marginal literacy. Lastly, for any demographic characteristic with a significant AUROC, we calculated its marginal value when added to each of the 3 health literacy screening items.

For calculating sample size, we assumed, based on national estimates2 and previous experience with our patient population, that about 20% of patients would have limited health literacy skills (i.e., REALM score ≤6th grade). Using this assumption, our sample size was adequate to determine AUROC for the screening questions within a 5% margin of error with 95% confidence.18

RESULTS

There were 305 participating patients, ranging in age from 18 to 89 years (mean age=49.5±16.5), and 206 (67.5%) were female. Most patients identified themselves as Caucasian (n=260; 85.2%), while 36 (11.8%) were African American and 9 (2.9%) were Hispanic. Eighty-eight (28.8%) had less than a high school education, 119 (39.0%) were high school graduates or had an equivalency diploma, and 98 (32.1%) had completed at least some college. Fifty-five (18.0%) had private health insurance, 175 (57.4%) had TennCare (Medicaid), 73 (23.9%) had Medicare, and 2 (0.7%) had no insurance. Patients' health literacy skills were as follows: limited (n=54, 17.7%), marginal (n=52, 17.1%), and adequate (n=199, 65.2%).

Figure 1 presents the ROC curves of the screening items for detecting limited and limited/marginal health literacy. In all analyses, “How confident are you filling out medical forms by yourself?” had a significantly higher AUROC than the other 2 questions (P<.01). The AUROC was 0.82 (95% CI=0.77 to 0.86) for detecting limited health literacy and 0.79 (95% CI=0.74 to 0.83) for detecting limited or marginal health literacy. Combining 2 or more screening items did not significantly (P>.10) increase the AUROC for detecting limited or limited/marginal health literacy.

FIGURE 1
Receiver operating characteristic curves for screening items, using the Rapid Estimate of Adult Literacy in Medicine (REALM) as the reference standard for limited (a) and limited/marginal (b) health literacy skills.

Sensitivities, specificities, and positive and negative likelihood ratios with 95% CI for various responses to this question are shown in Table 1. The best sensitivity and specificity were found when patients answered “somewhat” to the aforementioned question.

Table 1
Performance of Best Screening Item for Detecting Limited and Limited/Marginal Health Literacy Skills

Of the 3 demographic items explored, educational attainment (less than high school vs high school or higher) was the only significant predictor of patients' health literacy. The AUROC for this item, however, was only 0.69 (95% CI=0.61 to 0.78) for detecting limited health literacy and 0.67 (95% CI=0.60 to 0.74) for detecting limited/marginal health literacy; both values were lower than those for the single screening question. Adding educational attainment to the health literacy screening items did not significantly (P>.10) improve the AUROC of any individual item, or combination of items, beyond what was achieved by those items alone.

DISCUSSION

We recommend using the “somewhat” response to “How confident are you filling out medical forms by yourself?” as the optimal cut point to identify patients with limited or marginal health literacy skills. In our patient population, this cut point detected 83.3% of adults with limited and 76.6% of adults with limited/marginal health literacy, with reasonable specificity. Other cut points may be preferable, however, depending on whether sensitivity or specificity is to be emphasized.
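
In practice, this recommendation amounts to flagging any patient whose answer is “somewhat” confident or less. The helper below is a minimal sketch of that rule; reading the cut point as “somewhat or any less-confident response” is our interpretation of the text above, not a verbatim rule from the article.

```python
# Response scale for "How confident are you filling out medical forms by yourself?"
CONFIDENCE_SCALE = ["extremely", "quite a bit", "somewhat", "a little bit", "not at all"]

def positive_screen(response: str) -> bool:
    """Return True when the response suggests possible limited/marginal health
    literacy, using "somewhat" (or any less-confident answer) as the cut point."""
    response = response.strip().lower()
    if response not in CONFIDENCE_SCALE:
        raise ValueError(f"Unrecognized response: {response!r}")
    return CONFIDENCE_SCALE.index(response) >= CONFIDENCE_SCALE.index("somewhat")

print(positive_screen("quite a bit"))  # False: negative screen
print(positive_screen("somewhat"))     # True: consider plain language, teach-back, etc.
```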

Consistent with Chew et al.,16 combinations of multiple questions were no more effective in identifying those with limited or marginal health literacy skills than a single question. Differing from Chew et al.,16 however, we found the question about filling out medical forms to be the most effective at identifying patients with limited health literacy, whereas Chew found “How often do you have someone help you read hospital materials?” to be most effective. That item was not as effective in identifying patients with limited health literacy skills in our sample. The reasons for these differences are not clear and point to the need for future research. Different questions may produce different results depending on yet-to-be-elucidated factors, such as the demographic characteristics of patient populations.

Our findings extend those of Chew et al.16 by reinforcing the notion that limited literacy in health care settings can be detected with a single question, and that the single question can do so better than demographic characteristics. The single question from our study also appears to be more effective in identifying patients with limited health literacy skills than other questions that have been studied previously.19,20 The single question can be administered by any of several different office staff members (e.g., receptionist, nurse) to alert the physician to the need for special communication techniques.

Our study has several limitations, however, that should be considered when interpreting the results. First, patients who enrolled in the study were not randomly selected, raising the possibility of selection bias. The extent and direction of such bias, however, cannot be determined. Second, our study was conducted at a single primary care clinic, which could limit generalization of the results. Third, Chew et al.16 used the short TOFHLA to assess patients' health literacy skills, while we used the REALM. The short TOFHLA and the REALM are strongly correlated with one another,21 so this difference in health literacy assessment should not affect our results.

We found, as did Chew, that a single question has utility in screening for limited health literacy, and that it is a more accurate screen than demographic characteristics. Use of a single question to screen for limited literacy could obviate the need for more formal health literacy assessments in clinical settings.

REFERENCES

1. National Center for Education Statistics. National Assessment of Adult Literacy: A First Look at the Literacy of America's Adults in the 21st Century. NCES Publication No. 2006470; December 2005.
2. Paasche-Orlow MK, Parker RM, Gazmararian JA, et al. The prevalence of limited health literacy. J Gen Intern Med. 2005;20:175–84.
3. Weiss BD. Epidemiology of low health literacy. In: Schwartzberg JG, VanGest JB, Wang CC, editors. Understanding Health Literacy. Chicago, IL: American Medical Association; 2005. pp. 17–39.
4. Institute of Medicine. In: Nielsen-Bohlman L, Panzer AM, Kindig DA, editors. Health Literacy: A Prescription to End Confusion. Washington, DC: National Academy Press; 2004. pp. 59–107.
5. Bass PF, Wilson JF, Griffith CH, et al. Residents' ability to identify patients with poor literacy skills. Acad Med. 2002;77:1039–41.
6. Lindau ST, Tomori C, Lyons T, et al. The association of health literacy with cervical cancer prevention knowledge and health behaviors in a multiethnic cohort of women. Am J Obstet Gynecol. 2002;186:938–43.
7. Brez SM, Taylor M. Assessing literacy for patient teaching: perspectives of adults with low literacy skills. J Adv Nurs. 1997;25:1040–7.
8. Baker DW, Parker RM, Williams MV, et al. The health care experience of patients with low literacy. Arch Fam Med. 1996;5:329–34.
9. Parikh NS, Parker RM, Nurss JR, et al. Shame and health literacy: the unspoken connection. Patient Educ Couns. 1996;27:33–9.
10. Meade CD, McKinney WP, Barnas GP. Educating patients with limited literacy skills: the effectiveness of printed and videotaped materials about colon cancer. Am J Public Health. 1994;84:119–21.
11. Weiss BD, Coyne CA. Communicating with patients who cannot read. N Engl J Med. 1997;337:272–3.
12. Weiss BD. Health Literacy: A Manual for Clinicians. Chicago, IL: American Medical Association Foundation; 2003.
13. Davis TC, Long SW, Jackson RH, et al. Rapid estimate of adult literacy in medicine: a shortened screening instrument. Fam Med. 1993;25:391–5.
14. Parker RM, Baker DW, Williams MV, et al. The test of functional health literacy in adults: a new instrument for measuring patients' literacy skills. J Gen Intern Med. 1995;10:537–41.
15. Weiss BD, Mays MZ, Martz W, et al. Quick assessment of literacy in primary care: the newest vital sign. Ann Fam Med. 2005;3:514–22.
16. Chew LD, Bradley KA, Boyko EJ. Brief questions to identify patients with inadequate health literacy. Fam Med. 2004;36:588–94.
17. Centers for Disease Control and Prevention. Behavioral Risk Factor Surveillance System. Available at: http://www.brfss.gov. Accessed May 23, 2005.
18. Browner WS, Newman TB, Cummings SR, et al. Estimating sample size and power: the nitty-gritty. In: Hulley SB, Cummings SR, Browner WS, Grady D, Hearst N, Newman TB, editors. Designing Clinical Research. 2nd ed. Philadelphia: Lippincott Williams & Wilkins; 2001. pp. 65–85.
19. Bennett IM, Robbins S, Al Shamali N, et al. Screening for low literacy among adult caregivers of pediatric patients. Fam Med. 2003;35:585–90.
20. Sanders LM, Zacur G, Haecker T, et al. Number of children's books in the home: an indicator of parent health literacy. Ambul Pediatr. 2004;4:424–8.
21. Baker DW, Williams MV, Parker RM, et al. Development of a brief test to measure functional health literacy. Patient Educ Couns. 1999;38:33–42.
