J Res Educ Eff. Author manuscript; available in PMC Jan 31, 2012.
Published in final edited form as:
J Res Educ Eff. 2011; 4(2): 118–13.
doi:  10.1080/19345747.2011.555290
PMCID: PMC3269067
NIHMSID: NIHMS337800

Relative Effectiveness of Reading Intervention Programs for Adults with Low Literacy

Abstract

To compare the efficacy of instructional programs for adult learners with basic reading skills below the seventh grade level, 300 adults were randomly assigned to one of three supplementary tutoring programs designed to strengthen decoding and fluency skills, and gains were examined for the 148 adult students who completed the program. The three intervention programs were based on or adapted from instructional programs that have been shown to benefit children with reading levels similar to those of the adult sample. Each program varied in its relative emphasis on basic decoding versus reading fluency instruction. A repeated measures MANOVA confirmed small to moderate reading gains from pre- to post-testing across a battery of targeted reading measures, but no significant relative differences across interventions. An additional 152 participants who failed to complete the intervention differed initially from those who persisted. Implications for future research and adult literacy instruction are discussed.

The 2003 National Assessment of Adult Literacy (NAAL) reported that some 30 million U.S. adults (about 14% of the adult population) scored Below Basic in Prose Literacy skills, i.e., the skills needed to search, comprehend, and use continuous texts (Kutner, Greenberg, Jin, Boyle, Hsu, & Paulsen, 2006). Adults who score Below Basic range from nonliterate in English to having only the skills needed to locate easily identifiable information in short, commonplace prose texts (Kutner et al., 2006). Below Basic adults were more likely than the general population to be African American (20 vs. 12%) or Hispanic (39 vs. 12%), to be older than 65 (26 vs. 15%), and to report multiple disabilities (21 vs. 9%). Many (44%) did not speak English before starting school in the U.S. About 45% of Below Basic adults had graduated from high school. The need for effective educational services to boost the reading proficiency of this large, and growing, population is perceived as a pressing economic and social priority (Kirsch, Braun, Yamamoto, & Sum, 2007; Miller, McCardle, & Hernandez, 2010). Adults who participate in adult education programs resemble the Below Basic group from the NAAL sample and, similarly, are a more diverse and disadvantaged population in terms of age, race/ethnicity, place of birth, and educational attainment than the U.S. population as a whole (Tamassia, Lennon, Yamamoto, & Kirsch, 2007).

While the NAAL documents the prevalence and demographics of adults scoring Below Basic, the survey provides little information about the basic skill profiles of these adults. To examine their basic reading skills, Baer, Kutner, and Sabatini (2009) administered decoding, word recognition, and oral passage reading tasks to the same random national sample of 19,714 adults who were administered the 2003 NAAL (Kutner et al., 2006). Adults who scored in the Below Basic level on the NAAL Prose Literacy measure of comprehension read significantly fewer words correctly per minute (wcpm) than did those whose reading comprehension was at the Basic, Intermediate, or Proficient level. An average rate of 60 wcpm or slower on oral reading tasks was found for 49% of adults in the Below Basic group, but for only 11% of Basic level readers and 3% of those from the higher ranges. Likewise, the percentages who read fewer than 75 wcpm were 71%, 23%, and 7% for the three levels, respectively. Thus, most adults whose prose literacy comprehension skills were in the NAAL Below Basic range were highly inefficient in carrying out basic reading skills like decoding and fluency.

In fact, studies of low-literate adults have consistently documented deficiencies in word recognition, decoding, and spelling skills (Bell & Perfetti, 1994; Bruck, 1990, 1992, 1993; Byrne & Ledez, 1983; Greenberg, Ehri, & Perin, 1997, 2002; Lukatela, Carello, Shankweiler, & Liberman, 1995; Pratt & Brady, 1988; Read & Ruyter, 1985; Sabatini, 2002). Weaknesses have also been observed in phonological awareness, vocabulary, and other cognitive-linguistic skills that are known to be related to reading abilities in children (Greenberg et al., 1997, 2002; Sabatini, 2002, 2003; Venezky, Bristow, & Sabatini, 1994). This research provides a compelling rationale for instruction to target basic reading skills.

Intervention to Strengthen Basic Reading Skills of Adult Learners

Contemporary practices in adult reading instruction have largely evolved from grassroots efforts with few resources for conducting program evaluation. Hence, relatively little is known about the effectiveness of interventions overall, about which instructional components have the greatest impact on outcomes, or about what kinds of instruction are most suitable for improving the basic reading skills of adults (Kruidenier, 2002; Venezky, Oney, Sabatini, & Jain, 1998). At present, nearly all adult education programs include instruction in text comprehension, but programs vary a great deal in the degree to which they also target improvement in word-level accuracy and efficiency or reading fluency. Based on reviews of extant research findings, the National Reading Panel (National Institute of Child Health and Human Development: NICHD, 2000) concluded that for children with limited word recognition and decoding skills, explicit phonics instruction is most effective; for children who are not fluent readers, guided reading programs are effective; and for children with comprehension difficulties, a broad array of strategies should be taught. Publications designed for the adult education sector recommend such instruction for adults (e.g., Kruidenier, 2002; McShane, 2005), but as yet there is scant evidence to support or reject these recommendations to adapt and apply instructional methods that are used with younger reader populations.

The primary goal of the present study was to investigate the fruitfulness of adapting explicit phonics and guided reading instruction for use with adults with limited word recognition skills, in order to determine whether such interventions are effective supplements to conventional literacy programs. Adult literacy instructors have, understandably, traditionally been reluctant to base instructional decisions on findings from intervention studies with samples of children. However, because the basic reading skills of low-literate adults often resemble those of children with limited word recognition and decoding levels (Fowler & Scarborough, 1993; Greenberg et al., 1997, 2002; Liberman et al., 1985; Perfetti & Marron, 1995; MacArthur, Konold, Glutting, & Alamprese, 2010; Mellard, Fall, & Woods, 2010; Nanda, Greenberg, & Morris, 2010; Sabatini, 2002, 2003; Sabatini, Shore, Sawaki, & Scarborough, 2010), it is possible that the same approaches that succeed with younger learners, whether novice readers or older children with reading difficulties, could be effective with adults who have similar levels and profiles of skill, when appropriately adapted for them. At the same time, these same sources note some differences between adult profiles and those of comparable groups of children, so it is important to investigate whether such instructional approaches are indeed effective when applied with adults.

To that end, we studied students from participating classrooms in adult education centers. Like most literacy programs nationwide, the core instructional approaches in adult literacy classrooms aim to address broad literacy development that encompasses reading comprehension, writing, speaking, listening, mathematics, and functional life skills (Tamassia et al., 2007). Classroom observations, teacher interviews, and inspection of classroom materials confirmed that although some oral reading, spelling exercises, and word analysis activities took place occasionally in some classrooms, there was little or no explicit concentration on systematically strengthening decoding, word recognition, or fluency skills. From those centers, we used screening tests to identify adults with limited basic reading skills and randomly assigned eligible participants from each classroom to one of three supplemental tutoring programs. This design represents an attempt to control for classroom differences in the content and intensity of the instructional services that are ordinarily provided, which the participants in our interventions also simultaneously received. Thus, adults in this study received two layers of instruction: their regular classroom activities plus the supplemental tutoring.1

Each of the intervention programs was based on an instructional approach with demonstrated efficacy for improving the basic reading skills of children, but was adapted to be suitable for one-on-one tutoring of adult learners. Each approach was geared toward improving word recognition, decoding, and fluency, so the programs necessarily had much in common. However, the methods for teaching these skills were dissimilar, and the relative emphasis on the two components varied considerably, reflecting the differing conceptual frameworks that guided the design of these programs.

The programs were: an adaptation of Corrective Reading (CR; Engelmann, 1999) for individual tutoring of adults; an adult-appropriate modification of Retrieval, Automaticity, Vocabulary Elaboration – Orthography (RAVE-O; Wolf, Miller & Donnelly, 2000; Wolf, Barzillai, Gottwald, Miller, Spencer, Norton, Lovett, & Morris, 2009); and a Guided Repeated Reading program designed specifically for adults (GRR; Shore, 2003). Each instructional program is described in more detail in the Methods section. The time devoted to instruction and practice in phonics and in fluency during tutoring sessions was estimated as follows: for CR, 80–90% on phonics and 10–20% on fluency; for RAVE-O, 25–35% on phonics and 65–75% on fluency; and for GRR, 10–20% or less on phonics and 80–90% or more on fluency. Given these differences, we might predict a significant interaction of program type with gains in reading subskills, such that gains would mirror the instructional emphases, i.e., the strongest gains in decoding for the CR program, the strongest gains in fluency for the GRR program, and intermediate gains on both components for the RAVE-O program. Furthermore, we might expect that as individuals show improvement in the accuracy of their word and text reading, they also might show improvement in word and text reading efficiency, typically measured as words read correctly per unit of time (Baer et al., 2009). However, those programs that made this an explicit goal of instruction, i.e., RAVE-O for word reading efficiency and GRR for text fluency, might show stronger gains on measures sensitive to these aspects of basic skills.

Growth in reading comprehension is, of course, a desired outcome of the general adult education classes that tutored participants also attended, but skills and strategies for comprehending text were not a direct, explicit target of our supplemental interventions. In research with children, growth in comprehension has been observed to occur as an indirect consequence of improvement in decoding, word recognition, and fluency skills (NICHD, 2000). Given that the adults in this study were receiving both some direct instruction in comprehension (in their regular literacy classrooms) and supplemental tutoring in basic skills, we might expect to see relatively robust gains in comprehension. However, as adults may have more experience applying compensatory literacy skills to comprehend successfully, they may or may not show significant immediate gains on reading comprehension as a result of improved decoding or fluency skills (though they should show gains because of the general literacy program).

Student Attrition and Persistence

Student attrition is a longstanding and continuing challenge for adult basic education programs. Attrition rates of 40% or higher are typical between assignment to an educational program and post-intervention testing some 50 to 100 hours later (Dirkx & Jha, 1994). It is beyond the scope of this article to summarize the research on the barriers to persistence faced by adult students and the means used by programs to improve retention (see Comings, Parella, & Soricone, 1999). Attrition is also a serious challenge for researchers who conduct randomized experiments of intervention effects, because high attrition rates prevent valid generalizations to the full population of adult basic education students. For example, in the present study, about 50% of the students dropped out of interventions before final post-testing. Thus, we interpreted our results only with respect to adults who persisted in our implemented programs.

It can be valuable, however, to analyze and describe differences between those who persist until the end of an intervention versus those who exit early (e.g., Venezky et al., 1994). These types of analyses may permit hypotheses to be generated about how individual characteristics interact with participation in intervention programs. To address this question, in this study we compared characteristics of a sample of adult students who met the initial screening criteria, but were not part of the intervention study, the sample of adults that was assigned to receive intervention but did not persist to post-testing, and the subset who persisted until the end of the intervention period.

Research Questions

By adding a supplemental tutoring program (up to 45 additional one-on-one tutoring sessions over a 15-week span) to routine adult classroom instruction (on average about 3–6 classroom hours per week), we hoped to see indications of accelerated growth in word reading and fluency skills, as evidenced by strong effect sizes. The design of this study potentially doubles adults' exposure to instruction in general, as well as targeting additional instruction on the specific basic reading subskills that they lack. Given these considerations, the following research questions were addressed:

  1. Did adults who persisted to the end of supplemental basic skills interventions show improvement in decoding accuracy and efficiency, word recognition accuracy and efficiency, reading fluency, or reading comprehension? How robust were any observed gains?
  2. Were any of the supplemental interventions relatively better than the others in improving any of these skills? Did the different instructional programs produce different gains on different reading skills?
  3. Do the skill profiles of those who complete the supplemental interventions differ from those of non-completers and other adult learners in similar programs and classrooms?

Method

Participants

Participants were adults who sought literacy services at several large adult education centers in two major cities in the mid-Atlantic and southern regions of the United States. Eligibility criteria were: (1) a score below the 7th grade equivalent range on a word recognition test given to all consenting adults; (2) if not a native English speaker, a score of “low-advanced” level or higher on the BEST Plus oral performance subtest (Center for Applied Linguistics, 2001) to ensure a sufficient level of English proficiency; (3) no uncorrected vision or hearing problems that would markedly interfere with assessment or instruction; (4) no reported history of services for mental retardation, brain injury, or severe mental health problems. A sample of 579 individuals met these criteria. Their ages ranged from 17 to 76 years (M = 36); they were 67% female, 83% African American, 9% Latino(a), 8% white. The intervention subsample (described next) showed similar demographic distributions.

Three subsamples are described in this report. Separate from the intervention study, 279 adults who met the above criteria were given the full assessment battery but no post-tests. This sample was drawn from some of the same literacy programs as the intervention sample, mostly before the intervention study commenced; henceforth, they are labeled the Non-intervention group. For the intervention study, another 300 adults were screened as eligible.2 These 300 were assigned to a supplemental intervention, with 148 completing at least 10 instructional sessions and all post-tests.3 This group is henceforth labeled the Completers. The 152 who dropped out of the intervention before completing post-testing (as well as a subset of 10 students who completed fewer than 10 instructional sessions) are henceforth labeled the Non-completers.

Assessments

Screening for eligibility

To determine initial word recognition levels, the Reading subtest of the Wide Range Achievement Test – 3 (WRAT-3; Wilkinson, 1993) was used.

Full assessment battery

All participants in the study were administered the following battery of tests. Split-half reliabilities for adult age ranges 18 to 79 appear in parentheses. From the Woodcock-Johnson III Tests of Achievement (WJ; Woodcock, McGrew, & Mather, 2001), we administered: the Reading Cluster, consisting of the Letter-Word Identification (WJWID; r = .89–.98), Word Attack (WJWAT; r = .82–.93), Passage Comprehension (WJCOMP; r = .75–.92), and Reading Fluency (WJFLU; r = .88–.93) subtests; the Oral Language Cluster, consisting of the Story Recall (WJSR; r = .84–.90), Understanding Directions (WJUDW; r = .73–.93), Picture Vocabulary (WJPV; r = .80–.92), and Oral Comprehension (WJOC; r = .77–.94) subtests; and the Spelling of Sounds subtest (r = .58–.86). From the Tests of Word Reading Efficiency (TOWRE; Torgesen, Wagner, & Rashotte, 1999), subtests requiring the speeded reading of real English words (Sight Word Efficiency) and of pseudowords (Phonemic Decoding Efficiency) were administered. Participants also took other tests and completed a structured interview regarding their educational histories, prior adult education experiences, reading habits, expectations, and other background information, but those measures will not be discussed in this paper.

Participants in the tutoring study were also administered midpoint and post-intervention assessments with the WJ Reading Cluster and the TOWRE. In this report, we discuss only pre- and post-test performance. Scores and analyses for the WJ are reported as W scores, which the technical manual describes as a Rasch scale with equal-interval units spanning all developmental levels (Woodcock & McGrew, 2001). For the TOWRE, we report raw scores, because the standard score scale for adults is based on a small sample that is not representative of the low ability range of the adults in this study.

Intervention Programs

The CR program grew out of traditional phonics instruction for treating reading disabilities and is widely used for that purpose with adolescent students. Its main focus is on strengthening and expanding the reader’s mastery of grapheme-phoneme correspondences and word recognition. This instruction is delivered in a highly systematic and sequenced way by trained tutors. CR teaches the structure of English words explicitly and systematically, providing a complete curriculum for teaching decoding and spelling, with phonemic analyses taught in relation to syllable types. A placement test is given at the beginning of the program to determine the lesson level at which students should begin. Students move from a phonological focus to more word-level practice, eventually learning to process words more quickly by recognizing patterns and reading in context. Although most (at least 90%) of the instructional time is devoted to decoding and word recognition, opportunities to gain fluency are also provided by having students read controlled (decodable) texts. Of our three programs, only CR had previously been administered to low-literate adults (Greenberg, Fredrick, Hughes, & Bunting, 2002), but no experimental study results have been published.

The second experimental program, RAVE-O, also supplements phonics instruction with fluency training, but with a stronger emphasis on the latter. This approach derives from the Double Deficit hypothesis for explaining reading disabilities in children, which posits that either of two deficits – in phonological processing or in naming speed – can seriously impede reading acquisition, with the most severe reading disabilities occurring when both deficits are present (Wolf et al., 2000; Wolf et al., 2009). The RAVE-O curriculum is designed to meet the needs of individuals with a naming speed deficit (defined by performance on rapid automatized naming tasks) or a double deficit. When combined with a systematic phonics program (in this study, an abbreviated version of CR), the needs of all three deficit subtypes are addressed. A RAVE-O intervention study of 44 children (grades 1–3) in an after-school program found significant pre- to post-test standard score gains, with 77% of learners showing improvement in word recognition, 65% in pseudoword decoding, 73% in text comprehension, 86% in speed of word reading, and 92% in speed of pseudoword decoding; effect sizes ranged from .20 to .65 (Adams & Joffe, personal communication). A study by Morris et al. (in press) of 279 second- and third-grade students enrolled in one of four intervention programs found that students who received RAVE-O outperformed students in the other interventions on measures of decoding, word reading, connected text reading, and comprehension.

The third experimental program, GRR, relies exclusively on guided repeated reading to strengthen text fluency skills, with occasional embedded phonics instruction. GRR has been shown to be effective with children and adolescents (NICHD, 2000), but has not been formally examined in adult literacy contexts. The implementation here most closely resembles that of Rasinski, Padak, Linek, and Sturtevant (1994). Typical lessons include (1) teacher modeling of oral reading with a specific, short text; (2) a shared reading of the text, with the student and teacher reading orally in unison; and (3) the student orally reading the text up to three times during the same session. Text passages were brief (under 350 words), contained predictable and rhythmic text to promote fluent reading, and were selected based on reading level, subject area, and potential interest to adult learners in topics such as family life, health, finance, and human interest (see Shore, McNeil, Sabatini, & Scarborough, 2008). Rasinski et al. (1994) found larger gains in oral reading rate for elementary classrooms that received GRR instruction than for a comparison group. Although designed for whole-class participation, GRR has been successfully delivered via individual tutoring of children (O’Shea, Sindelar, & O’Shea, 1985; Shany & Biemiller, 1995).

Procedures

Prospective participants were screened and identified as eligible between February 2003 and March 2008. From each participating adult literacy classroom, triads of eligible students were identified, then randomly assigned to one of the three intervention programs. If a student left before completing the intervention, the next available eligible student from the same classroom was assigned to the same intervention condition as the student who left, subject to scheduling and tutor availability.

Instructional sessions took place about three times per week for approximately 10–18 weeks, with the goal of completing approximately 45 sessions of 75 minutes each, although students were permitted to continue if they had not covered the entire curriculum. The mean number of completed sessions was 35 (SD = 12), ranging from 12 to 63, with about 25% of students completing more than 45 sessions. The mean completed sessions was similar across all three interventions.

A high attrition rate (50%) and the large number of classrooms (46 in total) resulted in unbalanced participation within and across classrooms. In 20 classrooms, three or more students completed the intervention to post-test; in 11 of the classrooms, only two students completed; in 15, only one student completed. Even within classrooms that had multiple completers, because of open enrollment policies and the duration of the project, students were often participating in different cycles of instruction, rather than concurrently. Fortunately, attrition and uneven participation levels were similar across all three interventions.

Tutor recruitment, training and fidelity monitoring

Tutors were required to have a Bachelor’s degree from an accredited college or university, as well as experience and comfort with technology. All were motivated individuals with an interest in supporting the learning of struggling readers, and most had some experience in reading or writing instruction, tutoring adults, and/or reading research. Tutors were assigned randomly to learn and teach one of the three one-on-one tutoring programs.

For each training program, tutor training included a one-day (5–6 hour) workshop with other new tutors; two individual follow-up meetings of about 1–2 hours each with experienced tutors or program trainers; and review and practice with the materials, role-plays, and reviews of sample sessions. Training was developed by research staff in consultation with program developers or intervention experts. Total training time, including individual review and practice, was approximately 12–18 hours. Conferences with tutors were conducted regularly during implementation of the interventions to ensure that techniques learned in training were being applied consistently.


Monitoring of treatment fidelity

All tutorial sessions were audio recorded, and trained expert peer mentors reviewed a random sample of 10% of session recordings in the first few months of tutorials to evaluate adherence to each instructional program. When a tutor received at least three consistently reliable reviews, they were categorized as expert tutors and their sessions were reviewed less frequently. Across programs, tutors were fairly consistent in their adherence to each program, with ratings of overall fidelity to the program of 73% for CR and GRR and 77% for RAVE-O based on reviews. Tutors also kept daily logs documenting the nature and duration of each activity, as well as total time; these were used to augment the recordings.

Literacy classroom observations

In 26 of the 46 classrooms, observations were made of classroom instruction using a rubric developed for this purpose. Observers recorded time spent on various activities in the following major areas: reading comprehension, vocabulary, decoding, writing, spelling, reading strategy instruction, oral reading, mechanics, grammar, reading rate, formal/summative assessment and administrative tasks. Teachers were also interviewed before and after observations and provided observers with any supplemental materials they used during class. These observations confirmed, as noted previously, that although some oral reading, spelling exercises, and word analysis activities took place occasionally in some classrooms, more typically there was little or no explicit concentration placed on strengthening decoding, word recognition, or fluency skills.

Results

To compare pre- versus post-intervention reading performance of the 148 participants who completed at least 10 sessions of tutoring, we conducted a 2 × 3 repeated measures MANOVA with time of test as the within-subject repeated measure, and intervention condition as the between-subject factor.
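As a rough illustration only (not the authors' analysis code), an equivalent way to test the same hypotheses is to run a one-way MANOVA on pre-to-post difference scores: the intercept test corresponds to the overall effect of time, and the condition test corresponds to the time × intervention interaction. The sketch below assumes a hypothetical data file and column names, and applies the log transformation to the skewed TOWRE decoding scores mentioned in the next paragraph.

```python
# Minimal sketch, not the authors' code: time and time-by-condition effects
# tested via a MANOVA on pre-to-post difference scores for the six outcomes.
# The file name and column names (wj_wid, wj_wat, ...) are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("completers_scores.csv")  # one row per completer (n = 148)

# Log-transform the highly skewed TOWRE Phonemic Decoding raw scores first.
for col in ["towre_pde_pre", "towre_pde_post"]:
    df[col] = np.log(df[col] + 1)

outcomes = ["wj_wid", "wj_wat", "wj_comp", "wj_flu", "towre_swe", "towre_pde"]
for y in outcomes:
    df[f"{y}_gain"] = df[f"{y}_post"] - df[f"{y}_pre"]  # pre-to-post change

gains = " + ".join(f"{y}_gain" for y in outcomes)
# Intercept test ~ main effect of time; condition test ~ time x intervention
# interaction (condition coded CR / RAVE-O / GRR).
maov = MANOVA.from_formula(f"{gains} ~ condition", data=df)
print(maov.mv_test())
```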

Table 1 shows descriptive statistics for pretest, posttest, and change scores for the total sample of 148 students who completed interventions, and for each intervention condition. Although raw scores for TOWRE Phonemic Decoding are reported in the table, this distribution was highly skewed, so a natural log transformation was used in the MANOVA. Performance on the test battery improved over time, F(6, 140) = 24.53, p < .001, partial eta squared = 0.51. For the total sample, significant gains were observed on each of the six subtests. There was no main effect of tutoring condition, F(6, 140) = 0.20, p = .998, nor any interaction with time, indicating that gains did not depend on which instructional program students received. Thus, all of the instructional combinations (literacy classes plus supplemental intervention) were comparably effective in helping students improve their basic reading skill scores from pre-test to post-test.

Table 1
Pre- and post-test means and standard deviations for the Corrective Reading (CR; n=48), RAVE-O (n=50), and Guided Repeated Reading (GRR; n=50) conditions and for the entire sample of intervention completers. Cohen’s d is the effect size statistic. ...

The strongest effects were observed for basic decoding (d = .46), with the mean gains in the Corrective Reading and RAVE-O programs slightly larger than those for Guided Repeated Reading, though, as noted, the differences in mean gains across interventions were not significant. In follow-up t-tests, GRR showed a significant pre- to post-test effect on reading comprehension; the other two programs did not. RAVE-O showed significant gains on both Sight Word and Phonemic Decoding Efficiency; the other two programs did not.
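For readers who want to see how effect sizes of the kind reported in Table 1 can be computed, the sketch below pairs a paired-samples t-test with a pre/post Cohen’s d. The specific d variant (here, the mean gain standardized by the pooled pre/post SD) and the example values are assumptions for illustration, not the authors' exact procedure.

```python
# Illustrative only: paired t-test and a pre/post effect size for one measure.
# Standardizing the mean gain by the pooled pre/post SD is an assumption;
# other d variants (pretest SD, SD of the differences) are also common.
import numpy as np
from scipy import stats

def prepost_effect(pre: np.ndarray, post: np.ndarray):
    t, p = stats.ttest_rel(post, pre)           # paired t-test, post vs. pre
    pooled_sd = np.sqrt((pre.std(ddof=1) ** 2 + post.std(ddof=1) ** 2) / 2)
    d = (post.mean() - pre.mean()) / pooled_sd   # Cohen's d for the gain
    return t, p, d

# Example with made-up W-score-like values:
pre = np.array([480.0, 495.0, 470.0, 505.0])
post = np.array([488.0, 499.0, 476.0, 507.0])
print(prepost_effect(pre, post))
```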

Table 2 compares pretest scores for the Non-intervention (n = 279), Non-completer (n = 150), and Completer (n = 148) samples. The Completers earned significantly lower pretest scores on all of the reading subskills. Post hoc tests showed a significant negative contrast of the Completers with the other groups on WJ Word Attack, Passage Comprehension, Reading Fluency, and TOWRE Sight Word Efficiency, though only relative to the Non-intervention group on Word ID. The Non-completers were closer to the Completers in mean TOWRE Phonemic Decoding raw score, but lower than the Non-intervention group. There were no significant differences across the groups on the WJ Oral Comprehension and Picture Vocabulary subtests; on the Understanding Directions subtest, however, the Completers had significantly lower scores than the other groups. The Completers were also, on average, six years older than the other groups.
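The Table 2 comparisons are one-way ANOVAs across the three samples followed by post hoc contrasts. As a hedged sketch only (the file name and column names are hypothetical, and Tukey's HSD is an assumption, since the authors' post hoc procedure is not specified here), such a comparison might look like this for a single pretest measure:

```python
# Sketch only: comparing pretest Word Attack scores across the three samples
# (Non-intervention, Non-completers, Completers), followed by Tukey HSD
# post hoc contrasts. File and column names are hypothetical.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("pretest_scores.csv").dropna(subset=["group", "wj_wat_pre"])

groups = [g["wj_wat_pre"] for _, g in df.groupby("group")]
f, p = stats.f_oneway(*groups)            # omnibus one-way ANOVA
print(f"F = {f:.2f}, p = {p:.3f}")

# Post hoc pairwise contrasts with Tukey's HSD
print(pairwise_tukeyhsd(df["wj_wat_pre"], df["group"]))
```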

Table 2
Mean (SD) performance on pretest for Non-intervention (n=279), Intervention Non-completer (n=151), and Intervention Completer (n=148) groups, and significance of ANOVA for group mean differences

Discussion

As a consequence of our eligibility criteria, the sample’s average ability level for basic decoding and fluency skills was substantially below age-appropriate adult levels, falling instead at the 3rd to 4th grade (8–10 year old) level (see Sabatini et al., 2010 for more details). Most of these adults had completed at least 9 to 10 years of formal schooling during childhood and adolescence. Moreover, many had participated previously in adult education programs. Nonetheless, prior instruction had not remediated their basic skills deficiencies. It also bears noting that (a) typical adult literacy classroom instruction that does not target these basic skills is not likely to produce strong or accelerated gains in basic decoding or fluency skills; and (b) an examination of norms tables for standardized tests indicates that expected levels of basic reading skills do not vary with age during adulthood. Therefore, we argue that any substantial gains observed over a 15-week period of instruction are unlikely to arise from simple maturation alone or to be produced mainly by the general instruction that participants typically received in adult literacy classrooms. Rather, for the most part, it is reasonable to attribute any observed gains largely to the supplemental tutoring that directly and intensively addressed the participants’ deficiencies in decoding and fluency. From that vantage point, we discuss each of our research questions.

1. Did adults who persisted to the end of supplemental basic skills interventions show improvement in decoding accuracy and efficiency, word recognition accuracy and efficiency, reading fluency, or reading comprehension? How robust were any observed gains?

The data support an affirmative answer to the question of whether gains in basic skills occurred from pre- to post-intervention assessments. With respect to decoding (i.e., Word Attack), the effect size was moderate overall (d = .46), and with respect to Reading Fluency the effect size was between small and moderate (d = .34). However, gains were more modest (d = .19 to .21) for Word Identification, Passage Comprehension, Sight Word Efficiency, and Phonemic Decoding Efficiency. These gains reflect the combined benefits of the supplemental tutoring and the regular literacy program instruction. Because students were observed to read aloud and discuss texts in the literacy classrooms, as noted earlier, classroom instruction is a potential contributor to the reading fluency gains, but is much less likely to underlie the findings for Word Attack.

The modest effect size for Passage Comprehension does not provide strong evidence that supplemental skills instruction in combination with typical classroom activities could accelerate growth in reading comprehension. As noted at the outset, it may be that adults have compensatory skills for boosting their comprehension levels above what might be predicted solely from their lexical-level skills. If so, the adults’ relatively elevated initial comprehension levels may moderate any direct impact of improved basic-level skills on comprehension performance. We note also that the practical size of the gains on the TOWRE word-level efficiency measures was about 1–2 raw score units, which reflects about 1–2 more words read correctly in 45 seconds.

2. Were any of the supplemental interventions relatively better than the others in improving any of these skills? Did the different instructional programs produce different gains on different reading skills?

The three tutoring conditions differed in their relative emphasis on decoding versus fluency, and differed also in the reading materials and instructional activities used. However, the data do not support the hypothesis that any of these programs was superior to any other overall, nor that gains in particular skill areas were tied to the instructional program. The MANOVA results for intervention type were not significant. An examination of the effect sizes (Table 1) reveals some apparent differences across programs. For example, with respect to decoding, results were in the direction consistent with the relative emphases of each program, with stronger gains on decoding in the CR and RAVE-O interventions, which targeted these skills, and stronger gains in word reading efficiency for RAVE-O, which targets that skill. For reading fluency, however, the GRR program did not even show a trend toward stronger effect sizes than CR, and RAVE-O showed the smallest gain. It must be emphasized that none of these apparent differences between interventions in the locus of effects is supported by inferential statistical tests, so there is no evidence that these effect-size patterns are real and replicable rather than chance occurrences. Given the unevenness of the randomization with respect to completers per classroom, the large attrition rates, and the modest effect sizes overall, it is probably most prudent not to make too much of these trends in the absence of converging evidence for their reliability.

3. Do the skill profiles of those who complete the supplemental interventions differ from those of non-completers and other adult learners in similar programs and classrooms?

Adults who persisted to completion of the interventions were, on average, older and had poorer basic reading skills, especially Word Attack, than the Non-completer and Non-intervention samples. The Non-completers were significantly lower in Word Attack than the Non-intervention group, but otherwise their mean scores resembled those of the Non-intervention group. Notably, the three groups did not differ in vocabulary and oral comprehension, although the Completers had significantly lower scores on the Understanding Directions portion of the Oral Language cluster. The issue of which pretest variables predict response to intervention will be explored in subsequent reports.

Limitations

The absence of a control group clearly prevents us from differentiating gains that are attributable to the supplemental interventions from gains that might have resulted from routine classroom instruction. As noted, we had hypothesized that the combination of the interventions in addition to the routine classroom instruction would provide an added boost to learning, evident in strong effect sizes. This was not found.

A second limitation concerns the randomization of students within classrooms as it interacts with the overall attrition rates. As noted in the Procedures and Discussion, the duration of the project (3 years of 15-week intervention cycles), the large number of participating classrooms (46 across the 3 years) with small pools of eligible students (about 3–6 students per semester), and high attrition rates (about 50%) led to some difficulties in meeting the assumptions of this design. One can still claim a version of randomness, or at least arbitrariness, in how a particular student’s classroom experience was related to the supplemental program to which he or she was assigned. Each intervention experienced roughly the same rate of attrition.

Another limitation is that, with 148 completers of 300 starters, the power to detect differences across interventions was reduced. The program we implemented was designed to provide as much intensity as is possible in adult literacy program settings, had strict attendance policies, utilized highly-trained tutors with fidelity monitoring controls, was supplementary to regular classroom literacy services, and was set to a limited duration of a 15-week period (approximately 45 hours of instructional contact) to maximize completion. This approximated the intensity of a college-level course. Even with these efforts, attrition was high (about 50%) and there was considerable cost incurred to implement this design.

Finally, the composition of the sample constitutes another limitation. The sample was primarily an urban, African American population with a wide age range, inclusive of many potentially interacting health and disability factors. This sample reflects the composition of many literacy programs and a significant segment of the adult literacy population in need. Nonetheless, it limits generalization to other groups and raises questions about specific interactions that may have restricted results.

Conclusions and Some Implications

The findings from this study provide some insights into the effectiveness of interventions targeting basic reading skills, as well as into how basic skills approaches may relate to overall reading comprehension ability in adults. The positive news is that each intervention resulted in significant gains in basic reading skills for students who completed the programs. However, there were no significant intervention group by component skill interactions, although there was a nonsignificant trend for the programs targeting decoding to show larger decoding gains, in line with the instructional focus of each program.

With respect to the magnitude of gains in basic skills and reading comprehension, despite our best attempt to intensify instructional exposure by supplementing a typical adult program classroom, and to target that added instruction through one-to-one tutoring in basic skills foundational to reading comprehension, we observed only modest effect sizes. Moreover, although the strongest effect sizes were for basic decoding, the adults were also near the very bottom of the ability scale, and one cannot rule out that some of the gains reflect regression to the mean. Thus, while we have no reason to doubt that the adults improved their basic decoding, the magnitude of this effect may be somewhat overestimated.

As noted, this sample was selected for supplemental services because of low word recognition skills (average near the 3rd grade level), and most participating adults had poorer decoding skills than one would predict from their observed word identification levels (see Sabatini et al., 2010 for more details). Improving the decoding of printed words and the fluency of word recognition and text reading has rarely been a central focus of instruction in traditional adult education programs. While many of the adults showed significant growth across multiple outcome measures, many also showed minimal growth despite the programs, and even those who showed significant progress were far from the mastery levels seen in skilled adult readers (or even skilled sixth graders). The results suggest that, for many, mastering the kind of automatic, efficient decoding skills of skilled adult readers would require many more hours of instruction and practice. Perhaps, once a critical number of instructional hours is surpassed, we would see evidence of a steeper growth pattern, but given the current structural constraints of most adult literacy programs, that is an atypical occurrence.

Given that the adults in this study continued to attend typical literacy classes in addition to the intervention, and therefore were exposed to some instruction in vocabulary and comprehension, it is difficult to attribute the comprehension gains, which were small, directly to the supplemental programs. It seems likely, rather, that the adults in the study had already learned some strategies to compensate for weak decoding skills, poor word recognition, and low vocabulary. Consequently, initial gains in those skills may not show up as marked improvement in comprehension scores until accumulated knowledge from further learning and practice has had time to take hold. Indeed, successful compensation of this sort has been shown in dyslexics with high verbal ability (Swanson, 2009). Future studies that monitor growth over time in both basic reading skills and integrated, overall comprehension performance will be needed to gain a better understanding of what instruction (and how much) works best for whom.

With respect to intervention implementation, we chose the individual tutoring approach because it is a viable option for service provision in adult programs, anticipating that rigorous tutor training could be designed and implemented should we find positive impacts with the highly trained tutors employed in this study. This option is not ruled out here. The training and fidelity of implementation achieved with our professional tutors can be encapsulated in training packages and monitoring techniques that could be deployed with volunteer tutors. To be sure, there is a cost associated with training volunteer tutors, but this is only a marginal increase in costs for programs that already offer tutoring services. On a large enough scale, studies of the effectiveness of such programs could be undertaken, though no doubt attrition would be an issue, as it has been throughout the history of adult literacy intervention studies.

In this study, as in others in this special issue, the effects are disappointingly weak and do not provide a clear roadmap for improving instruction. Nevertheless, significant gains in basic skills were made, and further research is needed to understand the mechanisms by which instruction can be optimized to meet the needs of adult learners.

Acknowledgments

This research was funded by a grant (HD 043774) from the National Institute for Literacy, Office of Vocational Education, and Eunice Kennedy Shriver National Institute of Child Health and Human Development. We are grateful to the National Center for Education Statistics and to Kelly Bruce and Jen Lentini for assistance with data collection and management. We also thank Maryanne Wolf, Katherine Donnelly Adams, and Terry Joffe of Tufts University for the use of, and expertise in adapting, the RAVE-O program; Timothy Rasinski for expert consultation regarding the Guided Repeated Reading program; and Marcy Stein, University of Washington, for assistance with training and materials for the SRA Corrective Reading Program. Preliminary analyses of data from this project were previously reported in presentations to the American Educational Research Association and the Society for the Scientific Study of Reading. Any opinions expressed in this publication are those of the authors and not necessarily of Educational Testing Service.

Footnotes

1Complexities in implementing this design and their implications are discussed in the Procedures and Limitations sections.

2Another 28 adults participated in a pilot of a fourth, non-randomized intervention, which was discontinued; their data have been excluded from this report. Another 43 adults were post-tested without receiving any intervention; they are included in the Non-intervention group, but we do not report their post-test scores here. Finally, two students were dropped from analyses because of missing pre- or post-test data.

3Ten sessions represent 12 to 15 hours of direct instruction with the intervention materials over 3 to 5 weeks (in addition to regular classroom instruction), enough coverage of a sequence of decoding lessons or repeated readings to plausibly produce a gain at post-test. In addition, the total number of sessions was not a significant correlate of gains, nor did it change the results when entered as a covariate in the MANOVA model.

Contributor Information

John P. Sabatini, Educational Testing Service, Princeton, NJ.

Jane Shore, Educational Testing Service, Princeton, NJ.

Steven Holtzman, Educational Testing Service, Princeton, NJ.

Hollis S. Scarborough, Haskins Laboratories, New Haven, CT & Kennedy-Krieger Institute, Baltimore, MD.

References

  • Baer J, Kutner M, Sabatini J. Basic reading skills and the literacy of America’s least literate adults: Results from the 2003 National Assessment of Adult Literacy (NAAL) supplemental studies (NCES 2009-481) Washington, DC: U. S. Department of Education, Institute for Education Sciences, National Center for Education Statistics; 2009.
  • Bell LC, Perfetti CA. Reading skill: Some adult comparisons. Journal of Educational Psychology. 1994;86:244–255.
  • Bruck M. Word-recognition skills of adults with childhood diagnoses of dyslexia. Developmental Psychology. 1990;26:439–454.
  • Bruck M. Persistence of dyslexics' phonological awareness deficits. Developmental Psychology. 1992;28:874–886.
  • Bruck M. Component spelling skills of college students with childhood diagnoses of dyslexia. Learning Disabilities Quarterly. 1993;16:171–184.
  • Byrne B, Ledez J. Phonological awareness in reading-disabled adults. Australian Journal of Psychology. 1983;35:185–197.
  • Center for Applied Linguistics. BEST Plus. Washington, DC: Author; 2001.
  • Comings J, Parella A, Soricone L. Persistence among adult basic education students in pre-GED classes (NCSALL Report No. 12) Cambridge, MA: National Center for the Study of Adult Learning and Literacy, Harvard Graduate School of Education; 1999.
  • CTB McGraw-Hill. Tests of Adult Basic Education 9 & 10. Monterey, CA: Author; 2003.
  • Dirkx JM, Jha LR. Completion and attrition in adult basic education: A test of two pragmatic prediction models. Adult Education Quarterly. 1994;45:269–285.
  • Engelmann S. Corrective Reading Program. Columbus, OH: SRA/McGraw-Hill; 1999.
  • Fowler AE, Scarborough HS. Should reading-disabled adults be distinguished from other adults seeking literacy instruction? (Technical Report TR93-7) Philadelphia: University of Pennsylvania, National Center on Adult Literacy; 1993.
  • Goodglass H, Kaplan E. Boston Diagnostic Aphasia Examination (BDAE) Philadelphia: Lea and Febiger; 1983.
  • Greenberg D, Ehri LC, Perin D. Are word-reading processes the same or different in adult literacy students and third–fifth graders matched for reading level? Journal of Educational Psychology. 1997;89:262–275.
  • Greenberg D, Ehri LC, Perin D. Do adult literacy students make the same word-reading and spelling errors as children matched for word-reading age? Scientific Studies of Reading. 2002;6:221–243.
  • Greenberg D, Fredrick LD, Hughes TA, Bunting CJ. Implementation issues in a reading program for low reading adults. Journal of Adolescent & Adult Literacy. 2002;45:626–632.
  • Kirsch I, Braun H, Yamamoto K, Sum A. America's perfect storm: Three forces changing our nation's future. Princeton, NJ: Educational Testing Service; 2007.
  • Kruidenier J. Research-based principles for adult basic education reading instruction. Washington, DC: National Institute for Literacy; 2002.
  • Kutner M, Greenberg E, Jin Y, Boyle B, Hsu Y, Paulsen C. Literacy in everyday life: Results from the 2003 National Assessment of Adult Literacy (NCES 2006-477) Washington, DC: U. S. Department of Education, Institute for Education Sciences, National Center for Education Statistics; 2006.
  • Lukatela K, Carello C, Shankweiler D, Liberman IY. Phonological awareness in illiterates: Observations from Serbo-Croatian. Applied Psycholinguistics. 1995;16:463–487.
  • MacArthur C, Konold TR, Glutting JJ, Alamprese JA. Reading component skills of learners in adult basic education. Journal of Learning Disabilities. 2010;43:108–121.
  • McShane S. Applying research in reading instruction for adults: First steps for teachers. Washington, DC: National Center for Family Literacy, National Institute for Literacy; 2005.
  • Mellard DF, Fall E, Woods KL. A path analysis of reading comprehension for adults with low literacy. Journal of Learning Disabilities. 2010;43:154–165.
  • Miller B, McCardle P, Hernandez R. Advances and remaining challenges in adult literacy research. Journal of Learning Disabilities. 2010;43:101–107.
  • Morris RD, Lovett MW, Wolf M, Sevcik RA, Steinbach KA, Frijters JC, Shapiro MB. Multiple-component remediation for developmental reading disabilities: IQ, socioeconomic status, and race as factors in remedial outcome. Journal of Learning Disabilities. (in press).
  • Nanda AO, Greenberg D, Morris R. Modeling child-based theoretical reading constructs with struggling adult readers. Journal of Learning Disabilities. 2010;43:139–153.
  • National Institute of Child Health and Human Development. Report of the National Reading Panel. Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction (NIH Publication No. 00-4769) Washington, DC: U. S. Department of Education; 2000.
  • O’Shea LJ, Sindelar PT, O’Shea DJ. The effects of repeated readings and attentional cues on reading fluency and comprehension. Journal of Reading Behavior. 1985;17:129–142.
  • Perfetti CA, Marron MA. Learning to read: Literacy acquisition by children and adults (Report No. TR95-07) Philadelphia, PA: National Center on Adult Literacy; 1995.
  • Pratt AC, Brady SB. Relation of phonological awareness to reading disability. Journal of Educational Psychology. 1988;80:319–323.
  • Rasinski TV, Padak ND, Linek WL, Sturtevant E. Effects of fluency development on urban second-grade readers. Journal of Educational Research. 1994;87:158–165.
  • Read C, Ruyter L. Reading and spelling skills in adults of low literacy. Remedial and Special Education. 1985;6:43–52.
  • Sabatini JP. Efficiency in word reading of adults: Ability group comparisons. Scientific Studies of Reading. 2002;6:267–298.
  • Sabatini JP. Word reading processes in adult learners. In: Assink EMH, Sandra D, editors. Reading complex words: Cross-language studies. London: Kluwer Academic; 2003. pp. 265–294.
  • Sabatini JP, Sawaki Y, Shore J, Scarborough H. Relationships among reading skills of low-literate adults. Journal of Learning Disabilities. 2010;43:122–138.
  • Salthouse TA. Working memory as a processing resource in cognitive aging. Developmental Review. 1990;10:101–124.
  • Salthouse TA. Theoretical perspectives on cognitive aging. Hillsdale, NJ: Lawrence Erlbaum Associates; 1991.
  • Shany MT, Biemiller A. Assisted reading practice: Effects on performance for poor readers in grades 3 and 4. Reading Research Quarterly. 1995;30:382–395.
  • Shore J. Adult Guided Repeated Reading Program lesson scripts, template and readings. 2003. Unpublished intervention materials.
  • Shore J, McNeil ANK, Sabatini J, Scarborough H. Creating and implementing an effective evidence-based fluency program for adult literacy learners. Paper presented at the annual meeting of the American Educational Research Association; New York, NY; March 2008.
  • Swanson HL. Assessment of adults with learning disabilities. In: Taymans JM, editor. Learning to achieve: A review of the research literature on serving adults with disabilities. Washington DC: National Institute for Literacy; 2009. pp. 15–72.
  • Tamassia C, Lennon M, Yamamoto K, Kirsch IS. Adult literacy in America: A first look at results from the Adult Education Program and Learner Surveys. Princeton, NJ: Educational Testing Service; 2007.
  • Torgesen JK, Wagner RK, Rashotte CA. Tests of Word Reading Efficiency (TOWRE) Austin, TX: Pro-Ed; 1999.
  • Venezky RL, Bristow PS, Sabatini JP. Measuring change in adult literacy programs: Enduring issues and a few answers. Educational Assessment. 1994;2:101–131.
  • Venezky RL, Oney B, Sabatini JP, Jain R. Teaching adults to read and write: A research synthesis. Washington, DC: U. S. Department of Education, Planning and Evaluation Service; 1998.
  • Wilkinson G. Wide Range Achievement Test 3 (WRAT-3) Wilmington, DE: Wide Range, Inc.; 1993.
  • Wolf M, Barzillai M, Gottwald S, Miller L, Spencer K, Norton E, Lovett M, Morris R. The RAVE-O intervention: Connecting neuroscience to the classroom. Mind, Brain, and Education. 2009;3:84–93.
  • Wolf M, Miller L, Donnelly K. Retrieval, Automaticity, Vocabulary Elaboration, Orthography (RAVE-O): A comprehensive, fluency-based reading intervention program. Journal of Learning Disabilities. 2000;33:375–386.
  • Woodcock RW, McGrew KS, Mather N. Woodcock-Johnson III Tests of Achievement. Itasca, IL: Riverside Publishing; 2001.
