Am Econ J Appl Econ. Author manuscript; available in PMC Jul 20, 2011.
Published in final edited form as:
Am Econ J Appl Econ. Jul 1, 2010; 2(3): 129–157.
doi:  10.1257/app.2.3.129
PMCID: PMC3140225
NIHMSID: NIHMS265716

Why Have College Completion Rates Declined? An Analysis of Changing Student Preparation and Collegiate Resources

Abstract

Rising college enrollment over the last quarter century has not been met with a proportional increase in college completion. Comparing the high school classes of 1972 and 1992, we show declines in college completion rates have been most pronounced for men who first enroll in less selective public universities and community colleges. We decompose the decline into the components due to changes in preparedness of entering students and due to changes in collegiate characteristics, including type of institution and resources per student. While both factors play some role, the supply-side characteristics are most important in explaining changes in college completion. (JEL I23)

Partly as a consequence of the substantial increase in the college wage premium since 1980, a much higher proportion of high school graduates enter college today than did so a quarter century ago. However, the rise in the proportion of high school graduates who attend college has not been met by a commensurate increase in the fraction who become college graduates. Among students who enter college, the share of those who complete college is lower today than in the 1970s. This trend is illustrated in Figure 1, which shows that among US-born 25-year-olds the likelihood of obtaining a bachelor’s degree (BA), conditional on some college participation, dropped from over 45 percent in 1970 to under 40 percent in 1990. In this paper, we explore what accounts for this limited expansion in the supply of college-educated workers to the labor force, despite the relatively high level of the college wage premium.

FIGURE 1
Trends in the Ratio of BA Recipients to Those with Some College or More among 25-Year-Olds

We analyze data from the National Longitudinal Study of the High School Class of 1972 (NLS72) and the National Education Longitudinal Study of 1988 (NELS:88) to compare the class of 1972 and class of 1992 high school cohorts and show that eight-year college completion rates declined nationally across these cohorts by 4.6 percentage points, from 50.5 percent to 45.9 percent. These changes were not uniform across different sectors of higher education, however. Students matriculating at lower ranked public universities and community colleges experienced the full decline in completion rates. In contrast, completion rates increased at public universities ranked in the top 50 as well as at private colleges and universities. Despite greater increases in college-going among women, the drop in completion rates has been almost entirely concentrated among men.

One potentially compelling explanation for the observed drop in completion rates is a compositional shift in the preparation of students attending college. Perhaps due to increasing returns to a BA degree, more students with weaker preparation are being induced to attend college in more recent cohorts than were in earlier cohorts. The resulting compositional shift could lead to a lower proportion of students who graduate from college, as the more weakly prepared students drop out. If such a change in the composition of college students contributes to the decline in college completion, we would expect completion rates in those sectors likely to draw the more marginal students—such as two-year colleges and less-selective public schools—to decline the most.

However, in addition to a compositional shift among the college-bound, increased college enrollment generated shifts in the kind of colleges students attend and in the resources available within those institutions. We document a marked reduction in institutional resources in the sectors that experienced declining completion rates. Reductions in resources per student at the institutional level may limit course offerings and student support and can lower the rate at which students are able to complete the requirements for a baccalaureate degree. Such institutional level declines in resources per student can be caused either by reductions in state funding or increases in the number of students a college serves at a given budget level. In a higher education market dominated by public and nonprofit institutions with different levels of selectivity, a given demand shift or reduction in state funding likely will lead to greater stratification of resources across the sectors of higher education. To the extent that institutional resources influence students’ likelihood of college success, these changes could contribute to the national and within-sector trends in completion rates that we document.

We decompose the observed changes in completion rates into the component due to compositional changes in the preparation of entering students and the component due to shifts in college characteristics. While both factors play a role, we find school characteristics are more important, particularly among four-year schools. This decomposition is complicated substantially by our use of a logit model to handle the binary nature of college completion. We construct counterfactual completion rate measures by simulating what completion rates would be in the NELS:88 sample if the distribution of high school test scores, college student-teacher ratios, or initial college type were shifted in a rank-order neutral manner to be identical to the distributions in the NLS72 sample. These simulations allow us to isolate the contribution of each factor to the total observed change in completion rates.

We find that shifts in the preparation of students entering college (as measured by math test scores) account for about one-third of the observed decline in completion rates; decreases in institutional resources (as measured by increases in college student-faculty ratios) account for about one-quarter of the observed completion rate decline; and sectoral shifts in where students first attend college account for three-quarters of the observed national decline. Together, these factors can explain more than the total reduction in completion rates that has occurred since the 1970s. Other student characteristics, such as parental educational attainment, have shifted in ways that would serve to increase completion rates across surveys.

For students beginning college at a lower ranked public university, declines in academic preparation were too small to explain more than a trivial portion of the completion rate drop, but increases in student-faculty ratios account for over three-quarters of the total observed completion rate reduction. For community college students, conventionally measured academic resources—either expenditures per student or student-faculty ratios—explain little of the completion rate decline, while declines in college preparation account for almost 90 percent of the total drop in completion rates.

The key finding of this analysis is that the supply-side of higher education plays an important role in explaining changes in student outcomes. The higher education literature has focused on how student preparation for college translates into college success. Our analysis suggests that, at least for changing completion rates, student preparation is only a partial explanation. Characteristics of the supply-side of the market have a substantial influence on student success in college.

In Section I of this paper, we describe the data we use and show the changes in completion rates found in the data, both nationally and across sectors of higher education. In Section II, we outline a theoretical framework for understanding why these observed changes have occurred and describe our empirical framework for distinguishing between these potential explanations. In Section III, we present the results from our decomposition analysis, and in Section IV we offer our conclusions.

I. Data and Descriptive Statistics

A. Measuring Completion Rates

The data for our analysis come from NLS72 and NELS:88. We use these datasets rather than larger datasets with more cohorts of observation, such as the US census, because the NLS72 and NELS:88 contain information on which college each student attended, the timing of attendance, and graduation outcomes that are necessary to accurately measure cohort-specific completion rates.1 These surveys draw from nationally representative cohorts of high school and middle school students, respectively, and track the progress of students longitudinally through collegiate and early employment experiences. We define the completion rate as the proportion of students who attend college within two years of cohort high school graduation2 and obtain a BA within eight years of cohort high school graduation.3

B. Trends in College Enrollment and Completion Rates

The longitudinal surveys show that college enrollment rates and college completion rates have not moved in the same direction in recent decades in the United States. Between the two cohorts there was a substantial increase in college participation, which occurred against the background of little appreciable change in high school graduation rates or measured secondary school achievement.4

As illustrated in Table 1, the overall participation rate increased from 48.4 percent to 70.7 percent. Although the size of the high school graduation cohort was smaller in 1992 than in 1972, the increased attendance rate led to growth in the number of students attending college, from 1.4 million to 1.8 million. By gender, the increase is greater for women, rising from 46.5 percent to 73.5 percent, than it is for men, rising from 50.4 percent to 68.0 percent.

TABLE 1
Changes over Time in Type of First Institution for All Attendees and For Those Obtaining a BA within Eight Years of Cohort High School Graduation

To capture the differentiation among collegiate experiences, we split post-secondary institutions into five sectors (which are reflected in Table 1): non-top 50 public four-year schools, top 50 public four-year schools, less selective private four-year schools, highly selective private four-year schools, and community colleges.5 Table 1 illustrates that the overall increase in college participation has not been divided evenly among different types of institutions in the US postsecondary market. The largest change over this time has been the 12.5 percentage point increase in the likelihood of starting at a community college.6 These shifts were larger for men than for women.

Overall, increases in the rate of college enrollment have been accompanied by decreases in completion rates, as shown in panel A of Table 2. Similar to the enrollment shifts, these changes were not uniform across sectors. The BA completion rate among those starting at public two-year institutions slipped to 17.6 percent from 20.2 percent. Similarly, the completion rate fell by 4.9 percentage points between the NLS72 and NELS:88 cohorts among students beginning college in public non-top 50 institutions. In contrast, completion rates increased in the top 50 public schools as well as in private schools. The divergence in completion rates by type of institution suggests that national trends fail to record the stratification in outcomes that occurred both within the four-year sector and between the four-year sector and community colleges since the 1970s. The erosion in completion rates within the non-top 50 public sector suggests that the national trends are not being driven solely by the enrollment shift to community colleges, where completion rates are lower and many students may not intend to earn a BA.

TABLE 2
Completion Rates (in Percent) within Eight Years of Cohort High School Graduation for the Full Sample and by First Institution

While including community college students in our aggregate measure magnifies the total reduction in completion rates, we believe this inclusion is appropriate for two reasons. First, although it is possible that some students enter community colleges for sub-baccalaureate vocational training, the majority (69.9 percent) of community college entrants in the NELS:88 cohort intended to complete a BA.7 Second, because attendance at a community college provides the important option to continue to the completion of a four-year degree, these students are significant in the determination of cohort college completion rates.

Panel B of Table 2 contains completion rate changes across cohorts separately by gender. In general, men’s completion rates declined substantially relative to women’s, with the rate for males dropping by 8.5 percentage points and the rate for women declining by a statistically insignificant 0.6 percentage points. Similar trends are apparent across sectors. For men attending public non-top 50 universities and community colleges, completion rates dropped substantially. For women, while completion rates fell slightly in the non-top 50 public and community college sectors, rates rose dramatically in the top 50 public and private sectors.

C. Measuring Student Attributes

One of the major advantages of the NLS72 and NELS:88 data is the rich set of background information available about respondents. The student attributes we use throughout this analysis are high school math test percentile,8 father’s education level, mother’s education level, real parental income levels, gender, and race. Our analysis uses math test percentiles as our primary indicators of college preparation because, under reasonable assumptions, we are able to compare these measures over time and among students attending different high schools.9 Because there has been little change in the overall level of test scores on the nationally representative National Assessment of Educational Progress (NAEP) over our period of observation, we argue that the academic preparation of high school graduates did not change appreciably across cohorts. Conditional on math test percentiles, we find other tests, such as reading, have no predictive power for completion rates. Decomposition results with the inclusion of reading test percentiles in the analysis are unchanged from those reported below, and we therefore use only math test percentiles for parsimony in the empirical analysis.

Mother’s and father’s education is split into five levels: less than high school, high school diploma (including a GED), some college, college graduate, and any postcollegiate attainment. For the measurement of family income, we are interested in assessing parents’ ability to finance college, so the variable of interest is the real income level, not one’s place in the income distribution. We align income in the two surveys using the Consumer Price Index (CPI) into six comparable income blocks representing responses to categorical questions about parental income.

The NLS72 and NELS:88 datasets contain a significant amount of missing information on test scores, parental education, and parental income brought about by item nonresponse. While a small share of observations is missing all of these variables (in NLS72 and NELS:88, respectively, 0.5 percent and 0.6 percent have no information on any of these variables), a substantial number of cases are missing either test scores, parental education or parental income.10 We use multiple imputation methods (Donald B. Rubin 1987) on the sample of all high school graduates to impute missing values using other observable characteristics of each individual.11 The Technical Appendix to this paper in Web Appendix A contains detailed information on the construction of our dataset.
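The combining step of a multiple imputation procedure in the spirit of Rubin (1987) can be illustrated with a minimal sketch. The data, the deliberately simple univariate imputation model, and the number of imputations here are all invented for illustration; the paper's actual imputation conditions on the full set of observable characteristics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical covariate with ~20 percent item nonresponse.
n, m = 500, 5
x = rng.normal(50, 10, n)
missing = rng.random(n) < 0.2
x_obs = np.where(missing, np.nan, x)

# Fit a simple normal model to the observed values.
mu_hat = np.nanmean(x_obs)
sd_hat = np.nanstd(x_obs)

estimates, variances = [], []
for _ in range(m):
    x_imp = x_obs.copy()
    # Draw each missing value from the estimated distribution, so that
    # imputation uncertainty is propagated across the m completed datasets.
    x_imp[missing] = rng.normal(mu_hat, sd_hat, missing.sum())
    estimates.append(x_imp.mean())
    variances.append(x_imp.var(ddof=1) / n)

# Rubin's combining rules: pooled point estimate and total variance.
q_bar = np.mean(estimates)              # combined estimate
u_bar = np.mean(variances)              # within-imputation variance
b = np.var(estimates, ddof=1)           # between-imputation variance
total_var = u_bar + (1 + 1 / m) * b
print(round(q_bar, 1), round(total_var, 4))
```

The total variance exceeds the average within-imputation variance, reflecting the extra uncertainty introduced by the missing data.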

Table 3 presents the changes in background characteristics and academic preparation of college attendees and graduates across the NLS72 and NELS:88 surveys. Notably, there was a sizable shift in the proportion of students entering college from lower on the high school test score distribution. Average math test percentiles among those enrolling in college dropped from 62.5 to 58.0 across cohorts, and the percent of college attendees from the highest math test quartile dropped from 40.7 to 32.8, while the percent from the lowest math test quartile increased from 11.2 to 15.6. However, these declines occurred only in the two-year sector. Math test percentiles remained constant or increased in all four-year sectors across the two surveys. This fact foreshadows a key finding of this paper: that changing student college preparation, as measured by math test percentiles, cannot account for the decline in completion rates in the public non-top 50 sector.

TABLE 3
Means of Selected NLS72 and NELS:88 Variables

Figure 2 shows the likelihood of college entry and college completion for high school graduates, as well as the completion rate conditional on college entry. The figure clearly indicates that the likelihood of attending college rose across the board. In 1972, there were many high school graduates who appear to have been well qualified to attend college, but who did not enroll. By 1992, this was a rarer phenomenon. At the same time, the figure makes clear that the greatest enrollment shifts occurred for those who appear less well prepared, and those with relatively low academic achievement were unlikely to complete the BA degree in either cohort. As a result, the average math test percentile actually rises among BA recipients across the two cohorts, from 70.7 to 71.7. In the bottom quartile of the test score distribution, the likelihood of attending college increases from 21.7 percent to 44.0 percent, which is consistent with a larger percentage of less-prepared students attending college in the later cohort in order to take advantage of the rising returns to education. However, among this group, only 5.6 percent in the initial period of observation receive a BA, and this percent falls further to 5.0 for the later cohort. Focusing on college attendees, the likelihood of completing a BA declined from 25.8 percent to 11.4 percent across cohorts for those in the bottom quartile of math test scores. The strong link between our measure of precollegiate achievement and the likelihood of college completion shown in Figure 2 suggests that changes in the distribution of preparation among college entrants are a potentially important factor in understanding the change in the aggregate college completion rate.

FIGURE 2
Collegiate Attainment by Precollegiate Achievement

The gender differences in college completion shown in panel B of Table 2 correspond to a dramatic decline in the skill gap between men and women across cohorts, as measured by precollegiate math test percentiles. Figure 3 presents cumulative distributions of math test percentiles for the full sample of college attendees and separately by college sector. Overall, the distribution of math percentiles observed for women shifted toward the distribution observed for men. While this convergence occurred in all sectors, both the male and female distributions shifted upward dramatically in the selective private sector, implying a larger overall upward shift for women due to the pre-existing differences in the NLS72 survey. These distributional shifts can be attributed to the fact that the math skill gap narrowed between men and women over this period and the fact that in the 1970s, many high-skilled women did not attend college (Claudia Goldin, Lawrence F. Katz, and Ilyana Kuziemko 2006). By the 1990s, this attendance gap had disappeared. However, the fact that more high-skilled women were drawn into higher education across cohorts suggests college preparation can be only a partial explanation for the aggregate completion rate decline shown in the previous section.

FIGURE 3
Cumulative Math Test Percentile Distributions by Initial School Type and Gender

The evidence presented in Table 3 and Figures 2 and 3 suggests that reductions in college preparation likely play a limited role in explaining declining college completion rates, particularly for the four-year, non-top 50 public sector in which math test percentiles remained constant across cohorts. In addition, shifts in other precollegiate characteristics favor increased college completion. For both college attendees and graduates, Table 3 shows parental education became more favorable.12 Echoing the general increase in educational attainment during the postwar period, the proportion of college attendees whose father (mother) had at least a BA increased by 7.8 percentage points (13.5 percentage points) for all college attendees. Such shifts implicitly go in the “wrong direction” to explain the observed changes in completion rates.

D. Trends in Institutional Resources

A sizable shift in college-level resources has occurred differentially across sectors of higher education. Overall, resources per student either increased or held constant on a number of widely reported scales from the 1970s to the 1990s. To illustrate, constant dollar current expenditures per student at public colleges and universities have risen from $14,610 in 1970–1971, to $17,606 in 1990–1991, to $22,559 in 2000–2001 (Thomas D. Snyder, Alexandra G. Tan, and Charlene M. Hoffman 2006, table 339). Such measures miss two fundamental changes occurring over this period. First, the stratification in resources across institutions increased, with large increases in resources at private and selective public institutions combined with stagnation and decline in resources at other institutions.13 Second, changes in spending per student combine changes in the price of educational inputs with changes in quantities.14

Table 4 presents the distribution of student-faculty ratios by type of institution, calculated from the HEGIS/IPEDS institutional surveys, along with the median level of instructional expenditures per student. Large cross-sector differences already existed in 1972, but over time student-faculty ratios grew much larger in the non-top 50 public and two-year sectors while increasing only modestly or decreasing in the private sector and the top 50 public universities. For example, while mean student-faculty ratios fell among the top 50 public sector institutions and the highly selective private institutions, they increased by 14 percent in the public non-top 50 sector and by 40 percent in community colleges. Even larger relative increases occurred at below-median institutions. While median real instructional expenditures decreased overall, from $4,716 per student to $4,339 per student, this aggregate comprises an increase in resources within selective private universities and decreases at community colleges and public universities outside the top tier. Our analysis below focuses on student-faculty ratios as our measure of institutional resources because we believe that they more accurately reflect resources available to students.

TABLE 4
Undergraduate Student-Faculty Ratios and Expenditures per Student by Initial School Type

It is natural to ask whether the shift toward attendance at community colleges across cohorts reflects adjustment to changing student characteristics or plausibly exogenous changes in the supply-side of the market. We estimate a multinomial logit model of initial school choice on student background characteristics from NELS:88 and then predict initial sector of attendance using these coefficients and the NLS72 data. These predictions represent the counterfactual distribution of students by institution that would have been expected to occur had the later cohort had the same distribution of observable characteristics as the earlier cohort. Our calculations show changes in student observables explain virtually none of the observed cross-cohort shifts in initial school choice. These simulations suggest changes in the supply-side of the market are important in explaining shifts in which students entered the postsecondary system. Because the two-year sector is relatively elastic in supply response (Bound and Turner 2007), enrollment demand is most likely to be accommodated at these institutions in periods of expansion.
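The prediction step of this exercise can be sketched as follows. The multinomial logit coefficients and the covariate distributions below are made up for illustration, standing in for the NELS:88 estimates and the two survey samples; the calculation shows how coefficients fit on the later cohort are applied to the earlier cohort's characteristics to obtain counterfactual sector shares.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical coefficients from a multinomial logit of initial sector
# (0 = community college, 1 = non-top-50 public, 2 = other four-year)
# on a constant and a math-percentile covariate, as if fit on NELS:88.
B = np.array([[0.0, 0.00],      # base category, normalized to zero
              [-1.0, 0.02],
              [-2.0, 0.03]]).T   # shape (2, 3): [constant; math slope]

def sector_shares(math_pct):
    X = np.column_stack([np.ones_like(math_pct), math_pct])
    return softmax(X @ B).mean(axis=0)   # average predicted probabilities

# Covariate draws standing in for the later (NELS-like) and earlier
# (NLS72-like) cohorts; the earlier cohort has stronger entrants here.
math_nels = rng.uniform(0, 100, 2000)
math_nls = rng.uniform(20, 100, 2000)

shares_nels = sector_shares(math_nels)
# Counterfactual: NELS:88 coefficients, NLS72 characteristics.
shares_counterfactual = sector_shares(math_nls)
print(shares_nels.round(3), shares_counterfactual.round(3))
```

Comparing the counterfactual shares to the observed NELS:88 shares indicates how much of the sectoral shift is attributable to changed student observables.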

II. Empirical Strategy to Decompose the Changes in Completion Rates

The theoretical underpinnings of our empirical analysis are based on predicted student responses to increases in the returns to a college education, which is one of the dominant trends in higher education over the past 30 years. As the returns to a college degree increase, more students are induced to attend college and more of the students who already would have attended college will complete in order to realize the return on their investment. Thus, holding student skills and college quality constant, an increase in the return to college should increase the likelihood of college completion. However, many of the newer students induced to attend college will be less prepared for collegiate success, and these marginal students are likely to have a lower probability of completing college than the inframarginal students. These two shifts have opposite effects on completion rates. Which effect will dominate is an empirical question.

Changes in the supply side of the market, including which sector of higher education college students attend and the resources per student at these colleges, also can affect collegiate attainment. Increasing college attendance itself may further change the resources to which students are exposed while enrolled. Because the higher education market is dominated by public colleges and universities with budget systems that do not adjust fully to demand increases, higher enrollment leads to a reduction in resources per student, with increased scarcity (selectivity) at those institutions that are the most resource-intensive. Nevertheless, even when a general demand increase reduces expected resources per student and the likelihood of attending a top-tier institution, uncertainty about actual collegiate resources and about the likelihood of collegiate success can induce students to enroll in a low-resource institution to take advantage of high potential returns (i.e., the option value of schooling),15 only to find that it is difficult to enroll in required classes and that student support services are limited. Declining student resources over time, particularly in non-top 50 public universities and community colleges, therefore may play a role in explaining the trends in completion rates we observe in the data.

It is important to emphasize that the demand-side and supply-side explanations described above are not mutually exclusive: less-prepared students sort into the most elastic sectors of higher education, which tend to have the fewest resources. In essence, increased demand for college crowds more students, and more less-prepared students, into community colleges and non-top 50 public universities. Demand increases therefore not only lower the resources per student at these institutions but also increase the dispersion in resources across the sectors of higher education. The implication of this sorting process is that one should observe the largest declines in completion rates in the most elastic sectors, which is indeed the pattern prevalent in the data.

Because changes in student preparation for college and institutional resources are closely linked through the returns to education and the college admissions system, it is not clear from examining the aggregate data how important individual demand-side or supply-side factors are in explaining changing college completion rates. Our empirical objective in the paper is to disentangle the individual contribution of these factors. We seek to decompose the total change into the part due to shifts in student preparation and the part due to college-level factors. In order to isolate the independent effects of student preparation and college resource changes on changes in completion rates, we run logit models of the probability of completion on student math test percentiles, collegiate student-faculty ratios, student demographic characteristics, and initial college-type fixed effects.

The central difficulty in undertaking the proposed decompositions stems from the nonlinearity of the logistic function. In a linear framework, the decomposition is straightforward. The change in the mean of an explanatory variable multiplied by the estimated coefficient on that variable from the outcome regression represents the partial effect that the change in this characteristic has on the total change in the outcome. For example, if one estimated our completion equation using a linear probability model, the effect of changes in any observable (xj) on the observed change in completion rates could be calculated simply by multiplying the estimated partial effect of xj on completion rates by the change in the mean of xj across surveys.
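In the linear case the accounting is a one-line calculation. The sketch below uses illustrative, made-up LPM coefficients and cohort means (not the paper's estimates) to show the mechanics: each covariate's contribution is its coefficient times the change in its mean.

```python
import numpy as np

# Illustrative (made-up) LPM coefficients: completion probability on
# math percentile and the student-faculty ratio at the initial college.
beta = {"math_pct": 0.006, "stud_fac_ratio": -0.008}

# Made-up cohort means for the two surveys.
means_nls72 = {"math_pct": 62.5, "stud_fac_ratio": 16.0}
means_nels88 = {"math_pct": 58.0, "stud_fac_ratio": 19.0}

# Contribution of each covariate to the cross-cohort change in the
# completion rate: coefficient times the change in the covariate's mean.
contrib = {k: beta[k] * (means_nels88[k] - means_nls72[k]) for k in beta}
total = sum(contrib.values())
print({k: round(v, 4) for k, v in contrib.items()}, round(total, 4))
```

Under these invented numbers, falling math percentiles and rising student-faculty ratios each push the completion rate down, and the contributions sum to the total predicted change, which is exactly the additivity that the nonlinear logit model does not deliver.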

In a nonlinear model, such as a logit, the exercise becomes both conceptually and computationally more difficult because the estimated partial effects vary across the sample. For individuals who are estimated to be either very likely or very unlikely to finish college, changes in xj will have little effect on the changes in the probability of completion. On the other hand, for individuals whom our estimates suggest have a roughly 50 percent chance of finishing college, changes in the observables will have a potentially large effect on the estimated likelihood of college completion. We therefore cannot simply multiply the estimated marginal effect by the difference in means across samples, as the simulated effects of a discrete change in an explanatory variable will depend on what part of the distribution shifted.

Simulating the change in all of the explanatory variables is straightforward; one method is to use estimates from the NELS:88 data to simulate predicted probabilities using the NLS72 data. The difference between the average of these predicted probabilities and the sample fraction that complete college in the NELS:88 data represents an estimate of the effect of compositional changes of all our observables on completion rates.16 Note that nonparametric reweighting using the characteristics of students from the NLS72 survey would produce a nearly identical counterfactual completion rate in which the proportion of students with a given characteristic or a given set of characteristics has not changed between the two surveys.17 We employ the former method to simulate the effect of a change in all observables on the change in completion rates, but we obtain similar results when we use nonparametric reweighting techniques.
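Under hypothetical logit coefficients and synthetic covariate draws, this simulation amounts to averaging fitted completion probabilities over the earlier cohort's characteristics. The coefficients and distributions below are invented for illustration, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(2)

def logistic(z):
    return 1 / (1 + np.exp(-z))

# Hypothetical logit coefficients, as if estimated on the NELS:88 sample:
# completion on a constant and the math test percentile.
b0, b1 = -2.5, 0.04

# Covariate draws standing in for the two samples; the earlier
# (NLS72-like) cohort has stronger entrants in this illustration.
math_nels = rng.uniform(0, 100, 5000)
math_nls = rng.uniform(20, 100, 5000)

# Fitted completion rate: NELS:88 coefficients, NELS:88 covariates.
rate_nels = logistic(b0 + b1 * math_nels).mean()
# Counterfactual: same coefficients, NLS72 covariate distribution.
rate_counterfactual = logistic(b0 + b1 * math_nls).mean()

# The gap estimates the effect of compositional change in the observables.
composition_effect = rate_counterfactual - rate_nels
print(round(rate_nels, 3), round(rate_counterfactual, 3))
```

Because the counterfactual averages predicted probabilities observation by observation, it respects the logit's varying marginal effects rather than applying a single marginal effect to the change in means.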

Beyond measuring the total effect of compositional changes, we want to estimate the effect of changes in individual explanatory variables. To this end, we want to predict the completion rate under the counterfactual assumption that test scores followed the distribution of the early cohort while other covariates maintained the later distribution. Sergio Firpo, Fortin, and Lemieux (2007) note the general challenge in dividing the composition effect into the role of each covariate for distributional measures beyond the mean. Focused on distributional measures in the context of the structure of earnings, the authors develop a general method for doing this decomposition using influence functions. In our context of a dichotomous dependent variable, we pursue an approach focused on estimation of the counterfactual for any variable using a matching estimator.

It will be easiest to explain our approach in the context of a specific example. We are interested in using our estimates to simulate the effect that a change in the distribution of college preparation, proxied by the change in math test percentiles among college goers, had on completion rates. To do this simulation, we construct counterfactual variables by assigning to each observation in the NELS:88 sample a math test percentile from the NLS72 sample that corresponds to the same relative rank. The individual with the highest percentile score in NELS:88 is assigned the top value from NLS72, the second-ranked in each survey are matched, and so on (with ties broken randomly). Done without replacement, this matching algorithm reproduces the NLS72 math test percentile distribution in the NELS:88 sample, holding relative ranking constant across the surveys. This method ensures we are not randomly assigning high (low) math test values to individuals who would be predicted to have low (high) math test scores based on the other characteristics we observe about respondents. We then use the parameters from our estimated logit model of completion on the NELS:88 sample and calculate counterfactual completion rates in which math test scores have changed in a rank-neutral manner, but other characteristics have remained the same. While the above example focuses on math test distribution changes, it is straightforward to use this methodology to generate counterfactual completion rates for a shift in any of our independent variables.18
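The rank-preserving reassignment can be sketched directly. The percentile draws below are synthetic, and the two samples are given equal size for simplicity (the text's matching is done without replacement across samples of possibly different sizes).

```python
import numpy as np

rng = np.random.default_rng(3)

def rank_neutral_swap(target, donor):
    """Assign donor values to target observations by relative rank: the
    highest-ranked target unit receives the highest donor value, and so
    on.  This reproduces the donor distribution in the target sample
    while holding each target observation's relative rank constant."""
    assert len(target) == len(donor)
    out = np.empty_like(donor, dtype=float)
    # Random tie-breaking, as in the text, via a tiny jitter before ranking.
    order = np.argsort(target + rng.uniform(0, 1e-9, len(target)))
    out[order] = np.sort(donor)
    return out

# Stand-ins for math percentiles in the NELS:88 and NLS72 samples.
math_nels = rng.uniform(0, 100, 1000)
math_nls = rng.uniform(20, 100, 1000)

math_counterfactual = rank_neutral_swap(math_nels, math_nls)

# The counterfactual sample now carries the NLS72 distribution...
assert np.allclose(np.sort(math_counterfactual), np.sort(math_nls))
# ...while each NELS:88 observation keeps its relative rank.
r1 = np.argsort(np.argsort(math_nels))
r2 = np.argsort(np.argsort(math_counterfactual))
print(bool((r1 == r2).all()))
```

Feeding `math_counterfactual` (in place of the observed scores) through the estimated logit then yields the counterfactual completion rate described in the text.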

The validity of our counterfactual calculations (e.g., what would the college completion rate for those attending college in the 1990s have been had they been as academically prepared for college as those who attended in the 1970s?) depends crucially on the assumption that the cross-sectional association between background characteristics and college outcomes reflects a causal relationship not seriously influenced by confounding factors.19 For example, we simulate completion rates under a counterfactual distribution of test scores. For this simulation to accurately represent the counterfactual, it must be the case that the cross-sectional relationship between test scores and the likelihood of college completion reflects the impact of precollegiate academic preparation on this outcome. Regardless of whether the simulation calculation produces the “true” counterfactual, the results present a clear accounting framework for assessing the descriptive impact of the change in the composition of students and the institutions they attend on collegiate attainment.

A related identification issue associated with our decomposition analysis is that math test percentiles and student-faculty ratios must be accurate proxies of student preparation and institutional resources, respectively. A particular concern is that both test scores and student-faculty measures are imperfect proxies for the constructs in which we are interested. While the general effect of such errors in measurement is to bias downward the effects estimated in the simulations, we are unable to assess the relative adequacy of these measures. Nevertheless, in the case where the distribution of test scores does not change across cohorts, while the distribution of resources changes markedly, we can be confident that changes in preparation are not central in the explanation. Moreover, within sector, the correlation between test scores and student-faculty measures is low. This supports the interpretation that institutional resource effects are not simply reflecting the effects of unmeasured “ability” generated by the correlation with the residual part of academic preparation not captured by our math test measure. To further address this concern, we run state-level regressions of college completion rates from the US Census on the size of the college-age population. As argued by Bound and Turner (2007), the college-age population serves as an instrument for collegiate resources because demand shocks are unlikely to be fully accounted for by the public state budgeting system. We use this instrument to determine whether completion rates within states vary systematically with these demand shocks, providing further evidence of the importance of collegiate resources in determining college completion rates.

III. Empirical Analysis of Changes in Completion Rates

A. Results from Completion Logits

The coefficients from the completion logits estimated for the NLS72 sample and the NELS:88 sample are shown in Table 5 for the full sample and separately by initial school type.20 Overall, the coefficients shown in Table 5 have the expected signs. Higher student-faculty ratios and lower math test percentiles lower the likelihood of completion.21 This pattern is evident within sectors as well. We find consistent evidence that increases in student-faculty ratios reduce completion, both for the full sample and within sectors. However, similar to Stange (2009), we find little evidence that measured collegiate resources affect the likelihood of completion in the community college sector. This result could be due to the fact that within-sector variation in institutional resources matters little in determining the likelihood of obtaining a BA for community college students. More plausibly, perhaps, our resource measures do a particularly poor job measuring resources devoted to the two-year college student potentially bound for a four-year school. Some of the most resource-intensive programs at community colleges are likely to be vocational programs of short duration. In addition, student-faculty ratios are a noisier resource measure in community colleges than in the other sectors of higher education because of the greater incidence of students attending less than full-time and faculty employed on an adjunct basis.

TABLE 5
Completion Logits by Survey and Initial School Type

The initial school type indicators in the first two columns of Table 5 show that even conditional on student-faculty ratios, there is a significantly higher likelihood of completion at top 50 public and selective private schools and a significantly lower likelihood of completion at community colleges.22 These results suggest there are unobserved aspects of college quality across sectors not being proxied for by our institutional resource measure that are important in explaining college completion.

B. Simulation Results for the Full Sample

Results from the decomposition analysis are shown in Table 6.23 The observed cohort completion rates are presented in the first two rows, followed by their difference in the third row. The fourth row shows the difference between the observed NELS:88 completion rate and the counterfactual completion rate calculated using NELS:88 completion logit coefficients and all observables from the 1972 cohort. Overall, the covariates in our analysis explain the entire decline in the completion rate.

TABLE 6
Logit Simulation Decompositions based on NELS:88 Estimates of College Completion by Type of Institution

In order to distinguish the effects of changes in variables like college preparation from the effects of changes in supply-side variables like student-faculty ratios, we use the decomposition methodology discussed in Section II. The subsequent rows of Table 6 show the predicted change in completion rates based on these univariate simulations. For the change due to math test percentiles, we calculate the difference between the observed NELS:88 completion rate and the simulated completion rate that would have prevailed if the observations in NELS:88 were to possess the same math test percentile distribution as the NLS72 cohort, holding all other covariates constant. This simulation leads to a completion rate 1.6 percentage points higher than the observed completion rate in the NELS:88 sample. At the same time, shifting other individual characteristics (primarily parents’ education) back to their 1972 levels would lower completion rates by a comparable amount. Conducting the same exercise for student-faculty ratios produces a completion rate 1.1 percentage points higher than the observed NELS:88 completion rates, while changing just institutional type produces a completion rate 3.5 percentage points higher than observed. These results underscore the importance of supply-side factors in explaining completion rate declines. Shifts in where students enter the postsecondary system and changes in student-faculty ratios together explain the entire observed decline in completion rates across surveys. If we did not account for other student background characteristics shifting in ways that would suggest an increase in completion rates, observed declines in student preparation and supply-side factors would slightly over-explain the total observed decline.
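In computational terms, each univariate simulation amounts to taking the fitted NELS:88 logit, swapping one column of the design matrix for its counterfactual distribution, and averaging the predicted probabilities. A minimal sketch of this step (our own notation, not the authors' code; `beta` is assumed to hold the intercept followed by the slope coefficients):

```python
import numpy as np

def logistic(z):
    """Standard logistic CDF."""
    return 1.0 / (1.0 + np.exp(-z))

def counterfactual_rate(X, beta, col, counterfactual_values):
    """Mean predicted completion probability from a fitted logit after
    replacing column `col` of the design matrix (without intercept)
    with its counterfactual distribution.  `beta[0]` is the intercept."""
    Xc = X.copy()
    Xc[:, col] = counterfactual_values
    design = np.column_stack([np.ones(len(Xc)), Xc])
    return logistic(design @ beta).mean()
```

The simulated effect of, say, the change in math test percentiles is then the difference between the observed NELS:88 completion rate and `counterfactual_rate` evaluated at the rank-matched NLS72 percentile distribution.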

C. Simulation Results by Initial School Type

Results from our decomposition analysis by initial school type also are shown in Table 6. For students beginning college at non-top 50 public universities, the changes in math test percentiles explain none of the observed completion rate decline. This result occurs despite the fact that we find a sizable positive effect of math tests on completion in this sector, as shown in Table 5. However, as Figure 3 illustrates, the math test percentiles of incoming students at non-top 50 public institutions have changed negligibly across surveys. Student preparation for college, as measured by math test percentiles, has not changed enough to explain the completion rate decline for non-top 50 ranked public university students.24 Instead, we find that the large increase in student-faculty ratios reported in Table 4 can explain 81.6 percent of the total observed drop in completion rates. At least for students in the non-top 50 public sector, supply-side shifts are the dominant explanation for why completion rates have declined across surveys.

For students at community colleges, variation in student-faculty ratios has little explanatory power, while observed declines in student preparation are notable. Reductions in the math test percentile of incoming students can explain 88 percent of the 2.5 percentage point decline in this sector. These results are consistent with the dramatic expansion of the community college sector across cohorts and the fact that the less prepared students induced to attend college in the later cohort are predominantly entering the postsecondary system through community colleges.

At private universities and the top-tier public universities, the results are quite different. Neither observable student characteristics nor institutional resource measures are particularly powerful in explaining the quite prominent increases in college completion rates. This result is not surprising—as discussed in Section II, increasing returns to education should raise completion rates unless they are coupled with resource declines or increases in attendance among less academically prepared students, neither of which occurred in these sectors.

D. Simulation Results by Gender

One of the striking results shown in panel B of Table 2 is the large difference in completion rate changes across genders. Our estimates show a cohort effect in college completion for women, with women in the later cohort completing college at much higher rates than their peers from the earlier cohort in most sectors of higher education. A compelling and plausible explanation for this shift is that labor market opportunities and the associated returns to college completion for women changed over this period, with women in the later cohort much more likely to expect extended labor force participation (Goldin, Katz, and Kuziemko 2006). These labor market changes are powerful demand-side factors affecting college completion rates.

Table 7 shows results from our decomposition analysis done separately by gender, using the same logit coefficients as in Table 6, but generating within-gender counterfactual variable distributions. These decompositions allow us to account separately for the importance of the differences in the change in college preparation across genders and the differences between men and women in how they sort into the sectors of higher education across cohorts. Furthermore, because most of the decline in completion rates has been among men, it is important to determine whether we can explain the decline among the more affected group. The results for men, shown in panel A, are similar to those in Table 6: while declines in math test percentiles can explain about one-quarter of the observed completion rate drop, it is predominantly the shift across institutions, combined with higher student-faculty ratios, that is responsible for the aggregate decline. Taken together, these two supply-side factors explain about 55 percent of the completion rate drop for men, and our demand-side and supply-side factors together explain about 80 percent. For the public non-top 50 sample, these factors can account for 43.8 percent of the 9.6 percentage point decline, with student-faculty ratio increases acting as the driving force behind these results. In the community college sector, it is predominantly the decline in math test percentiles that can explain the overall completion rate drop, accounting for 69 percent of the observed reduction in completion. Overall, we can explain 82 percent of the total completion rate decline with our demand- and supply-side factors in this sector.

TABLE 7
Logit Simulation Decompositions Based on NELS:88 Estimates of College Completion by Type of Institution and Gender

For women, as shown in panel B of Table 7, the total effect of supply-side factors and academic preparation is to overpredict the observed completion rate decline in the full sample as well as in the non-top 50 public and community college sectors. In the public top 50 and private sectors, women experienced an increase in completion rates, but none of this increase is predicted by test percentile and student-faculty ratio changes. Particularly at top-tier institutions, the collegiate attainment of women conditional on measured academic achievement improved markedly between cohorts.

Overall, the closing of the gender gap in college preparation and the increasing likelihood that men and women attend similar colleges with similar resources explain some of the gender difference in the change in completion rates. However, as implied by the dramatic change in the coefficient on the male dummy in our completion rate logits (see Table 5), most of the relative change is not explained by any of the explanatory variables in our models. Changes in women’s expectations about the future potentially explain this residual (Goldin, Katz, and Kuziemko 2006).

The increased participation of women in higher education between the 1972 and 1992 cohorts plausibly served to exacerbate supply constraints in the higher education market, particularly in the four-year sector. If not for the disproportionate increase in the enrollment of women, we expect that the distribution of men by type of initial enrollment would have reflected greater representation at four-year institutions with relatively high completion rates and lower enrollment at community colleges, as the four-year sector is much less enrollment-elastic than the two-year sector. To frame this idea, we use the numbers in Tables 1 and 2 to calculate what the male completion rate would have been for the 1992 cohort had the overall enrollment shares remained constant at their 1992 levels (column 4 of Table 1), but had the gender ratios within each category stayed at their 1972 levels. This simulation serves to increase the share of men attending private colleges, in general, and selective private colleges, in particular. Holding sector completion rates constant, we calculate that male completion rates for the 1992 cohort would have been three percentage points higher under this alternative regime. Thus, relatively inelastic supply in the most resource-intensive sectors of higher education served to generate crowd-out in the distribution of enrollment choices of men, which magnified the decline in completion rates for this group.
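The back-of-the-envelope calculation in the preceding paragraph is a simple reweighting: hold each sector's 1992 overall enrollment share and completion rate fixed, but replace the 1992 within-sector male share with its 1972 value. The sketch below illustrates the mechanics with hypothetical placeholder numbers, not the actual values from Tables 1 and 2:

```python
# Hypothetical placeholder values (NOT the paper's Table 1/2 figures):
# per sector: (overall enrollment share '92, male share '72, male completion rate '92)
sectors = {
    "top 50 public": (0.10, 0.55, 0.75),
    "other public":  (0.40, 0.52, 0.50),
    "private":       (0.20, 0.56, 0.80),
    "community":     (0.30, 0.50, 0.25),
}

def male_completion_rate(table):
    """Counterfactual male completion rate: overall sector shares times
    within-sector male shares give male enrollment weights, which then
    weight the sector-specific male completion rates."""
    male_enroll = {s: share * male for s, (share, male, _) in table.items()}
    total = sum(male_enroll.values())
    return sum(male_enroll[s] * table[s][2] for s in table) / total
```

Comparing this counterfactual rate with the analogous calculation using observed 1992 gender ratios gives the crowd-out effect the text quantifies at about three percentage points.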

E. State-Level Evidence of the Effect of Collegiate Resources on College Completion

Since student-faculty ratios and high school math test percentiles may be imperfect proxies for institutional resources and college preparation, respectively, we present supplemental evidence on the link between changes in collegiate resources and changes in completion rates. Because states are the governmental level of control for public universities, we exploit within-state changes in the college-age population that generate exogenous variation in the level of public higher education subsidies per student. Absence of full adjustment of public subsidies to student demand shocks means that relatively large cohorts face diluted resources per student at state-run institutions in what Bound and Turner (2007) describe as “cohort crowding.”

Using data from the 1940–1975 birth cohorts in the 2000 US census, we regress the log of the state and birth cohort share of the population with at least some college attaining a BA degree on log birth cohort size, including state and year fixed effects. The estimated coefficient on log cohort size provides a reduced form estimate of the elasticity of college completion with respect to cohort size. The basic approach parallels Bound and Turner (2007), though it is distinguished by its focus on completion rather than college attendance.
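The specification can be sketched as an OLS regression with state and cohort dummies. The example below runs on illustrative simulated data (the census sample, controls, and weights described in the text are not reproduced here), recovering a planted elasticity of −0.15:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated state-by-birth-cohort panel (illustrative, not census data)
n_states, n_cohorts = 5, 8
state = np.repeat(np.arange(n_states), n_cohorts)
cohort = np.tile(np.arange(n_cohorts), n_states)
log_size = rng.normal(0, 0.3, n_states * n_cohorts)   # log cohort size

true_elasticity = -0.15
log_share = (true_elasticity * log_size
             + 0.10 * state            # state fixed effects
             + 0.02 * cohort           # cohort fixed effects
             + rng.normal(0, 0.01, n_states * n_cohorts))

# Design matrix: log cohort size plus state and cohort dummies
X = np.column_stack([
    log_size,
    np.eye(n_states)[state],            # full set of state dummies
    np.eye(n_cohorts)[cohort][:, 1:],   # cohort dummies, one dropped
])
beta, *_ = np.linalg.lstsq(X, log_share, rcond=None)
elasticity = beta[0]   # reduced-form elasticity of completion w.r.t. cohort size
```

Because the state dummies span the intercept, one cohort dummy is dropped to keep the design matrix full rank; the coefficient on log cohort size is the elasticity of interest.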

Our findings, presented in Table 8, show that within-state increases in cohort size of 10 percent lead to declines of between 1.12 percent and 1.85 percent in the share of BA recipients among those starting college. The separate outcomes for men and women, shown in the final two rows of Table 8, are similar in magnitude, though slightly more pronounced for men than for women. Our results support the hypothesis that increases in the number of students attempting to enroll in colleges and universities, particularly in public institutions, reduce both the rate of college entry and the rate of completion conditional on college entry. Overall, the results using the census data provide further evidence of the empirical relevance of supply-side constraints in determining completion rates.25 Taken together with the results from the logit simulations, these estimates suggest that incomplete institutional adjustments to growth in the number of students pursuing a college degree may foster increased stratification, with an increasingly smaller portion of the student body receiving an increasingly larger fraction of the resources.

TABLE 8
State-Level Estimates of the Effect of Crowding on Completion Rates, 1940–1975 Birth Cohorts Observed in the 2000 US Census

IV. Conclusion

Focusing on the inter-cohort comparison of college completion rates between the NLS72 (the high school class of 1972) and NELS:88 (the high school class of 1992) cohorts, we find that declines in precollegiate preparation and changes in the distribution of supply-side options in the higher education market—reflecting both institutional type and resources within institutions—explain the substantial decline in college completion rates, particularly among men. By examining the type of college at which individuals begin their postsecondary careers, we show the decrease in degree completion is largely concentrated among students beginning at non-top 50 public universities and two-year colleges. As such, the progression from college enrollment to BA receipt over the last three decades has become much more stratified as the differences in resources per student have grown both between sectors and within sectors.

We find that increased enrollment among students from lower in the precollegiate math test score distribution, which occurred solely at two-year schools, can explain about 33 percent of the overall drop in college completion rates. The drop in completion rates has not occurred equally across gender groups—we find large decreases among males but not females. That the college enrollment rate has increased more for women than for men, yet declines in college completion have been smaller for women than men, suggests changes in completion rates cannot be due solely to increases in enrollment by marginal students.

Our analysis focuses on the supply-side of higher education, which has not received much attention in previous literature. Changes in resources per student, as measured by student-faculty ratios at the institution where a student begins college, account for about one-fourth of the observed aggregate decline in the college completion rate. Similarly, we find the shift in the distribution of students’ initial college type, largely the shift toward community colleges, explains roughly three-fourths of the observed decrease in completion rates.

We argue one reason for the importance of the initial institution type in explaining reductions in completion probabilities is the increased stratification across sectors that has occurred over the time period covered by our analysis. This increased stratification in resources is likely a response to demand shocks combined with increased market integration, which has produced more differentiation and led to declines in resources per student outside the selective public and private universities where rationing occurs through selective admissions. Thus, while “access” or initial college enrollment has increased dramatically over the past three decades, many of the new students drawn to higher education (likely to take advantage of the increased returns to a BA) are attending institutions with fewer resources and are not graduating. The mechanisms by which this is occurring, however, deserve more attention in future research.

That decreases in college completion rates are concentrated among students attending public colleges and universities outside the most selective few suggests a need for more attention to the budgets of these institutions from state appropriations and tuition revenues. These institutions may face tradeoffs between fulfilling an open access mission by increasing enrollment at low tuition with reduced resources per student and either raising tuition, which may reduce “access,” or limiting enrollment in order to increase resources per student. In drawing attention to changes in the composition of students as well as the supply-side of the market for higher education as explanations for declining completion rates, this analysis suggests that improving the understanding of the factors determining the level of collegiate attainment has substantial implications for the expected trend in the college wage premium and long-run economic growth.

Acknowledgments

We would like to thank Paul Courant, Arline Geronimus, Harry Holzer, Caroline Hoxby, Tom Kane, and Jeff Smith for comments on an earlier draft of this paper, and we are grateful to Charlie Brown, John DiNardo, Justin McCrary, and Kevin Stange for helpful discussions. We also thank Jesse Gregory and Andrew Winerman for providing helpful research support. We have benefited from comments of seminar participants at NBER, the Society of Labor Economics, the Harris School of Public Policy, the Brookings Institution, Stanford University, and the University of Michigan. We would like to acknowledge funding during various stages of this project from the National Science Foundation [SES 1320-0351575], National Institute of Child Health and Human Development [T32 HD007339], the Andrew W. Mellon Foundation, and the Searle Freedom Trust. Much of this work was completed while Bound was a fellow at the Center for Advanced Study in the Behavioral Sciences and Lovenheim was a fellow at the Stanford Institute for Economic Policy Research.

Footnotes

To comment on this article in the online discussion forum, or to view additional materials, visit the articles page at http://www.aeaweb.org/articles.php?doi=10.1257/app.2.3.129.

1Note that trends in college completion rates are inherently difficult to calculate. First, prior to the 1990s, most national surveys asked about educational attainment in terms of completed years of schooling, where completing four years of college is only an imperfect proxy for BA degree attainment. Second, distinguishing immigrants from those born in the United States is important in time series analysis as the number of immigrants with at least a college degree has increased dramatically in the last 15 years.

2Cohort high school graduation is June 1972 for NLS72 respondents and June 1992 for NELS:88 respondents. See the Technical Appendix in Web Appendix A for a detailed discussion of the NLS72 and NELS:88 datasets used in this analysis. Restricting the analysis to those who attend college within two years of cohort high school graduation has little effect on the results. In NLS72, 11.1 percent of college attendees delayed entry by more than two years, and in NELS:88, 8.8 percent delayed entry by more than two years. Changes in the timing of first entry thus cannot explain the shifts over time in completion rates between the two surveys.

3Note that there may be some further closure in aggregate college completion rates measured with a longer lag from high school graduation. Bound, Lovenheim, and Turner (2007) show that while there has been a substantial elongation of time to degree that has occurred between these two surveys, much of this change is shifts within the eight-year window of observation; the proportion of eventual college degree recipients receiving their degrees within eight years does not appear to have changed appreciably. Using the 2003 National Survey of College Graduates, which allows us to examine year of degree by high school cohort, we find the share of eventual degree recipients finishing within eight years holds nearly constant between 0.83 and 0.85 for the high school classes of 1960 to 1979. Focusing on more recent cohorts (and, hence, observations with more truncation), we find that in the 1972 high school graduating cohort, 92.3 percent of those finishing within 12 years had finished in 8 years, with a figure of 92.4 percent for the 1988 cohort.

4Tabulations from the October Current Population Survey (CPS) show 81.0 percent of 19-year-olds had a high school degree in 1972 and 81.8 percent had a high school diploma in 1992. James J. Heckman and Paul A. Lafontaine (2010) use a variety of data sources and show relatively stable high school graduation rates over the time period of our analysis as well.

5We employ the rankings assembled by US News and World Report in 2005 to classify schools into one of the five divisions. The top 50 public schools are all public colleges and universities ranked in the top 50 in that year, and the highly selective private schools are composed of the top 65-ranked private universities and the top 50-ranked liberal arts colleges (plus the US Armed Services Academies). These schools are listed in Web Appendix A. Other metrics, such as resources per student or selectivity in undergraduate admissions, give similar results. While these divisions are admittedly somewhat arbitrary, on the whole, they capture the differences across the different types of post-secondary schools. The US News & World Report rankings of institutions are highly correlated over time, and there are few changes across the large groupings we use to categorize schools. Thus, our use of the 2005 rankings when we are studying earlier periods will not affect our results.

6All references to two-year schools and community colleges refer to public institutions only. We exclude private two-year schools as they often are professional schools with little emphasis on eventual BA completion. In the NLS72 cohort, 1.7 percent attended a private two-year school, and 1.1 percent of the NELS:88 sample attended such an institution.

7Note that aspirations for degree completion among community college students who are recent high school graduates are likely to be somewhat higher than measures for the overall population of community college students, which includes a high fraction of older and nontraditional students. Our results are broadly consistent with Frankie Santos Laanan (2003), who employs the 1996 Freshman Survey from the Cooperative Institutional Research Program (CIRP).

8The math tests refer to the exams administered by the National Center for Education Statistics (NCES) that were given to all students in the longitudinal surveys in their senior year of high school. Because the tests in NLS72 and NELS:88 covered different subject matter, were of different lengths, and were graded on different scales, the scores are not directly comparable across surveys. We therefore assign each respondent a math test percentile, which is his percentile in the entire distribution of test-takers who graduated from high school.

9This test score measure is used as a simple proxy for preparedness. Measures of high school performance such as high school GPA and class rank are other commonly used measures of preparation. We have chosen not to include GPA measures because we are concerned about difficulty in comparing this measure across high schools and the presence of grade inflation over time.

10For example, in NLS72, 40.5 percent of those who enroll in college and 40.8 percent of those receiving a BA within 8 years of cohort high school graduation are missing information on at least one of these background characteristics. These percentages are 42.5 and 37.5, respectively, in NELS:88. Because the data are not missing completely at random, case-wise deletion of observations with missing variables will bias the unconditional sample means of completion rates.

11Under the assumption that the data are missing conditionally at random, multiple imputation is a general and statistically valid method for dealing with missing data (Rubin 1987, Roderick J. A. Little 1982). The relative merits of various approaches for dealing with missing data have been widely discussed (e.g., Little and Rubin 2002, J. L. Schafer 1997). See the Technical Appendix in Web Appendix A for complete details of the imputation procedure. Because the surveys contain good supplementary predictor variables, such as high school GPA, standardized test scores from earlier survey waves, and parental income reports, we are able to use a great deal of information about each respondent to impute ranges of missing data points.

12While parental education tended to increase over the period of observation, it is well-known that the overall likelihood of growing up in a two-parent family declined. For example, Census Bureau tabulations show the proportion of all children living with two parents falling from 83 percent to 73 percent between 1972 and 1992. While we are able to observe family structure in the NELS:88 survey (and the relationship with collegiate outcomes), this variable is not observed for NLS72. Yet, because changes in family structure measured in the CPS among those enrolling in college are quite modest, we conclude that changes in this variable cannot be a primary determinant of changes in completion rates.

13A number of other researchers have noted the rise in stratification among institutions in recent decades, with particular attention to the divergence between the public sector and the private sector. Thomas J. Kane, Peter R. Orszag, and David L. Gunter (2003) note that the combination of declines in state appropriations and political restrictions on tuition increases led to declines in spending per student at public schools relative to private schools, with the ratio of per student funding dropping from about 70 percent in the mid-1970s to about 58 percent in the mid-1990s. Caroline M. Hoxby (2009) shows that tuition, subsidies, and student quality have stratified dramatically across the 1962 quality spectrum of higher education since that time.

14While the employment of a price index specific to the overall mix of inputs employed by colleges and universities (e.g., Higher Education Price Index) reduces the constant dollar growth in expenditures, it is likely faculty salaries and the cost of laboratory equipment at research universities have outpaced this general index.

15In addition to recent treatments of the option value in educational investments (e.g., Heckman and Salvador Navarro 2007; Heckman, Lance J. Lochner, and Petra E. Todd 2008; Joseph G. Altonji 1993; and Kevin Stange 2008), a long series of papers, including Burton A. Weisbrod (1962), in the economics literature acknowledge the importance of the option component of educational investments.

16Starting with estimates of completion generated from the NLS72 baseline produces parallel results.

17Reweighting estimators have a long history in statistics dating back at least to the work of D. G. Horvitz and D. J. Thompson (1952) and have become increasingly popular in economics (see, for example, John DiNardo, Nicole M. Fortin and Thomas Lemieux 1996; Heckman, Hidehiko Ichimura, and Todd 1997, 1998; and Robert Barsky et al. 2002).

18When we construct counterfactual completion rates for student-faculty ratios, we do not assign a counterfactual value to NELS:88 respondents with missing student-faculty ratios. This methodology allows us to match student-faculty ratio distributions among those with observed student-faculty ratios in both surveys.

19. Firpo, Fortin, and Lemieux (2007) make the same point.

20. In addition to the variables discussed in Section I, we include an indicator variable for whether the respondent has no information on student-faculty ratios and an interaction between this indicator and math percentiles. In both NLS72 and NELS:88, 10.8 percent have missing student-faculty ratios. These data are missing either because the student’s first listed institution could not be matched with an institution in the HEGIS/IPEDS data or because the matched institution had missing HEGIS/IPEDS data. Rather than impute these ratios with very limited information, we include a dummy variable indicating missing status and allow the effect of missing institutional data to vary with math percentiles.
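The indicator-plus-interaction construction in footnote 20 amounts to building covariate rows like the following. This is a minimal sketch with hypothetical variable names, not the authors' code.

```python
# Sketch of the missing-data handling in footnote 20: instead of imputing
# a missing student-faculty ratio, zero it out, flag it, and interact the
# flag with the math percentile so the "missing" effect can vary by score.
def design_row(math_pct, sf_ratio):
    """Covariate row: (ratio or 0, missing flag, flag x math percentile)."""
    missing = 1 if sf_ratio is None else 0
    filled = 0.0 if missing else sf_ratio
    return (filled, missing, missing * math_pct)

print(design_row(75, 18.0))  # (18.0, 0, 0)
print(design_row(75, None))  # (0.0, 1, 75)
```

Including both the flag and the interaction in the logit lets observations with unmatched institutions contribute to estimation without imposing an imputed ratio on them.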

21. Similarly, Audrey Light and Wayne Strayer (2000), using the National Longitudinal Survey of Youth, find evidence that both college quality and student ability affect the likelihood of college completion.

22. Researchers have consistently found that college students starting at two-year schools are less likely to complete the BA than their peers beginning at four-year schools. C. Reynolds (2007) and Darwin W. Miller (2007) use matching estimators to approach this question, while earlier work uses regression techniques to adjust for observable differences between those starting at two- and four-year schools (Cecilia Elena Rouse 1995; D. E. Leigh and A. M. Gill 2003; Jonathan Sandy, Arturo Gonzalez, and Michael J. Hilmer 2006).

23. Web Appendix B shows decompositions performed by multiplying either the average marginal effects from our logit models or the coefficients from a linear probability model on a given variable by the change in the mean value of the variable across surveys. In the former case, this amounts to the assumption that variable shifts were constant across the population, while in the latter it amounts to the standard Blinder-Oaxaca decomposition. In both cases the results are qualitatively similar to those presented in Tables 6 and 7.
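The decomposition in footnote 23 amounts to multiplying an effect estimate by the cross-survey change in the covariate's mean. A minimal sketch with made-up numbers (the coefficient and means below are hypothetical, not estimates from the paper):

```python
# Sketch of a Blinder-Oaxaca-style contribution: coefficient (or average
# marginal effect) times the change in the covariate mean across surveys.
def oaxaca_contribution(beta, mean_old, mean_new):
    """Portion of the outcome change attributed to this covariate."""
    return beta * (mean_new - mean_old)

# e.g., a hypothetical marginal effect of log resources per student
beta = 0.08
contrib = oaxaca_contribution(beta, mean_old=9.2, mean_new=8.9)
print(round(contrib, 3))  # -0.024
```

Summing such terms over all covariates gives the "explained" portion of the change in completion rates; the residual is the unexplained portion.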

24. The fact that the math test percentile distribution changed negligibly in the non-top-50 public sector suggests that peer effects cannot explain the decline in completion rates in this sector, because the composition of peers has remained relatively constant.

25. An alternative explanation is that changes in the demand for college may be reduced among relatively large cohorts if college preparation is also linked to cohort size. Two related concerns surface. First, relatively large cohorts may be distinguished by adverse demographic or economic shocks that have direct effects on collegiate attainment. For example, if big cohorts are distinguished by low parental education or large family size, such “compositional effects” might account for reduced college completion rather than crowding out on the supply side of the market. Second, membership in a relatively large birth cohort may dilute educational resources at the elementary and secondary levels, which would also reduce college preparedness. Bound and Turner (2007) present evidence that neither of these effects is likely to account for much of the association between cohort size and college completion rates. In particular, using census data, they found, at best, only a modest association between cohort size and the demographic composition of the college age population. They also found only a very modest effect of cohort size on the college preparedness of high school graduates.

Contributor Information

John Bound, University of Michigan Department of Economics, 611 Tappan Street, 318 Lorch Hall, Ann Arbor, MI 48109-1220 and National Bureau of Economic Research.

Michael F. Lovenheim, Cornell University Department of Policy Analysis and Management, 103 Martha Van Rensselaer Hall, Ithaca, NY 14853.

Sarah Turner, University of Virginia Department of Economics, Monroe Hall, McCormick Road, Charlottesville, VA 22904 and National Bureau of Economic Research.

REFERENCES

  • Altonji Joseph G. The Demand for and Return to Education When Education Outcomes are Uncertain. Journal of Labor Economics. 1993;11(1):48–83.
  • Barsky Robert, Bound John, Charles Kerwin Kofi, Lupton Joseph P. Accounting for the Black-White Wealth Gap: A Nonparametric Approach. Journal of the American Statistical Association. 2002;97(459):663–673.
  • Bound John, Lovenheim Michael, Turner Sarah. Understanding the Decrease in College Completion Rates and the Increased Time to the Baccalaureate Degree. Population Studies Center Research Report 07-626. 2007.
  • Bound John, Turner Sarah. Cohort Crowding: How Resources Affect Collegiate Attainment. Journal of Public Economics. 2007;91(5–6):877–899.
  • DiNardo John, Fortin Nicole M, Lemieux Thomas. Labor Market Institutions and the Distribution of Wages, 1973–1992: A Semi-Parametric Approach. Econometrica. 1996;64(5):1001–1044.
  • Firpo Sergio, Fortin Nicole, Lemieux Thomas. Decomposing Wage Distributions using Recentered Influence Function Regressions. 2007 Unpublished.
  • Goldin Claudia, Katz Lawrence F, Kuziemko Ilyana. The Homecoming of American College Women: The Reversal of the College Gender Gap. Journal of Economic Perspectives. 2006;20(4):133–156.
  • Heckman James J, Ichimura Hidehiko, Todd Petra E. Matching as an Econometric Evaluation Estimator: Evidence from Evaluating a Job Training Programme. Review of Economic Studies. 1997;64(4):605–654.
  • Heckman James J, Ichimura Hidehiko, Todd Petra E. Matching as an Econometric Evaluation Estimator. Review of Economic Studies. 1998;65(2):261–294.
  • Heckman James J, Navarro Salvador. Dynamic Discrete Choice and Dynamic Treatment Effects. Journal of Econometrics. 2007;136(2):341–396.
  • Heckman James J, Lochner Lance J, Todd Petra E. Earnings Functions and Rates of Return. Journal of Human Capital. 2008;2(1):1–31.
  • Heckman James J, LaFontaine Paul A. The American High School Graduation Rate: Trends and Levels. Review of Economics and Statistics. 2010;92(2):244–262.
  • Horvitz DG, Thompson DJ. A Generalization of Sampling Without Replacement from a Finite Universe. Journal of the American Statistical Association. 1952;47(260):663–685.
  • Hoxby Caroline M. The Changing Selectivity of American Colleges. Journal of Economic Perspectives. 2009;23(4):95–118.
  • Jaeger David A. Reconciling the Old and New Census Bureau Education Questions: Recommendations for Researchers. Journal of Business and Economic Statistics. 1997;15(3):300–309.
  • Kane Thomas J, Orszag Peter R, Gunter David L. State Fiscal Constraints and Higher Education Spending: The Role of Medicaid and the Business Cycle. Urban-Brookings Tax Policy Center Discussion Paper 11. 2003
  • Laanan Frankie Santos. Degree Aspirations of Two-Year College Students. Community College Journal of Research and Practice. 2003;27(6):495–518.
  • Leigh DE, Gill AM. Do Community Colleges Really Divert Students from Earning Bachelor’s Degrees? Economics of Education Review. 2003;22(1):23–30.
  • Light Audrey, Strayer Wayne. Determinants of College Completion: School Quality or Student Ability? Journal of Human Resources. 2000;35(2):299–332.
  • Little Roderick JA. Models for Nonresponse in Sample Surveys. Journal of the American Statistical Association. 1982;77(378):237–250.
  • Little Roderick JA, Rubin Donald B. Statistical Analysis with Missing Data, Second Edition. New York: Wiley; 2002.
  • Miller Darwin W. Isolating the Causal Impact of Community College Enrollment on Educational Attainment and Labor Market Outcomes in Texas. Stanford Institute for Economic Policy Research (SIEPR) Discussion Paper 06-033. 2007.
  • Reynolds C. Academic and Labor Market Effects of Two-year College Attendance: Evidence Using Matching Methods. 2007 Unpublished.
  • Rouse Cecilia Elena. Democratization or Diversion? The Effect of Community Colleges on Educational Attainment. Journal of Business and Economic Statistics. 1995;13(2):217–224.
  • Rubin Donald B. Multiple Imputation for Nonresponse in Surveys. New York: Wiley; 1987.
  • Sandy Jonathan, Gonzalez Arturo, Hilmer Michael J. Alternative Paths to College Completion: The Effect of Attending a 2-Year School on the Probability of Completing a 4-Year Degree. Economics of Education Review. 2006;25(5):463–471.
  • Schafer JL. Analysis of Incomplete Multivariate Data. London: Chapman and Hall; 1997.
  • Snyder Thomas D, Tan Alexandra G, Hoffman Charlene M. Digest of Education Statistics 2005 (NCES 2006-030). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office; 2006.
  • Stange Kevin. An Empirical Investigation of the Option Value of College Enrollment. 2008 Unpublished.
  • Stange Kevin. Ability Sorting and the Importance of College Quality to Student Achievement: Evidence from Community Colleges. 2009 http://www-personal.umich.edu/~kstange/papers/Stange-CollegeQuality.pdf.
  • U.S. Department of Education, National Center for Education Statistics. Various years. Higher Education General Information Survey (HEGIS), “Degrees and Other Formal Awards Conferred” surveys, and Integrated Postsecondary Education Data System (IPEDS), “Completions” surveys. http://nces.ed.gov/ipeds/datacenter.
  • Weisbrod Burton A. Education and Investment in Human Capital. Journal of Political Economy. 1962;70(5):106–123.