Ann Am Acad Pol Soc Sci. Author manuscript; available in PMC Jan 1, 2014.
Published in final edited form as:
Ann Am Acad Pol Soc Sci. Jan 2013; 645(1): 60–87.
doi:  10.1177/0002716212456363
PMCID: PMC3555140

Response Rates in National Panel Surveys

Response rates in many large cross-sectional surveys in the United States have declined significantly over the past few decades, continuing a pattern that was observed for some major surveys beginning in the middle of the last century (Steeh 1981; Curtin, Presser, and Singer 2005). Declines have also occurred in other countries (Smith 1995). Although response rates have declined in most cross-sectional surveys, the wave-to-wave response rate in the national longitudinal survey that we manage, the Panel Study of Income Dynamics (PSID), has shown no sign of declining during its more than forty-year history. The first goal of this chapter is to document trends in reinterview response rates in six major national panel surveys in order to determine whether the experience of the PSID is similar to that of other longitudinal surveys. The second goal is to describe the strategies used by national longitudinal surveys to minimize attrition. An extensive array of strategies will be described, falling into four categories: incentive payments to respondents, communication with respondents between waves, strategies used during the field period, and survey design features.

A major conclusion of this chapter is that reinterview rates for the six major national longitudinal surveys that we examine have not experienced widespread declines. In fact, in almost all surveys response rates have either remained stable at high rates or increased. This finding stands in stark contrast to the experience of cross-sectional surveys. Nonetheless, even low levels of nonresponse can lead to significant cumulative attrition over the life of a longitudinal survey; and if attrition is concentrated among certain types of individuals, it can lead to biased parameter estimates. To investigate the importance of this issue, the final goal of the chapter is to assess various parameters estimated using the longest running of these six surveys, the PSID, and compare these parameter estimates with those based on cross-sectional surveys at various points in time during the more than forty-year history of the PSID.
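The compounding arithmetic behind this concern can be sketched in a few lines of Python; the 2 percent wave-to-wave loss rate and the thirty-five waves are illustrative round numbers, not figures from any particular survey:

```python
# Hypothetical illustration: even a modest wave-to-wave nonresponse rate
# compounds into substantial cumulative attrition over a long panel.
reinterview_rate = 0.98   # assumed: 98% of prior-wave cases reinterviewed each wave
waves = 35                # assumed: number of follow-up waves

cumulative_retention = reinterview_rate ** waves
print(round(cumulative_retention, 3))  # 0.493: roughly half the sample remains
```

Under these assumptions, a survey that loses only 2 percent of its cases at each wave retains about half of its original sample after thirty-five waves.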


We draw on the experiences of six major national surveys, three of which study the United States population: the Health and Retirement Study (HRS), the National Longitudinal Survey of Youth, 1979 (NLSY79), and the Panel Study of Income Dynamics (PSID). Of the three remaining surveys, one covers the British population (the British Household Panel Study, or BHPS), one examines the German population (the German Socio-Economic Panel, or GSOEP), and one studies the Australian population (the Household, Income and Labour Dynamics in Australia Survey, or HILDA). These surveys are among the most widely used longitudinal surveys in the world.

The BHPS began in 1991 and has annually interviewed the same representative sample of individuals and their descendants. It is a face-to-face household-based survey, interviewing every adult member of the household. The baseline survey included 10,264 adults living in 5,538 households in England during 1991. Additional samples of households in Scotland and Wales were added in 1999 and in Northern Ireland in 2001. The response rates we report are restricted to the 1991 sample and thus do not include families who began participating after 1991.

The GSOEP is a household-based study which started in 1984 with interviews of 12,245 adults living in 5,921 households in West Germany. This sample consists of two subsamples: German households, the so-called “A sample,” and foreign households, the so-called “B sample.” An additional sample of 4,453 East Germans in 2,179 households was begun in 1990 after reunification and is called the “C sample.” Other samples were included in later waves, but we restrict our analysis to these three longest-running samples. Adult sample members are interviewed annually face-to-face.

The HRS began in 1992 with a national sample of people 51 to 61 years old and their spouses, with a resulting sample of 12,654 individuals. Several additional birth cohorts have been added to the study over the years. In this chapter we examine the original cohort, called the HRS cohort, as well as the so-called AHEAD cohort (from the Asset and Health Dynamics Among the Oldest Old survey), which consisted of 8,222 individuals aged 70 and older and their spouses, who were first interviewed in 1993. The HRS and AHEAD cohorts were initially interviewed face-to-face but were switched to telephone interviews for several years. Beginning in 2006, one half of the sample was interviewed face-to-face and one half by telephone in each wave, with respondents alternating between the two modes from wave to wave.

HILDA is a household-based study that began in 2001 with a sample of 19,914 persons living in 7,682 households in Australia, and annual face-to-face interviews have been conducted ever since with all adults living in sampled households. The NLSY79 began as a nationally representative sample of 12,686 persons aged 14–22 years old and living in the United States in 1979. This panel was interviewed annually through 1994 and biennially ever since. The interviews were conducted primarily in person through 2000 (with the exception of 1987, when most interviews were completed by phone), but telephone interviewing has been dominant in waves from 2002 onward.

The PSID began in 1968 with a nationally representative sample of 18,230 individuals living in 4,802 households in the United States. Information on these individuals and their descendants was collected through annual interviews until 1997 and biennial interviews since then. The primary mode of interview was face-to-face from 1968 to 1972; since then, nearly all interviews have been completed over the phone. One person per family unit is interviewed (see McGonagle, Schoeni, Sastry, and Freedman 2012 for more information).

Although four of the surveys (BHPS, GSOEP, HILDA, and PSID) are household surveys with many features in common, the initial waves of the six surveys were completed at very different times over the past five decades. The PSID began in 1968, when response rates in cross-sectional surveys were substantially higher than today. The NLSY79 was initiated more than a decade later in 1979, followed by the GSOEP in 1984 and then in quick succession by the BHPS, the HRS, and the AHEAD cohort in 1991, 1992, and 1993, respectively. HILDA was initiated in 2001 and so is the shortest-lived of the six. Readers should bear in mind that the last four surveys were initiated during a period when technologies thought to be associated with declining response rates, such as cell phones and caller ID, became much more common.

We are interested in determining whether there have indeed been recent declines in response rates that parallel those evinced by cross-sectional surveys. We focus on the reinterview response rate, which is defined as the share of cases from the prior wave that were successfully reinterviewed in the subsequent wave. For some surveys the response rate is defined at the individual level while for others it is computed at the family or household level. In addition, the treatment of decedents in the calculation of response rate varies across surveys. These factors are described in the note to Table 4.1.
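As a minimal sketch of this definition, the reinterview response rate is the share of prior-wave cases that appear again in the current wave; the case identifiers below are invented for illustration:

```python
# Invented case IDs for illustration; new sample members (here, 301)
# do not enter the denominator.
prior_wave = {101, 102, 103, 104, 105}    # cases interviewed at wave t-1
current_wave = {101, 102, 104, 105, 301}  # cases interviewed at wave t

reinterviewed = prior_wave & current_wave
rate = len(reinterviewed) / len(prior_wave)
print(rate)  # 0.8
```

Whether the "cases" are individuals, families, or households, and how decedents are handled, varies across surveys as noted above.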

Table 4.1
Wave-to-wave reinterview response rates in selected national surveys

We report wave-to-wave reinterview rates regardless of the length of the period between the interviews. As noted above, some interviews were done annually and others biennially. In longitudinal surveys, of course, trends in the reinterview response rate are likely influenced by cumulative rates of nonresponse across waves. Those respondents who are successively reinterviewed across waves may become increasingly selective, representing those cases that are most disposed and willing to participate. If such a selective process is occurring, we would expect reinterview response rates to increase over time.

The reinterview rates reported in Table 4.1 are plotted in Figures 4.1 and 4.2. These data provide little evidence of a widespread decline in reinterview response rates over time in any of the surveys. The wave-to-wave response rate in the BHPS increased steadily during the first five waves, going from 0.860 in 1992 to 0.916 in 1996, as one would expect under the hypothesis of selective attrition. During the subsequent twelve waves, however, the rate remained quite steady between 0.900 and 0.921. For the A and C samples of the GSOEP, the reinterview rate has, if anything, increased since the initial reinterviews (1985 for the A sample and 1991 for the C sample), at least through 2005. The most recent three waves displayed reinterview rates that were one to two percentage points lower than the average over the prior decade. This apparent decline in the likelihood of reinterview merits further monitoring in subsequent waves to determine whether it represents the beginning of a new trend.

Figure 4.1
Trends in reinterview rates: BHPS and GSOEP
Figure 4.2
Trends in reinterview rates: HRS, AHEAD, HILDA, NLSY79, PSID

The GSOEP B sample of foreigners constitutes an exception to the foregoing results. After an initial increase in reinterview rates in the first three waves, the rate declined fairly continuously over the subsequent 22-year period, going from 0.914 in 1987 to 0.870 in 2008, a 0.044 point decline in total. However, this entails an average decline of just 0.002 points per year and likely reflects the influence of selective emigration, given that foreigners are much more likely to leave the country than natives and are thus more difficult to track and interview on a year-to-year basis, although many ultimately reappear in the survey (Constant and Massey 2003).

During the period of annual interviewing from 1979 to 1994, the NLSY79 achieved consistently high reinterview rates of 0.957 to 0.986. The reinterview rate declined after NLSY investigators switched to interviewing every other year in 1996, falling to a low of 0.918 in 2000. After that date, however, the rate increased to reach a biennial rate of 0.961 in the most recent wave in 2008, which is in the range experienced during the first fifteen years of the survey.

The HILDA, HRS, and AHEAD surveys have likewise all experienced increases in their reinterview response rates since their initiation, with the rate for the most recent wave being the highest in each case. For the PSID, the reinterview rate has been at least 0.947 in every follow-up wave except 1969. In twenty-three of the thirty-five waves, the reinterview response rate reached or exceeded 0.98, and the rate in the most recent wave was 0.972 (covering the two-year period between 2007 and 2009), which is nearly as high as the 0.979 average annual rate experienced during the first ten reinterview waves (excluding the first wave, when the rate was at a historical low).


Unit nonresponse arises from the non-observation of sample members and may occur for many reasons (see Groves et al., 2009 and the foregoing article for a discussion). Various approaches have been used to reduce nonresponse in panel surveys (Watson and Wooden, 2009). Many of these approaches are similar to strategies used in cross-sectional surveys, while others are unique to longitudinal designs. We group strategies under four rubrics. First, we describe various incentive payments provided to respondents. Second, we consider how studies communicate with respondents between waves and describe the rationale and strategies used. Third, we discuss strategies that are used during the field period to influence reinterview rates. Finally, we discuss design features that have been shown to affect response rates.

A list of strategies used by the six surveys is provided in Table 4.2. Specifically, for each of the surveys covered in Table 4.1, we indicate which strategies are employed in an effort to enhance response rates between waves. In 1998 the HRS and AHEAD cohorts were merged, with data on both cohorts being collected in the same year using the same procedures and the same interviewing team. As a result, in Table 4.2 the strategies for these two cohorts are reported under the one category “HRS.” Although strategies used by all of the surveys are presented in Table 4.2, our focus is on approaches used by the study that we direct, the PSID.

Table 4.2
Strategies that have been used by national panel studies to maintain high response rates

Incentive Payments

In their review of evidence on incentive effects, Laurie and Lynn (2009) concluded that incentives typically raise response rates; and based on new research from the BHPS, they argue that “even small increases in the value of an incentive on a mature panel can bring a significant improvement in response rates” (p. 230). In a recent experimental study using the Swiss Household Panel, Lipps (2010) concluded that higher incentives not only increased cooperation, but saved field work time. It is not surprising, therefore, that all six panel surveys use incentives extensively. The PSID has provided incentive payments to its respondents since 1968. It attempts to pay the incentive as close to the completion of the interview as possible. In the most recent waves, the turnaround time between the completion of an interview and the mailing of a respondent payment (typically a check, but money orders and cash are also possible) was about one week.

The incentive amount in the PSID has grown from $20 in 1999 to $65 in 2009, or to a value of $50.50 when expressed in terms of 1999 CPI-adjusted dollars. Over the same period, the interview length increased from roughly 35 to 75 minutes, so in real terms the incentive payment has risen from 58 cents per minute to 67 cents per minute. In addition to providing an incentive for participating in the interview, incentives of $5 to $15 are provided to PSID families who assist interviewers in locating other sample members. Furthermore, starting in 2005, families who requested it received an additional $10 in compensation for doing the interview on a cell phone using their own purchased cell phone minutes. Some surveys use so-called “end-game” incentives as a last resort to encourage participation of the most resistant respondents, steadily raising the incentive amount as the deadline approaches. The amount can end up being substantial. In the HRS, for example, the final incentive offered is $100, roughly double the typical incentive payment. Various types of nonmonetary incentives are also provided to respondents in some surveys, including coffee mugs, refrigerator magnets, and other small tokens of appreciation.
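The real-dollar arithmetic in the first paragraph above can be reproduced as follows; note that the deflator is inferred from the figures reported in the text ($65 in 2009 equals $50.50 in 1999 dollars) rather than taken from an official CPI series:

```python
# Reproducing the incentive-per-minute comparison from the text.
# The deflator is inferred from the reported figures, not from a CPI table.
nominal_1999, minutes_1999 = 20.00, 35   # 1999 incentive and approximate length
nominal_2009, minutes_2009 = 65.00, 75   # 2009 incentive and approximate length

deflator = nominal_2009 / 50.50          # implied 1999-to-2009 price growth, ~1.29
real_2009 = nominal_2009 / deflator      # $50.50 in 1999 dollars

per_minute_1999 = nominal_1999 / minutes_1999  # ~$0.57 per minute
per_minute_2009 = real_2009 / minutes_2009     # ~$0.67 per minute
```

The slight gap between the roughly 57 cents computed here and the 58 cents reported in the text presumably reflects rounding in the underlying interview-length figures.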

Communication with Respondents between Waves

According to Couper and Lepkowski’s (2002) general model of attrition in panel surveys, knowing the whereabouts of sample members is the first step in reducing attrition, and surveys employ a variety of strategies to achieve this goal (Couper and Ofstedal 2009). The PSID, for example, undertakes several steps to keep track of families between waves of data collection. In February of each year during data collection, a newsletter is sent to all families that provides them with research findings from the study, alerts them to the upcoming interview, and emphasizes the importance of their participation. Before this mailing, addresses are updated using the United States Postal Service National Change of Address service to ensure that mailings are sent to the best possible addresses. Mailings that are returned to the PSID because of address changes or “bad addresses” are archived and used to make the next contact with families. In the latest wave, this procedure resulted in the updating of new addresses for 6 percent of the sample.

Midway between the end of one field period and the start of the next, a “contact information update” mailing is sent to all families, which includes a postage-paid postcard listing the most recent contact information (address and phone number) that families are asked to verify or update. Families who return the postcard receive $10 in compensation. The overall rate of postcard returns in 2010 was 68.5 percent, with 7.8 percent of returners providing a new updated address, 19.4 percent providing an address fix, and 72.8 percent verifying their contact information. Results from two experiments designed to improve return rates of the postcard mailing found that response rates were increased by 7–10 percentage points when families who did not respond initially are sent a follow-up mailing (McGonagle, Couper, and Schoeni, 2009, 2011), and by 9–13 percentage points if an incentive of $10 or $20 is provided for returning the card compared to providing no incentive (McGonagle, Schoeni, Couper, and Mushtaq, 2011).

A respondent website is another way to encourage participation by sharing information of value to the respondents. In addition, some studies allow respondents to update their contact information online. Many studies, including the PSID, regularly undertake supplementary interviews between core interviews. Although these surveys add to the respondent burden, they also provide updated contact information and many respondents say they value the extra interaction with researchers.

Strategies During Fieldwork

Advance Notification Letters

As part of its routine efforts to make an initial contact for the interview, PSID interviewers send out a study notification letter to the last known address of all families to let them know that they will be receiving a phone call from an interviewer affiliated with the Survey Research Center at the University of Michigan. The letter provides a dedicated toll-free number for respondents to call if they have any questions about the interview, or wish to schedule an appointment. In 2007, 6 percent of the families called the toll-free number, with roughly 90 percent asking to make an interview appointment. Other families called with questions about the study, and a small number called to refuse in advance of being contacted by an interviewer.

Use of Informants

If the mailing does not generate a response, PSID interviewers attempt to make contact with families using the last known telephone number, but this often fails to produce a contact, whereupon they turn to informants. At the end of each interview, PSID investigators routinely collect the names, addresses, and phone numbers of up to two contact persons who are most likely to know the whereabouts of the members of the sample family. Interviewers attempt to locate families through the contact information they provided in the prior wave if they cannot be found at their last known telephone number. Because many individuals related to the respondent are in the PSID even if they do not live in the same household, interviewers routinely contact these relatives to help find the focal respondent.

Tracking Strategies

Once interviewing begins, additional tracking attempts are undertaken if families are not found at their last known address, or through contact persons. During 2005, approximately 23 percent of PSID families required tracking. Trackers used directory assistance and searched online databases to locate 92 percent of these families, and 83 percent of those families found by tracking ultimately provided interviews, compared with 95 percent among the 77 percent of families who did not need tracking.

Letters to Respondents

Providing a tailored response to address a respondent’s specific reason for not participating has been shown to be effective in gaining cooperation (Groves and McGonagle, 2001). Between 1970 and 2003, PSID staff wrote letters to families that were individually tailored to situations reported by interviewers. Unfortunately, this strategy is no longer feasible owing to increased oversight by the Institutional Review Board, which now requires that all materials sent to respondents as well as any changes, no matter how minor, receive an IRB review. Since 2003, the PSID has relied on a series of letter templates that address a variety of commonly occurring situations: hard to reach; confidentiality; too busy; and sympathy, when an interviewer learns of the death of a family member. In 2009, more than 1,800 such letters were sent to PSID families.

Interviewer Continuity

The empirical evidence in support of having the same interviewer in different waves is mixed (see Watson and Wooden 2009 for a review). Nonetheless, like most of the other surveys considered here, the PSID attempts to assign interviewers to the same families they interviewed in prior waves to improve continuity in communication between the project and respondents. In some cases, however, we have found that continuity can be detrimental to respondent cooperation. When respondents and interviewers do not have a productive relationship, respondents are reassigned to different interviewers. In general, interviewers with prior experience, either on the PSID or similar surveys, achieve higher response rates than others. In 2009, all of the 120 PSID interviewers had worked on prior studies at the Survey Research Center (which runs the field operations for the PSID), and 49 percent had served as an interviewer on a prior wave of the PSID.

Use of Opinion Leaders

Respondents may feel a greater sense of commitment to the study if people they know and respect, such as family members and other opinion leaders, endorse the study and encourage them to participate. The downside risk of this approach is that some respondents may in fact be negatively affected by the interaction. Moreover, finding an individual who is well-liked and respected by all respondents and who is willing to endorse the study can be challenging. Although the PSID does not use this approach explicitly, the experience of relatives who are also part of the PSID sample can influence response decisions. If one family member has a positive experience during the interview, for example, they often communicate this experience to their siblings, parents, and children, many of whom also participate in the survey, thereby enhancing the response rates of their relatives. Of course, the opposite possibility also exists: family members who have a negative experience may discourage participation of related sample members.

Interviewer Incentives

Although rarely used in the PSID, we have at times provided bonuses to interviewers who work a specified number of hours per week. The goal of this approach is to increase hours worked during weeks that interviewers are working, thereby reducing the number of weeks in the field and saving associated fixed costs.

Providing Information to Respondents

Most studies provide respondents with information that demonstrates the value of the data being collected, such as press releases and even published articles. The HRS, for example, assesses blood pressure and analyzes blood samples, and the results of these tests are given back to respondents. Many respondents report that they value this information.

Survey Design Features

Length of Interview

Evidence on how length of the interview affects completion rates and subsequent attrition is surprisingly limited given the scientific value of including additional interview content. The PSID has maintained an annualized reinterview rate of between 95 and 98 percent across virtually all survey waves, and the rate has not changed substantially despite fluctuations in the length of interview from thirty to forty minutes (see Figure 4.3) and despite significant supplemental data collections between waves of the core survey. Over this period, however, the PSID adopted specific policies to offset any potential negative effect that changes in interview length may have had on attrition, including increasing incentive payments.

Figure 4.3
Incentive payment, interview length, and reinterview rate: PSID, 1969–2009

Despite limited empirical evidence (Branden et al., 1995; Zabel, 1998; Hill and Willis, 2001) and the trends observed in Figure 4.3, almost all studies, including the PSID, act to limit the length of the interview because of concerns that an excessive burden will jeopardize future participation. The PSID has kept its biennial interview at roughly seventy-five minutes since 2003. Determining the optimal interview length is an area ripe for innovative research. The scientific benefits of adding material to surveys are high, with marginal costs that are typically well below average cost. At the same time, numerous supplemental data collections have been conducted by the PSID and HRS, typically during the years that the main interview is not conducted. The PSID's largest such effort so far was the Child Development Supplement, conducted in 1997, 2002–2003, and 2007–2008, which did not yield any apparent reduction in response to the subsequent core interview. However, more systematic analysis of the effects of supplements on response rates in core interviews is warranted.

Frequency of Interviews

Holding constant the total length of interview time, reducing the frequency of interviewing may also enhance participation. Repeated and frequent requests may be burdensome, at least for some respondents, and one way to reduce respondent burden is to increase the time between interviews. At the same time, there are clear tradeoffs. In addition to increasing difficulties of respondent recall, the longer the time between interviews, the greater the probability that the respondent has moved, making it more difficult to contact sample members (Couper and Ofstedal, 2009).

Supplemental Administrative Data

One way to reduce the interview length is to rely on administrative data that contain needed information. Examples of administrative data that have been widely used in the United States include health care records (such as Medicare claims data), earnings records (Social Security earnings data), and cause-of-death files (from vital statistics registries). Use of the latter can eliminate the need to ask surviving relatives about the cause of a loved one's death. In some cases, however, the information contained in administrative files merely supplements information gathered from respondents, and therefore does not reduce the length of the interview. For example, few if any surveys would try to collect the sort of detailed information on diagnosis and expenditures that is available in health care claims files.

Mixed Modes

Surveys are increasingly turning to alternative or mixed modes of interviewing for a variety of reasons (Dillman, 2009). The PSID began as a face-to-face interview, but today roughly 97 percent of interviews are completed over the phone. Allowing respondents to choose the mode that is most convenient for them should increase response rates. At the same time, the mode of interview can have a substantial influence on responses. For a panel survey whose primary goal is to understand change in outcomes and behaviors, switching modes could substantially harm the scientific value of the study (Dillman, 2009). Nonetheless, the PSID, like all of the surveys considered here, generally offers mixed modes even if one mode is primary. In response to respondent preferences, for example, roughly 3 percent of PSID interviews have been completed face-to-face in recent waves, and some recent supplemental modules have also been administered in person. The HRS completes half of its core interviews over the phone and the other half in person, and has also collected data via mail and the Internet.

Interview Enjoyment

Research shows that when the interview covers topics that are of interest to respondents, response rates are higher (Groves et al., 2004). Parents, for example, typically enjoy talking about their children. In 1997, the PSID initiated a large supplemental project to collect data on children aged 0–12 years. As the fieldwork progressed, response rates were good, with high rates of completion for almost all modules. Yet there was a concern that such extensive interview lengths would increase reluctance to participate in future core interviews or supplemental data collections. The evidence indicates that these concerns were not realized, however: PSID reinterview rates after the child supplement was completed in 1997 have been just as high as the rates before this date (see Figure 4.3).

In contrast, when the PSID became a biennial instead of an annual survey, questions were added to the instrument to collect information on income for the period two years prior to the interview date; and this information was gathered at a detailed level, nearly as detailed as the income reported for one year prior to the interview. Anecdotal evidence based on comments by interviewers suggests that respondents strongly disliked these questions, finding it very difficult to recall such detailed information over that time period (Yeung, Stafford, and Andreski, 2008). Because of this feedback from interviewers and the higher rates of item nonresponse for these items, the number of such detailed questions was reduced in 2009.

Dependent Interviewing

Another technique for reducing the respondent burden is to employ dependent interviewing wherein answers from the prior wave are assumed to persist unless the respondent indicates a change in the value, such as a job change or a change in duties at work (Jackle, 2009). A concern with dependent interviewing is acquiescence bias, whereby respondents tend to answer ‘no change’ even when change has occurred (see Mathiowetz and McGonagle, 2000 for a review).

Recontact Effort

When the PSID began in 1968, individuals who did not respond in a given wave were not contacted in subsequent waves. This policy was changed in 1993, and many individuals who had dropped out since 1968 were recontacted and brought back into the sample. Of the 349 families that completed the interview in 2005 but not 2007, 56 percent were successfully interviewed in 2009. The PSID currently stops attempting to interview individuals who have been nonrespondents for two consecutive waves. Most other surveys continue to attempt to interview such sample members. The response rates for these groups are quite low. For example, in the 2008 wave of the HRS, the response rate for individuals who had been nonrespondents for two consecutive waves was 22 percent, and just 15 percent for those who were absent from three consecutive waves. However, recontact efforts can yield substantial benefits in terms of cumulative response rates.

Proxy Reports

Information pertaining to a particular individual is almost always most accurately reported by respondents themselves. But in the absence of self-reported data, proxy reports are commonly permitted. The PSID relies heavily on proxy information, more so than any of the other five surveys. One respondent per family is interviewed, with detailed information collected about both the head and spouse in married couples.


Nonresponse can be a serious threat to the quality of data. As the panel progresses over time, the cumulative loss of the original sample increases the risk of bias in the estimates derived from any one wave as well as in the estimates of change. A progressive loss of respondents may thus result in increasingly biased estimates. The key issue in evaluating the effect of panel nonresponse is the extent to which characteristics of those lost to attrition are related to survey outcomes of interest. Numerous studies have examined predictors of attrition, generally finding a lower probability of continued participation among young people, African-Americans, males, renters, urban dwellers, unmarried persons, people with low incomes, and those with fewer social ties and less community attachment (Fitzgerald, Gottschalk, and Moffitt, 1997; Fitzgerald, 2011).

Instead of examining attrition and its correlates, in this section we attempt to estimate the degree of bias in cross-sectional estimates of various parameters using the PSID. Four content areas are examined: income, health, consumption, and wealth. Within each domain, we use the best available national cross-sectional estimates and compare them against estimates from the PSID. Although estimates from cross-sectional surveys are by no means error-free themselves, no better data are available for examining household and individual characteristics in these four domains.

In all analyses, the PSID estimates are derived using the core family weights, which adjust for differential selection and response in wave one as well as for selective attrition since then. The weights are not post-stratified to cross-sectional surveys. The set of explanatory variables used in the attrition adjustment models is relatively sparse; it is described in greater detail in documents on the PSID website (http://psidonline.isr.umich.edu/Guide/documents.aspx). Specifically, the covariates include income, age, region, gender, and whether the observation is from the so-called low-income sample. In the PSID analysis, we analyzed the sample of core families (families directly related to the original 1968 sample, plus the immigrant sample added in 1997). The Latino sample that was interviewed from 1990 to 1995 was not included.
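The general logic of a nonresponse weight adjustment can be illustrated with a minimal weighting-class sketch: within cells defined by a few covariates, respondents' base weights are inflated by the inverse of the cell's weighted response rate. This is a standard textbook technique shown only for illustration; the PSID's actual attrition models, covariates, and weight construction are documented on its website, and all names and data below are hypothetical.

```python
from collections import defaultdict

def adjust_weights(records, cell_key):
    """Weighting-class nonresponse adjustment (illustrative sketch).

    records: list of dicts with 'weight' (base weight) and 'responded' (bool).
    cell_key: function mapping a record to its adjustment cell.
    Returns {id(record): adjusted weight} for respondents only.
    """
    totals = defaultdict(float)  # weighted count of all sample members per cell
    resp = defaultdict(float)    # weighted count of respondents per cell
    for r in records:
        c = cell_key(r)
        totals[c] += r["weight"]
        if r["responded"]:
            resp[c] += r["weight"]
    adjusted = {}
    for r in records:
        if r["responded"]:
            c = cell_key(r)
            # Inflate by the inverse of the cell's weighted response rate
            adjusted[id(r)] = r["weight"] * totals[c] / resp[c]
    return adjusted

# Hypothetical data: the "young" cell loses one of its two members.
sample = [
    {"weight": 1.0, "age": "young", "responded": True},
    {"weight": 1.0, "age": "young", "responded": False},
    {"weight": 1.0, "age": "old", "responded": True},
]
adj = adjust_weights(sample, lambda r: r["age"])
# The remaining "young" respondent now represents both young members (weight 2.0).
```

In practice the adjustment is model-based (e.g., response propensities estimated from the covariates listed above) rather than cell-based, but the inverse-response-rate principle is the same.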

Household Income

We begin with an assessment of household income. Figures 4.4 and 4.5 display the 10th through the 80th percentiles of the distribution of household income, based on the PSID and the March Current Population Survey (CPS). The CPS is the survey used to generate the federal government's official poverty rates. Over almost the entire distribution and across almost all years, the PSID shows a higher level of income. This gap is thought to be the result, in part, of the more detailed income reporting in the PSID.

Figure 4.4
The 10th–40th Percentiles of PSID Aggregated Family Income and CPS Household Income 1967–2006.
Figure 4.5
The 50th–80th Percentiles of PSID Aggregated Family Income and CPS Household Income 1967–2006

Most importantly for our purposes, the gap between the PSID and CPS remains fairly constant despite accumulating attrition from the PSID, and this result is true for all points in the distribution between, roughly, the 5th and 95th percentiles. It is only at the tails of the distribution that estimates diverge substantially (see Gouskova, Andreski, and Schoeni 2010 for estimates above the 90th and below the 10th percentiles). Two exceptions must be noted, however. First, the PSID estimate for 1992 is unusually high relative to both the CPS for that year and the PSID in 1991 and 1993. This divergence is especially large for the 70th and 80th percentiles. Second, the peak of the boom in the late 1980s was 1989 according to the CPS while in the PSID the peak was one or two years earlier at most percentiles. In general, however, the two data series are remarkably consistent.
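Percentile comparisons of this kind are computed from weighted microdata. A minimal sketch of a weighted-percentile calculation follows; the function and the randomly generated "income" arrays are hypothetical stand-ins, not actual PSID or CPS values or code.

```python
import numpy as np

def weighted_percentile(values, weights, pcts):
    """Weighted percentiles via interpolation on the cumulative-weight
    distribution. values, weights: 1-D arrays; pcts: percentiles in [0, 100]."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    order = np.argsort(values)
    v, w = values[order], weights[order]
    # Midpoint cumulative weights, normalized to [0, 1]
    cw = (np.cumsum(w) - 0.5 * w) / w.sum()
    return np.interp(np.asarray(pcts) / 100.0, cw, v)

# Illustrative only: compare percentiles of two hypothetical weighted samples,
# as one would compare the PSID and CPS income distributions.
rng = np.random.default_rng(0)
income_a = rng.lognormal(mean=10.8, sigma=0.7, size=5000)
income_b = rng.lognormal(mean=10.7, sigma=0.7, size=5000)
w_a = rng.uniform(0.5, 2.0, size=5000)
w_b = rng.uniform(0.5, 2.0, size=5000)

pcts = [10, 25, 50, 75, 90]
ratio = weighted_percentile(income_a, w_a, pcts) / weighted_percentile(income_b, w_b, pcts)
```

A ratio near 1.0 at each percentile would indicate the kind of distributional agreement described in the text.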

Health Status, Behaviors, and Insurance

We compare health data collected from the 2007 PSID with the same year of the National Health Interview Survey (NHIS). Estimates from prior years are reported in Andreski, Schoeni, and McGonagle (2009). The NHIS consists of a nationally representative sample of the civilian non-institutionalized population. The survey collects basic health and demographic information for all household members. More detailed health information is collected for one sample adult aged eighteen or older and one sample child aged 0–17 per family. In the PSID, health data from the core interview are restricted to heads and wives, so we report estimates for the population eighteen and older in the NHIS and all heads and wives in the PSID. We compare thirteen health-related items, including obesity, weight, work limitation, 30-day emotional distress, six specific conditions (stroke, hypertension, diabetes, cancer, myocardial infarction, and asthma), and self-rated general health (excellent, very good, good, fair, and poor). In addition, we assess whether the person currently smokes, has ever smoked, and has health insurance (see Levy 2007 for a closer examination of the PSID health insurance data).

Table 4.3 reports the exact wording of the questions used in each of the two surveys. The questions are nearly identical for most measures, particularly for height, weight, work limitation, 30-day emotional distress, health conditions, and self-rated general health. The NHIS explicitly asks respondents their height and weight without shoes, so we might expect to see slightly higher values in the PSID. For diagnosed conditions, the NHIS counts diagnoses not only by doctors but also by "other health professionals," which might lead to a slightly higher prevalence in the NHIS. The NHIS question on work limitation includes "emotional problems" as a cause, whereas the PSID does not, although it does include "nervous condition." The smoking and health insurance questions are less similar across the two surveys. The NHIS requires that a person have smoked at least 100 cigarettes to be classified as having ever smoked, while the PSID imposes no such threshold. For current smoking, the PSID simply asks whether the respondent smokes now; the NHIS asks how often the person smokes cigarettes, with one option being "not at all," and we code people giving this response as not currently smoking.
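The harmonization of the two smoking items can be sketched as a simple recode. The response labels below are hypothetical stand-ins for the actual NHIS codebook values; only the logic (the 100-cigarette threshold plus the "not at all" frequency option) comes from the text.

```python
def nhis_current_smoker(ever_smoked_100, frequency):
    """Map the NHIS two-step smoking items onto a single yes/no
    current-smoker indicator comparable to the PSID question.

    ever_smoked_100: True if the respondent reports having smoked at least
        100 cigarettes (the NHIS ever-smoker threshold).
    frequency: answer to the smoking-frequency item; labels here are
        hypothetical stand-ins for the actual codebook values.
    """
    if not ever_smoked_100:
        return False  # never reached the threshold: not a current smoker
    return frequency != "not at all"  # "not at all" marks a former smoker

def psid_current_smoker(smokes_now):
    """The PSID asks directly whether the respondent smokes now."""
    return bool(smokes_now)
```

With this recode, both surveys yield a comparable current-smoker rate, which is how the 19 percent figures in Table 4.4 line up.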

Table 4.3
Comparison of Question Wording in the 1999 PSID and the 1999 NHIS

As shown in Table 4.4, most of the outcomes align fairly closely. According to both surveys, 19 percent of adults smoked in 2007. Health insurance coverage is 86 percent as estimated by the PSID and 84 percent as estimated by the NHIS. Obesity rates are quite similar, and weight aligns closely throughout the entire weight distribution (see Figure 4.6). The largest gap is observed in the prevalence of a health condition that limits the amount or kind of work: 81 percent from the PSID and 89 percent from the NHIS. There are also differences between the two surveys in responses to the self-assessed general health status question. To investigate this further, in Table 4.5 we report estimates for the PSID, NHIS, and HRS, with all samples restricted to the population aged 51–61 years to compare identical age groups. In this case, the PSID and HRS align closely, but differences between the PSID and NHIS are evident at the top end of the health distribution; that is, the share reporting excellent health is 17 percent in the PSID and 16 percent in the HRS, but 23 percent in the NHIS.

Figure 4.6
Estimates of average weight by age and sex: NHIS and PSID, 2007
Table 4.4
Health Status, Health Behaviors, and Health Insurance Coverage (%) in the PSID and NHIS, 2007
Table 4.5
Self-Rated General Health Status (%) in the PSID, HRS, and NHIS: 51–61 Year Olds

Consumption Expenditures

The PSID expanded its consumption expenditure measures beginning in 1999, and Li et al. (2010) have compared estimates based on these data with the best cross-sectional household survey on this topic, the Consumer Expenditure Survey. There are numerous differences in the way the two surveys collect expenditure data. The PSID devotes a much smaller set of questions to expenditures. In addition, the time periods over which expenditures are reported differ: for some categories, the PSID measures expenditures over a typical week, while other items are measured over the past month, year, or even two-year period.

Li et al. (2010) take these differences into account in comparing estimates of annual expenditures between the two surveys. We summarize their results in Table 4.6, which reports the ratio of average expenditures for each of the major categories collected in the PSID in 1999, 2001, and 2003. Altogether, the six categories of spending measured in the PSID during these years (food, housing, transportation, education, child care, and out-of-pocket spending for health care) represent 72 percent of total expenditures as measured in the Consumer Expenditure Survey. Total spending on the six items aligns fairly well in the two surveys, with a ratio of average spending of 0.96 in 1999, 1.02 in 2001, and 1.01 in 2003. At the same time, some of the individual categories do not align as closely.

Table 4.6
Ratio of Average Consumption Expenditures in the PSID to the Consumer Expenditure Survey


Wealth

An examination of estimates of wealth in the PSID compared with the Survey of Consumer Finances shows high concordance for all but the top 5 percent of the wealth distribution (Bosworth and Anders 2008), which in turn suggests that attrition over time has not biased the representativeness of the sample with respect to wealth. Moreover, wealth trends based on the PSID closely match macroeconomic data, showing a secular rise in wealth-income ratios (Bosworth and Smart 2009).


Response rates to cross-sectional surveys have declined substantially over the last several decades, but we find little or no evidence that reinterview rates in national panel surveys have declined over the same period. On the contrary, reinterview rates in the panel surveys we examined remain quite high, almost always above 90 percent and in most cases above 95 percent; indeed, they have actually increased in many surveys. Although the goal of this chapter was not to determine what accounts for these high response rates, we conducted a detailed review of the strategies used by the six studies, most of which are used by all six. Quantitative evidence supports the effectiveness of many of these strategies; others are employed on the basis of anecdotal evidence from the field, including observations from interviewers and respondents.

In general, estimates of descriptive parameters on income, health, consumption expenditures, and wealth based on the PSID align fairly closely with estimates of the same parameters from the best available cross-sectional surveys. Although some parameter estimates clearly differ substantially between surveys, much of this gap is likely attributable to differences in the questions used to elicit information or in the design of the surveys themselves. Our general conclusion of comparable parameter estimates is consistent with Keeter et al. (2000), Curtin, Presser, and Singer (2005), and Merkle and Edelman (2002), all of whom find that various parameter estimates are not sensitive to differences in response rates.


1We thank individuals involved with conducting each of the six surveys examined in this chapter for providing important information and review of the material presented here, specifically, Noah Uhrig and Jon Burton (BHPS), Mary Beth Ofstedal and Heidi Guyer (HRS), Joachim Frick and Jurgen Schupp (GSOEP), Randy Olsen (NLSY79), Mark Wooden (HILDA), and Eva Leissou (PSID).

Contributor Information

Robert F. Schoeni, University of Michigan, Ann Arbor, MI.

Frank Stafford, University of Michigan, Ann Arbor, MI.

Katherine A. McGonagle, University of Michigan, Ann Arbor, MI.

Patricia Andreski, University of Michigan, Ann Arbor, MI.


  • Andreski Patricia, McGonagle Katherine, Schoeni Robert F. An analysis of the quality of the health data in the Panel Study of Income Dynamics. PSID Technical Series Paper #09-02. Survey Research Center, Institute for Social Research, University of Michigan; 2009. [Accessed May 29, 2012]. at http://psidonline.isr.umich.edu/Publications/Papers/tsp/2009-02_Quality_Health_Data_PSID_.pdf.
  • Atrostic BK, Bates Nancy, Burt Geraldine, Silberstein Adriana. Nonresponse in U.S. government household surveys: consistent measures, recent trends, and new insights. Journal of Official Statistics. 2001;17(2):209–226.
  • Brandon Laura, Gritz R Mark, Pergamit Michael. Effect of interview length on attrition in the National Longitudinal Study of Youth. NLS Discussion Paper No. 28. Washington, DC: Bureau of Labor Statistics, US Department of Labor; 1995. [Accessed May 29, 2012]. at http://www.bls.gov/osmr/pdf/nl950030.pdf.
  • Constant Amelie, Massey Douglas S. Self-selection, earnings, and out-migration: A longitudinal study of immigrants to Germany. Journal of Population Economics. 2003;16(4):631–653.
  • Couper Mick P, Ofstedal Mary Beth. Keeping in contact with mobile sample members. In: Lynn Peter., editor. Methodology of Longitudinal Surveys. Chichester, UK: John Wiley & Sons; 2009.
  • Curtin Richard, Presser Stanley, Singer Eleanor. Changes in telephone survey nonresponse over the past quarter century. Public Opinion Quarterly. 2005;69(1):87–98.
  • De Leeuw Esther D, de Heer Wim. Trends in household survey nonresponse: A longitudinal and international comparison. In: Groves Robert M, Dillman Don A, Eltinge John L, Little Roderick JA., editors. Survey Nonresponse. New York: Wiley; 2002. pp. 41–54.
  • Dillman Don A. Some consequences of survey mode changes in longitudinal surveys. In: Lynn Peter., editor. Methodology of Longitudinal Surveys. Chichester, UK: John Wiley & Sons; 2009.
  • Fitzgerald John M. Attrition in models of intergenerational links in health and economic status in the PSID with extensions to health and to sibling models. The B.E. Journal of Economic Analysis & Policy. 2011;11(3) Article 2. [PMC free article] [PubMed]
  • Gouskova Elena, Andreski Patricia, Schoeni Robert F. Comparing estimates of family income in the Panel Study of Income Dynamics and the March Current Population Survey, 1968–2007. PSID Technical Paper Series #10-01. Survey Research Center, Institute for Social Research, University of Michigan; 2010. [Accessed May 29, 2012]. at http://psidonline.isr.umich.edu/Publications/Papers/tsp/2010-01_comparing_estimates_of_fam.pdf.
  • Groves Robert M, Fowler Floyd J, Couper Mick P, Lepkowski James M, Singer Eleanor, Tourangeau Roger. Survey Methodology. 2nd Edition. New York: John Wiley & Sons; 2009.
  • Groves Robert M, Presser Stanley, Dipko Sarah. The role of topic interest in survey participation decisions. Public Opinion Quarterly. 2004;68(1):2–31.
  • Groves Robert M, McGonagle Katherine A. A theory-guided training protocol regarding survey participation. Journal of Official Statistics. 2001;17(2):249–265.
  • Groves Robert M, Couper Mick P. Nonresponse in Household Interview Surveys. New York: John Wiley & Sons; 1998.
  • Hill Daniel H, Willis Robert J. Reducing panel attrition: A search for effective policy instruments. Journal of Human Resources. 2001;36(1):416–438.
  • Jackle Annette. Dependent interviewing: a framework and application to current research. In: Lynn Peter., editor. Methodology of Longitudinal Surveys. Chichester, UK: John Wiley & Sons; 2009.
  • Keeter Scott, Miller Carolyn, Kohut Andrew, Groves Robert M, Presser Stanley. Consequences of reducing nonresponse in a large national telephone survey. Public Opinion Quarterly. 2000;64(2):125–148. [PubMed]
  • Laurie Heather, Lynn Peter. The use of respondent incentives on longitudinal surveys. In: Lynn Peter., editor. Methodology of Longitudinal Surveys. Chichester, UK: John Wiley & Sons; 2009.
  • Li Geng, Schoeni Robert F, Danziger Sheldon, Charles Kerwin Kofi. New expenditure data in the PSID: Comparison with the CE. Monthly Labor Review. 2010;133(2):29–39.
  • Lipps Oliver. Effects of different incentives on attrition and fieldwork effort in telephone household panel surveys. Survey Research Methods. 2010;4(2):81–90.
  • Mathiowetz Nancy A, McGonagle Katherine A. An assessment of the current state of dependent interviewing in household surveys. Journal of Official Statistics. 2000;16(4):401–418.
  • McGonagle Katherine A, Couper Mick P, Schoeni Robert F. An experimental test of a strategy to maintain contact with families between waves of a panel study: Effects on contact updates and production outcomes. Survey Practice. 2009 Online Journal Accessed 3/14/11 at http://surveypractice.org/2009/06/29/panel-contacts/.
  • McGonagle KA, Couper MP, Schoeni RF. Keeping Track of Panel Members: An Experimental Test of a Between-Wave Contact Strategy. Journal of Official Statistics. 2011;27(2):319–338. PMC3253355. [PMC free article] [PubMed]
  • McGonagle KA, Schoeni RF, Couper MP, Mushtaq M. An Incentive Experiment Designed to Increase Response to a Between-Wave Contact Update Mailing in Two Panel Studies. Survey Practice. 2011 June: http://surveypractice.wordpress.com/2011/06/.
  • McGonagle KA, Schoeni RF, Sastry N, Freedman VA. The Panel Study of Income Dynamics: Overview, Recent Innovations, and Potential for Life Course Research. Longitudinal and Life Course Studies. 2012;3(2):268–284. [PMC free article] [PubMed]
  • Merkle Daniel, Edelman Murray. Nonresponse in exit polls: A comprehensive analysis. In: Groves Robert M, Dillman Don A, Eltinge John L, Little Roderick., editors. Survey Nonresponse. New York: John Wiley & Sons; 2002.
  • Smith Tom W. Trends in nonresponse rates. International Journal of Public Opinion Research. 1995;7(2):157–171.
  • Steeh Charlotte. Trends in nonresponse rates, 1952–1979. Public Opinion Quarterly. 1981;45(1):40–57.
  • Watson Nicole, Wooden Mark. Identifying factors affecting longitudinal survey response. In: Lynn Peter., editor. Methodology of Longitudinal Surveys. Chichester, UK: John Wiley & Sons; 2009.
  • Yeung WJ, Stafford F, Andreski P. Assessing the quality of income data collected on a two-year periodicity: Experience from the Panel Study of Income Dynamics. Survey Research: Method and Application. 2008;23:34–80.
  • Zabel Jeffrey E. An analysis of attrition in the Panel Study of Income Dynamics and the Survey of Income and Program Participation with an application to a model of labor market behavior. Journal of Human Resources. 1998;33(2):479–506.