Am J Public Health. 2001 November; 91(11): 1825–1831.
PMCID: PMC1446886

Relying on Surveys to Understand Abortion Behavior: Some Cautionary Evidence

Abstract

Objectives. The reliability of abortion self-reports has raised questions about the general usefulness of surveys in research about abortion behavior; however, the extent of underreporting remains a subject of some debate. This study sought to examine abortion reporting in a sample of welfare mothers and to determine factors in underreporting.

Methods. In New Jersey, which covers abortions requested by welfare recipients under its Medicaid program, the responses of a randomly drawn sample of 1236 welfare mothers about abortion events were compared with the Medicaid claims records of these women.

Results. Only 29% of actual abortions were self-reported by the women in the sample. This finding varied dramatically by race, with substantially higher rates of underreporting by Blacks than by Whites or Hispanics.

Conclusions. Although race is the most consistent predictor of underreporting behavior, attitudinal factors and survey technology also help in explaining abortion reporting behavior.

Survey research has become a staple for policy analysts over the past several decades and has always served as a critical data source in the study of many issues of interest to sociologists, demographers, and political scientists. The usefulness of surveys in studying highly personal or sensitive individual characteristics, such as income, crime victimization, mental health, or sexual behavior, however, has been questioned many times.1–4 It would be difficult to imagine a more sensitive topic than induced abortion, yet surveys have commonly been used to study abortion behavior.

Accurate reporting of induced abortions is important for 2 reasons. First, accurate reports are crucial to estimating the incidence of unintended pregnancies, which is of immense interest to reproductive health professionals and researchers. Data on unintended pregnancies, in turn, are critical to assessing the failure rate of contraceptives and the need for contraceptive services. Underreporting of abortions introduces a downward bias in the estimation of contraceptive failure rates calculated from survey data5–7 (also E. F. Jones and J. D. Forrest, unpublished report, 1989). Second, an unreported abortion is also an unreported pregnancy. Demographers rely heavily on pregnancy data to estimate fecundity and elucidate the fertility dynamics of populations.8,9

Many studies were undertaken in the 1980s to assess the reliability of survey data on abortions. These studies, however, relied on average or aggregate characteristics culled from external data sources to gauge the quality of survey data. Only 1 previous study has compared self-reported abortion data with external medical records for the same individuals.10

Three major national surveys have contributed information on abortion at an individual level in recent years: (1) the National Survey of Family Growth, (2) the National Surveys of Young Women, and (3) the National Longitudinal Surveys of Work Experience of Youth. The principal sources of external data used by researchers in assessing the reliability of survey estimates of abortions have been the Alan Guttmacher Institute's provider surveys and the Centers for Disease Control and Prevention's compilation of state health department and other abortion data. Across these studies, the rate of underreporting ranges from 40% to 65%; that is, only 35% to 60% of actual abortions are reported in surveys.

Previous researchers have identified some factors that influence underreporting of abortions. Udry et al. provide a useful taxonomy that classifies these factors as fertility-related, demographic, or survey implementation factors.10 Hammerslough adds a further dimension, namely, psychologic repression.5

Underreporting of abortions has been found to vary with fertility-related factors, such as the method of contraception used and the number of abortions a woman has had.11,12 Demographic factors that have been identified as most consistently influencing underreporting of abortions are race and marital status of the respondent. Non-White and unmarried women are significantly more likely than others to underreport.5,13,14

Interviewer characteristics, the directness of the abortion question, and the interview method have also been found to be useful predictors of underreporting. London and Williams found that women of all races are more likely to underreport having had an abortion to an interviewer who is not of their own race.15 There is some evidence that abortion questions that are asked indirectly—through prefacing, filtering questions, or randomized response technique—result in a higher rate of abortion reporting.5,16

In the present study, abortions reported by a sample of New Jersey welfare clients surveyed in 1995 were compared with abortion data on the same individuals from Medicaid claims files. Demographic information for the analysis was drawn from New Jersey public welfare administrative records. Data from these 2 sources, while not perfect, are generally considered the most reliable administrative data available because they are directly linked to welfare check issuance or vendor payments. Both data sources are subject to rigorous internal quality control edits and external sampling audits.

It should be pointed out that none of the previous studies that have examined abortion underreporting have included respondents' attitudes toward abortion and, more generally, attitudes toward childbearing experience. In the present study I used a set of 3 scales developed by W. B. Miller (unpublished paper, 1993) that probe these attitudes. Miller's Abortion Attitude Index (AAI) measures the respondent's overall attitude toward abortion and attempts to tap the physical, social, psychologic, and moral aspects of pregnancy termination. The second scale measures the respondent's positive childbearing motivation (PCM), and the third measures negative childbearing motivation (NCM). (The items that constitute the 3 scales are described in an appendix that is available from the author.)

I hypothesized that women who have a less restrictive attitude toward abortion would be more likely to report abortions, regardless of demographic, interviewer, or fertility-related factors, and that the same would be true of women with a more negative (less positive) attitude toward childbearing.

METHODS

Data and Sample

In this study I used data from a survey of welfare clients in New Jersey conducted in 1995 as part of a larger evaluation of New Jersey's welfare program, called the Family Development Program. The evaluation used an experimental design in which 8379 welfare clients were randomly assigned to either the Family Development Program group or a control group. A random sample of 3018 of the 8379 clients was selected to be surveyed; 1236 surveys were completed, for a response rate of just over 40%. The primary reason for nonresponse was the inability to locate women who had moved. The 40% response rate is comparable to rates for other surveys targeted at welfare clients in which limited resources precluded an extensive follow-up effort.17,18

The client survey contained an extensive fertility component that recorded respondents' pregnancy history and their attitudes toward becoming pregnant, carrying a pregnancy to term, and terminating or avoiding a pregnancy. To elicit pregnancy history, the survey used the format employed by the 1988 National Survey of Family Growth—respondents were asked about each pregnancy and its outcome, that is, whether the pregnancy ended in miscarriage, stillbirth, abortion, or live birth. The survey also contained the AAI and the attitudinal questions on childbearing that were adapted from W. B. Miller (unpublished paper, 1993).

The survey was administered by a nationally known professional survey research firm between July 1995 and January 1996. Because all of the survey targets were women, only women interviewers were used. Of the 1236 women interviewed, 1087 were interviewed via computer-assisted telephone interviewing techniques, and 149 were interviewed in their homes via computer-assisted personal interviewing techniques.

Two versions of the survey were used: a short version that typically took 45 to 50 minutes to administer and a long form that required an average of 70 minutes to complete. Although both versions contained the same detailed fertility component, the long form contained more detailed probes on employment history, family and youth functioning, and health care coverage.

Although it is possible that nonrespondents were systematically more or less likely than respondents to underreport their fertility-related behavior, this comparison of actual and reported abortion behavior nonetheless offers valuable insight into the usefulness of attempting to gather sensitive information through surveys. The representativeness of this survey is indicated in Table 1, which shows that the survey respondents closely mirrored nonrespondents on experimental status, length of welfare receipt, marital status, mean age, and mean number of children eligible for Aid to Families With Dependent Children. There was a high correspondence between the 2 groups in county of residence, and the distribution of number of abortions across the 2 groups was similar.

TABLE 1
Characteristics of Respondents and Nonrespondents to a Survey on Pregnancy, Childbirth, and Abortion: New Jersey Welfare Clients, July 1995–January 1996

It is apparent that Hispanics were underrepresented in the surveyed group, as were women with less than a high school education. Respondents may differ from nonrespondents in unmeasured ways, as well. One of the mechanisms through which these unmeasured differences may surface is clients' work behavior. The close correspondence between respondents and nonrespondents on the key characteristic of employment—a behavior that would be expected to have a high correlation with unmeasured characteristics such as motivation and self-directedness—provides some further evidence for the representativeness of the sample's opinions and attitudes.

Variables

Four sets of variables were used to describe and predict accuracy in abortion reporting: demographic variables, attitudinal variables, survey implementation variables, and fertility-related variables.

The first set of variables contains typical demographic measures such as age, race, marital status, and education. In addition, this set includes a measure of whether the respondent was a long-term welfare recipient or a short-term recipient.

The second set of variables consists of 3 scales, alluded to earlier, that tap respondents' motivations with regard to childbearing and their attitudes about abortion: the PCM scale, the NCM scale, and the AAI. The PCM scale contains 21 items that describe the desirability of getting pregnant and having a child. The respondent is asked how important each of these items was in her decision to get pregnant or to have a baby. The items are coded 1 (important) or 2 (not important). Following Miller,19 responses to all 21 items are added to form a scale score. However, the items are recoded prior to summing so that higher scores indicate a more positive motivation toward childbearing.

The NCM scale contains 19 items that describe the “undesirability” of having a pregnancy or giving birth. These items are coded 1 (important) or 2 (not important). The responses to the 19 items are then summed to form a scale score, with low scores indicating a more negative motivation toward childbearing.

The AAI assesses how respondents feel about terminating a pregnancy. This index contains 13 items, which are coded 1 (an acceptable reason to terminate a pregnancy), 2 (not sure), or 3 (not acceptable). Higher scores on this scale indicate a more restrictive attitude toward abortion.
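The recode-and-sum scoring of the 3 scales can be sketched as follows. Only the coding scheme (item counts, response codes, and score direction) comes from the text; the function names and the example responses are hypothetical illustrations, not Miller's actual instrument:

```python
# Sketch of the scale scoring described above. PCM and NCM items are
# coded 1 (important) or 2 (not important); AAI items are coded 1
# (acceptable reason), 2 (not sure), or 3 (not acceptable).

def score_pcm(items):
    """Positive childbearing motivation: 21 items, recoded before
    summing (1 -> 1, 2 -> 0) so that higher scores indicate a more
    positive motivation toward childbearing."""
    assert len(items) == 21
    return sum(1 if x == 1 else 0 for x in items)

def score_ncm(items):
    """Negative childbearing motivation: 19 items summed directly;
    low scores indicate a more negative motivation."""
    assert len(items) == 19
    return sum(items)

def score_aai(items):
    """Abortion Attitude Index: 13 items summed directly; higher
    scores indicate a more restrictive attitude toward abortion."""
    assert len(items) == 13
    return sum(items)

# Hypothetical respondent: every PCM item "important", every NCM item
# "not important", "not sure" on every AAI item.
print(score_pcm([1] * 21))  # 21: most positive childbearing motivation
print(score_ncm([2] * 19))  # 38: least negative childbearing motivation
print(score_aai([2] * 13))  # 26: midpoint attitude toward abortion
```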

The third group of variables, survey implementation measures, describes the mode of administration (by telephone or in person) and whether the respondent was given the long or short version of the interview. As in Anderson et al.,12 the measure used to represent fertility-related variables in this study is the number of abortions the respondents had during the study period according to Medicaid data.

The external data used to validate reported fertility events came from 2 administrative data systems. The welfare department's database (New Jersey Department of Human Services, Family Assistance Management Information System, 1992–1996) provides demographic data. The Medicaid claims files (New Jersey Department of Human Services, Division of Medical Assistance, 1992–1996) provide data on abortions.

Analytic Strategy

Abortions as reported in the survey for the period January 1993 through the end of the survey (January 1996) were matched with quarterly Family Assistance Management Information System and Medicaid claims files by means of the respondent's welfare case number and interview date. If the data were matched on a quarterly or yearly basis, a misreported abortion might be categorized as a nonreported event; therefore, data for the entire time period (January 1993 through January 1996) were matched.

Two kinds of analyses were undertaken: descriptive comparisons of reported and actual abortions, and multivariate analysis of the determinants of underreporting of abortions. The dependent variable for the latter analysis was defined only for women who had had at least 1 abortion (according to administrative records) during the study period (n = 224). This variable captures whether a client accurately reported at least 1 abortion that she had had during the study period; it was coded 1 if the respondent reported an abortion that was also found in the administrative data system and 0 if she did not. Logistic regression models were used to estimate a respondent's accuracy in reporting abortions.
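The dependent-variable construction described above can be sketched as follows. The 1/0 coding and the full-window (rather than quarterly) matching follow the text; the record layout, field names, and case numbers are hypothetical:

```python
# Sketch of the outcome coding described above: among women with at
# least 1 Medicaid-recorded abortion in the study window, code 1 if
# the survey also reports an abortion in that window, else 0.

STUDY_START, STUDY_END = "1993-01", "1996-01"

def in_window(ym):
    # "YYYY-MM" strings compare correctly in lexicographic order.
    return STUDY_START <= ym <= STUDY_END

def accurate_reporter(case_number, survey_abortions, medicaid_claims):
    """Return 1/0 for a woman with >= 1 Medicaid-recorded abortion in
    the study period, or None if she falls outside the analysis sample."""
    actual = [d for d in medicaid_claims.get(case_number, []) if in_window(d)]
    if not actual:
        return None  # no recorded abortion: excluded from the analysis
    reported = [d for d in survey_abortions.get(case_number, []) if in_window(d)]
    return 1 if reported else 0

# Hypothetical records keyed by welfare case number.
medicaid = {"A1": ["1994-06"], "A2": ["1995-02"], "A3": []}
survey = {"A1": ["1994-05"], "A2": [], "A3": []}

print(accurate_reporter("A1", survey, medicaid))  # 1: recorded and reported
print(accurate_reporter("A2", survey, medicaid))  # 0: recorded, not reported
print(accurate_reporter("A3", survey, medicaid))  # None: excluded
```

Matching over the whole January 1993 to January 1996 window, rather than quarter by quarter, is what keeps a misdated but real abortion from being counted as unreported.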

RESULTS

Descriptive Results

Table 2 shows a comparison of reported vs actual abortions for all survey respondents and for different subgroups. Overall, there is a wide difference between the 2 data sources in the number of abortions reported: only 29% of actual abortions were reported by survey respondents. White women reported about 71% of their actual abortions, whereas Black women reported only 24% of theirs. The reporting rate for Hispanics, 34%, lies between those of Whites and Blacks.

TABLE 2
Self-Reported vs Actual Abortions, by Respondents' Characteristics and Survey Implementation Factors: New Jersey Welfare Clients, July 1995–January 1996

Abortion reporting generally increased with age, except for the 2 oldest age groups. Never-married women were the least likely to report abortions. No specific pattern emerged with respect to education, however. Survey implementation factors indicate that in-person interviews and long interviews were less successful in eliciting the truth about abortions than interviews conducted over the telephone and shorter interviews.

Multivariate Results

Table 3 presents descriptive statistics on the variables used in the multivariate analysis. As noted, only women of childbearing age who, according to Medicaid claims records, had had at least 1 abortion during the study period were included in this analysis. The racial breakdown of the sample was 77% Black, 15% Hispanic, and 8% White. About 76% of these women had never been married, and their ages ranged from 17 to 49 years. Average education was about 12 years.

TABLE 3
Description of Variables Used in Multivariate Analysis of the Determinants of Underreporting of Abortions: New Jersey Welfare Clients, July 1995–January 1996

Table 4 displays the results of logistic regression analyses. Model 1 contains only the demographic variables; attitudinal measures, survey implementation measures, and fertility measures are added incrementally in models 2, 3, and 4. We see from model 1 that the only demographic variables that were good predictors of accuracy in abortion reporting were race and age. Education, marital status, and length of welfare receipt did not appear to be significant predictors of accuracy in abortion reporting.

TABLE 4
Coefficients (SEs) From Logistic Regression of Truthful Abortion Reporters on Demographic, Attitudinal, Survey Implementation, and Fertility-Related Factors: New Jersey Welfare Clients, July 1995–January 1996

Model 2 shows that in addition to demographic factors, women's attitudes toward abortion and childbearing in general had a significant bearing on whether or not they accurately reported abortions. Women with more restrictive attitudes were significantly less likely than other women to tell the truth about having had an abortion. Also, women who held a less positive attitude toward childbearing were more likely to report an abortion.

Model 3 adds the mode of survey administration. Respondents who were interviewed in their homes were significantly less likely than respondents who were interviewed by telephone to admit they had had an abortion.

Model 4, in which the fertility-related measure of actual number of abortions is added, contributes little to model 3 by way of statistical fit. All the significant variables from the earlier models, however, remain significant in the final model, and their effects typically become stronger. Model 4 shows that the odds of accurate reporting for Blacks were 70% lower than those for Whites. A 1-unit increase on the AAI reduced the odds of accurate reporting by 12%, a 1-unit increase on the PCM scale reduced the odds by 6%, and a 1-unit increase on the NCM scale reduced the odds by 8%. Being interviewed in person reduced the odds of accurate reporting by 87%.
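The percent-change-in-odds figures reported for model 4 are the standard transformation of logistic regression coefficients, 100 × (e^β − 1). A minimal sketch; the β values below are back-calculated to reproduce the reported percentages and are not the actual Table 4 coefficients:

```python
import math

def pct_change_in_odds(beta):
    """Percent change in the odds of accurate reporting for a 1-unit
    increase in a predictor with logit coefficient beta."""
    return 100 * (math.exp(beta) - 1)

# Illustrative coefficients back-calculated from the reported effects.
print(round(pct_change_in_odds(math.log(0.30))))  # -70: Black vs White
print(round(pct_change_in_odds(math.log(0.88))))  # -12: 1-unit AAI increase
print(round(pct_change_in_odds(math.log(0.13))))  # -87: in-person interview
```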

An effort was made in these multivariate analyses to test whether abortion attitudes had an impact on underreporting that was moderated by race. This was done by including an interaction term between race and the AAI in models 2, 3, and 4. The interaction term did not reach statistical significance in these models, providing no evidence for the racial dependence of the AAI in this sample.

DISCUSSION

This study attempted to gauge the extent of abortion underreporting in a sample of welfare mothers and sought to explain underreporting behavior through demographic, attitudinal, survey implementation, and fertility-related factors. The comparison of self-reported fertility data with individual-level information from an external data source (Medicaid administrative data files) for this low-income population is one contribution of this study. It is worth noting that this is only the second study in the United States to use an individual-level abortion matching methodology.

In addition, the study examined the role of an expanded set of explanatory variables in influencing reporting of abortions. This expanded set includes attitudinal factors as well as the demographic, survey implementation, and fertility-related factors that have previously been used to explain abortion underreporting.

Finally, a perspective is introduced that could help our understanding of racial differences in underreporting—one that indicates that improvements in survey technology may have only a limited impact in closing the racial gap. The key findings of this study are as follows:

  • Only a small percentage of actual abortions are reported.
  • Race is the largest single demographic factor affecting accuracy of abortion reporting; Black women are the least likely to report having had an abortion.
  • Underreporting of abortions generally decreases with age.
  • Respondents' general disposition toward childbearing exerts a significant influence on whether or not they report an abortion; women whose attitude toward childbearing is more positive are more likely to underreport abortions.
  • Women who have more restrictive attitudes toward abortion are more likely to be underreporters.
  • Longer interviews and interviews conducted in person are less likely to elicit truthful abortion reporting.

How does one account for the enormous difference in underreporting of abortions between White and Black women? There are at least 3 possible explanations: (1) race may interact with abortion attitudes to produce differences in underreporting; (2) there may be differences across racial groups in other attitudinal measures (not included in the analyses) that contribute to differences in underreporting; or (3) there may be differences in the degree of mistrust with which different racial groups regard the survey process.

The first possibility was tested empirically in this study, but the evidence did not support the notion that the relationship between abortion attitudes and underreporting varies by race. Differences in other attitudes or trust levels, however, may provide a partial answer to the racial differences in abortion underreporting. Smith and Seltzer, for example, in their analyses of more than 50 national, state, and local public opinion polls, reported that on a series of questions measuring religiosity, the differences between Blacks and Whites consistently ranged from a “gap” of 10 to 19 percentage points to a “chasm” of more than 40 percentage points, with Blacks viewing religion as more important.20,21 They also showed that the Black–White difference in response to statements such as “People cannot be trusted” approached 50 percentage points, with Blacks much less trusting.

Although Smith and Seltzer did not address the issue of abortion, the case can be made that this behavior—abortion—occurs at the very point where religious guilt and the need for concealment intersect. Perhaps carefully worded questions, more sensitive questionnaire contexts, different survey modes, or racially matched interviewers could help overcome at least the element of mistrust and improve abortion reporting among Blacks.

This study's findings on interview length and mode of survey administration offer some hope in improving reporting. Shorter interviews and interviews that put some psychologic distance between the respondent and the survey administrator appear to induce respondents to be more truthful reporters of abortion. Other studies also show that self-administered questionnaires, with or without computerization, appear to offer the best results in reporting.7,14,15,22–27

The generalizability of the survey data used in this study may, of course, be limited by a low response rate, concerns about sample representativeness, and the population under study. The comparison of survey respondents and nonrespondents on key measured factors indicates that nonresponse bias may be small and that estimates of underreporting may be conservative. The study's findings, moreover, are not generally at variance with those of similar studies that have used nationally representative samples of women at different childbearing ages. For example, the percentage of actual abortions reported by non-White women in this study is comparable to the percentages found in studies by Hammerslough5 and Jones and Forrest,14 both of which used nationally representative data from the National Survey of Family Growth, the National Longitudinal Surveys of Youth, or both.

The most significant difference between the sample used in this study and the samples in other studies cited in this article is in socioeconomic level. The present study was limited to recipients of Medicaid or public assistance, a factor that signals uniformly high levels of poverty. The overall percentage of actual abortions reported in national surveys is somewhat higher than the 29% reported here, ranging between 35% and 59%.7,14 However, when socioeconomic status is controlled for in these national studies, the differences in actual reporting percentages between different samples of low-income women are small.

This study points up the difficulties of relying solely on survey responses to study induced abortions. This problem poses a serious dilemma for researchers attempting to study the pressing issues of pregnancy, birth, and abortion decisions. These findings indicate that survey data require an external source of validation before they can be used to make legitimate inferences or projections about abortions.

Acknowledgments

This work was supported by the New Jersey Department of Human Services (contract A63003) and by the Administration for Children and Families and the Assistant Secretary for Planning and Evaluation, US Department of Health and Human Services.

I thank Michael J. Camasso, Center for Urban Policy Research, Rutgers University, for alerting me to the literature on race differences in opinion survey research, and I am grateful to the following, all at Princeton University, for their comments: Sara McLanahan and James Trussell, Office of Population Research; Anne Case, Department of Economics; and Jennifer Hochschild, Woodrow Wilson School.

Notes

Peer Reviewed

References

1. Edin K, Lein L. Making Ends Meet: How Single Mothers Survive Welfare and Low-Wage Work. New York, NY: Russell Sage Foundation; 1997.
2. Dillman DA. Mail and Telephone Surveys: The Total Design Method. New York, NY: John Wiley & Sons Inc; 1978.
3. Turner CF, Ku L, Rogers SM, Lindberg LD, Pleck JH, Sonenstein FL. Adolescent sexual behavior, drug use, and violence: increased reporting with computer survey technology. Science. 1998;280:867–873. [PubMed]
4. Catania J, Gibson D, Chitwood D, Coates T. Methodological problems in AIDS behavioral research: influences on measurement error and participation bias in studies of sexual behavior. Psychol Bull. 1990;108:339–362. [PubMed]
5. Hammerslough CR. Correcting Survey-Based Contraceptive Failure Rates for Abortion Underreporting [dissertation]. Princeton, NJ: Princeton University; 1986.
6. Grady WR, Hayward MD, Yagi J. Contraceptive failure in the United States: estimates from the 1982 National Survey of Family Growth. Fam Plann Perspect. 1986;18:200–209. [PubMed]
7. Fu H, Darroch JE, Henshaw SK, Kolb E. Measuring the extent of abortion underreporting in the 1995 National Survey of Family Growth. Fam Plann Perspect. 1998;30:128–133, 138. [PubMed]
8. Casterline JB. Collecting data on pregnancy loss: a review of evidence from the World Fertility Survey. Stud Fam Plann. 1989;20:81–95. [PubMed]
9. Foreit KG, Nortman DL. A method for calculating rates of induced abortion. Demography. 1992;29:127–137. [PubMed]
10. Udry RJ, Gaughan M, Schwingl PJ, van den Berg BJ. A medical record linkage analysis of abortion underreporting. Fam Plann Perspect. 1996;28:228–231. [PubMed]
11. Jones EF, Forrest JD. Use of a supplementary survey of abortion patients to correct contraceptive failure rates for underreporting of abortion. In: Measuring the Dynamics of Contraceptive Use: Proceedings of the United Nations Expert Group Meeting. New York, NY: Population Division, Department of International Economic and Social Affairs, United Nations; 1991:139–152.
12. Anderson BA, Katus K, Puur A, Silver BD. The validity of survey responses on abortion: evidence from Estonia. Demography. 1994;31:115–132. [PubMed]
13. Mosher WD. Reproductive impairments in the United States, 1965–1982. Demography. 1985;22:415–430. [PubMed]
14. Jones EF, Forrest JD. Underreporting of abortion in surveys of US women: 1976–1988. Demography. 1992;29:113–126. [PubMed]
15. London KA, Williams LB. A comparison of abortion underreporting in an in-person interview and a self-administered questionnaire. Paper presented at: Annual Meeting of the Population Association of America; May 3–5, 1990; Toronto, Ontario.
16. Huntington D, Mensch B, Toubia N. A new approach to eliciting information about induced abortion. Stud Fam Plann. 1993;24:120–124. [PubMed]
17. Thornton C, Hershey A. REACH Welfare Initiative: Experience of AFDC Recipients Who Leave Welfare With a Job. Princeton, NJ: Mathematica Policy Research Inc; 1990.
18. Urso DC. Public believes research is useful. The Frame: A Quarterly Newsletter for Survey Researchers. June 1996:1–3.
19. Miller WB. Childbearing motivations, desires, and intentions: a theoretical framework. Genet Soc Gen Psychol Monogr. 1993;120:223–258. [PubMed]
20. Smith RC, Seltzer R. Race, Class, and Culture: A Study in Afro-American Mass Opinion. Albany: State University of New York Press; 1992.
21. Smith RC, Seltzer R. Contemporary Controversies and the American Racial Divide: The O. J. Simpson Case and Other Controversies. Lanham, Md: Rowman & Littlefield Publishers Inc; 2000.
22. Mott F. Evaluation of Fertility Data and Preliminary Analytic Results From the 1983 Survey of the National Longitudinal Surveys of Work Experience of Youth. Columbus, Ohio: Center for Human Resources Research; 1985.
23. Mosher WD, Duffer AP Jr. Experiments in survey data collecting: the National Survey of Family Growth pretest. Paper presented at: Annual Meeting of the Population Association of America; May 5–7, 1994; Miami, Fla.
24. Mosher WD, Pratt WF, Duffer AP Jr. CAPI, event histories, and incentives in the NSFG cycle 5 pretest. Paper presented at: Annual Meeting of the American Statistical Association; August 13–18, 1994; Toronto, Ontario.
25. Lessler JT, Weeks MF, O'Reilly JM. Results from the National Survey of Family Growth cycle 5 pretest. Paper presented at: Annual Meeting of the American Statistical Association; August 13–18, 1994; Toronto, Ontario.
26. Tourangeau R, Jobe JB, Pratt WF. Design and results of the Women's Health Study. Paper presented at: Annual Meeting of the American Statistical Association; August 13–18, 1994; Toronto, Ontario.
27. Tourangeau R, Smith TW. Asking sensitive questions: the impact of data collection mode, question format, and question context. Public Opinion Q. 1996;60:275–304.
