NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Substance Abuse and Mental Health Services Administration. National Survey on Drug Use and Health: Summary of Methodological Studies, 1971–2014 [Internet]. Rockville (MD): Substance Abuse and Mental Health Services Administration (US); 2014 Nov.


2001

Discussion notes: Session 5

CITATION: Camburn, D., & Hughes, A. (2001). Discussion notes: Session 5. In M. L. Cynamon & R. A. Kulka (Eds.), Seventh Conference on Health Survey Research Methods (HHS Publication No. PHS 01-1013, pp. 251–253). Hyattsville, MD: U.S. Department of Health and Human Services, Public Health Service, Centers for Disease Control and Prevention, National Center for Health Statistics.

PURPOSE/OVERVIEW: Data collection at the State and local level is recognized as a requirement for surveillance and monitoring systems. The presentations in the conference session discussed in this paper describe Federal surveillance systems that include providing State and local area estimates among their goals.

METHODS: The session addressed issues affecting the systematic collection of State- and local-level data. The four surveys reported on in the session are the Behavioral Risk Factor Surveillance System (BRFSS), the National Household Survey on Drug Abuse (NHSDA), the National Immunization Survey (NIS), and the State and Local Area Integrated Telephone Survey (SLAITS). The topics discussed include (1) variation in data quality; (2) timeliness of data release; (3) balancing national, State, and local needs; (4) within-State estimates; and (5) analyzing and reporting.

RESULTS/CONCLUSIONS: (1) Concerns about variation in data quality centered on variation in State response rates, cultural differences in the level of respondent cooperation, interviewer effects, and house effects. (2) Survey producers are doing all they can to release the appropriate data in a timely manner, to document limitations, and to minimize microdata disclosure risk. (3) Standardized, well-controlled methodologies may be an advantage in some circumstances, but not others. State-level control over content is important, particularly for within-State analysis, although States are interested in comparability across States. (4) One limitation of current State-level surveillance systems is that resource and time constraints restrict the amount of data that can be collected within individual States for smaller geographic domains or for demographic subgroups. (5) An important issue for surveillance systems collecting data and providing estimates that cover a large number of areas is determining appropriate analysis methods and identifying appropriate methods for reporting the data that include area-specific data quality indicators. For example, in NHSDA, direct estimates are provided for the eight largest States while model-based estimates are calculated for the remaining States. Eventually, NHSDA plans to provide direct estimates for all 50 States.

Variance models applicable to the NHSDA

CITATION: Chromy, J., & Myers, L. (2001). Variance models applicable to the NHSDA. In Proceedings of the 2001 Joint Statistical Meetings, American Statistical Association, Survey Research Methods Section, Atlanta, GA [CD-ROM]. Alexandria, VA: American Statistical Association.

PURPOSE/OVERVIEW: When planning or modifying surveys, it is often necessary to project the impact of design changes on the variance of survey estimates. Initial survey planning requires evaluation of the efficiency of alternative sample designs. Efficient designs meet all major requirements at the minimum cost and can only be evaluated in terms of appropriate variance and cost models. This paper focuses on a fairly simple variance model that nevertheless accounts for binomial variation, stratification, intracluster correlation effects, variable cluster sizes, differential sampling of age groups and geographic areas, and residual unequal weighting.

METHODS: The authors examined data from the 1999 National Household Survey on Drug Abuse national sample to develop variance model parameter estimates. They then compared modeled variance estimates with direct estimates of variance based on the application of design-based variance estimation using the SUDAAN® software.

RESULTS/CONCLUSIONS: The modeled relative standard errors provided a realistic approximation to those obtained from design-based estimates. Because they expressed the variance in terms of design parameters, they were useful for evaluating the impact of alternative design configurations. The simple (unweighted) approach to variance component estimation appeared to provide useful results in spite of ignoring the weights. The impact of unequal weighting was treated as a multiplicative factor. Although the unequal weighting effect can be controlled to some extent by the survey design, the impact of nonresponse and of weight adjustment for nonresponse and for calibration against external data can only be controlled in a general way. The unequal weighting effects were not easily subject to any optimization strategy. The authors concluded that the model treatment of variable cluster sizes, particularly for small domains, should be useful in developing variance models for a wide variety of applications.
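The multiplicative unequal weighting factor mentioned above is commonly approximated by Kish's 1 + CV²(weights). The sketch below is illustrative only (it is not the authors' actual variance model or parameter values); it shows how such a factor can inflate a simple binomial variance when computing a modeled relative standard error:

```python
import math

def unequal_weighting_effect(weights):
    """Kish's approximation: deff_w = 1 + CV^2 of the weights."""
    n = len(weights)
    mean_w = sum(weights) / n
    var_w = sum((w - mean_w) ** 2 for w in weights) / n
    return 1.0 + var_w / mean_w ** 2

def modeled_rse(p, n_eff, deff):
    """Relative standard error of a proportion p under a binomial
    variance inflated by a design effect deff."""
    var = deff * p * (1 - p) / n_eff
    return math.sqrt(var) / p

# Equal weights imply no inflation (factor of 1.0);
# variable weights inflate the variance.
print(unequal_weighting_effect([2.0, 2.0, 2.0]))  # 1.0
print(unequal_weighting_effect([1.0, 3.0]))       # 1.25
```

A design-based package such as SUDAAN estimates the variance directly from the sample design; the modeled form above is useful precisely because it is written in terms of design parameters that can be varied at the planning stage.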

Estimation for person-pair drug-related characteristics in the presence of pair multiplicities and extreme sampling weights for NHSDA

CITATION: Chromy, J. R., & Singh, A. C. (2001). Estimation for person-pair drug-related characteristics in the presence of pair multiplicities and extreme sampling weights for NHSDA. In Proceedings of the 2001 Joint Statistical Meetings, American Statistical Association, Survey Research Methods Section, Atlanta, GA [CD-ROM]. Alexandria, VA: American Statistical Association.

PURPOSE/OVERVIEW: In the National Household Survey on Drug Abuse (NHSDA), Brewer’s method is adapted for selecting 0, 1, or 2 individuals for the drug survey from the screened dwelling unit (DU) selected at the first phase. Typically, the parameter of interest is at the person level and not the pair level (e.g., at the parent level in the parent-child data). However, pair data are used for estimation because the study variable is measured only through a pair. Two major problems arise in estimation. The first is multiplicity: for a given domain, several pairs in a household could be associated with the same person, and because the multiplicities are domain specific, it is difficult to produce a single set of calibrated weights. The second is extreme weights, which arise when pair selection probabilities are small (depending on the age groups) and can lead to unstable estimates.

METHODS: For the first problem, the authors propose to perform calibration simultaneously against controls for key domains. For the second problem, they propose a Hajek-type modification, which entails calibration to controls obtained from larger samples from previous phases. Extreme weights are further reduced by a repeat calibration under bound restrictions while continuing to meet controls.

RESULTS/CONCLUSIONS: It is clear that weights based on pairwise probabilities are required for many drug behavior analyses of the NHSDA data. For this purpose, the analyst needs to make some fundamental decisions about defining population parameters when the person has the same relationship (parent of or child of) to more than one person in the household. For the two problems of multiplicities and extreme weights that might arise in pair data analysis, it was shown how the estimator could be adjusted in the presence of multiplicities and how the weights could be calibrated to alleviate the problem of extreme weights.
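The multiplicity adjustment described above amounts to dividing each pair's contribution by the person's pair multiplicity so that a person linked to several sampled pairs is not counted once per pair. A toy sketch (illustrative only, not the NHSDA estimator itself):

```python
def multiplicity_adjusted_total(pair_records):
    """Estimate a person-level total from pair-level data.

    Each record is (pair_weight, y_value, multiplicity), where
    multiplicity is the number of pairs in the household linked to
    the person of interest for the given domain.  Dividing the
    weight by the multiplicity prevents double-counting a person
    who appears in more than one pair.
    """
    return sum(w * y / m for (w, y, m) in pair_records)

# Hypothetical data: a parent linked to two sampled children appears
# in two pairs, each carrying half the contribution; a second parent
# appears in one pair.
records = [(100.0, 1, 2), (100.0, 1, 2), (50.0, 1, 1)]
print(multiplicity_adjusted_total(records))  # 150.0
```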

Substance use survey data collection methodologies and selected papers [Commentary]

CITATION: Colliver, J., & Hughes, A. (2001). Substance use survey data collection methodologies and selected papers [Commentary]. Journal of Drug Issues, 31(3), 717–720.

PURPOSE/OVERVIEW: This paper presents a commentary on the methodological differences between three national surveys that study substance use among adolescents and young adults: the Monitoring the Future (MTF) study, the National Household Survey on Drug Abuse (NHSDA), and the Youth Risk Behavior Survey (YRBS).

METHODS: This paper reviews the current literature discussing the differences in estimates of drug use for the three surveys as a result of differences in documentation, sampling, and survey design.

RESULTS/CONCLUSIONS: The comparative studies sponsored by the Office of the Assistant Secretary for Planning and Evaluation provide an excellent resource for understanding the methodological differences in the MTF, the NHSDA, and the YRBS that contribute to the discrepancy in estimates of drug use provided each year.

Coverage, sample design, and weighting in three federal surveys

CITATION: Cowan, C. D. (2001). Coverage, sample design, and weighting in three federal surveys. Journal of Drug Issues, 31(3), 599–614.

PURPOSE/OVERVIEW: This paper compares and contrasts coverage and sampling used in three national surveys on drug use among teenagers: the Monitoring the Future (MTF) study, the National Household Survey on Drug Abuse (NHSDA), and the Youth Risk Behavior Survey (YRBS).

METHODS: This review starts by comparing the national teenage population coverage of each of the three surveys to assess the effects of coverage error. Next, the sample design for each study is compared, as well as changes to the methodology made over the years. Finally, the weighting procedures used are analyzed.

RESULTS/CONCLUSIONS: The author concluded that all three studies were well designed, and it was difficult to make recommendations to any of these surveys for improving the assessment of drug abuse among teenagers because it could affect other key variables or subgroups in the individual surveys that are not being compared. However, the author indicated that one recommendation that may not negatively affect the validity of the studies was an in-depth coverage study to assess both frame coverage and nonresponse.

Mode effects in self-reported mental health data

CITATION: Epstein, J. F., Barker, P. R., & Kroutil, L. A. (2001). Mode effects in self-reported mental health data. Public Opinion Quarterly, 65(4), 529–549.

PURPOSE/OVERVIEW: This article measures the mode effect differences between audio computer-assisted self-interviewing (ACASI) and an interviewer-administered paper-and-pencil interview (I-PAPI) on respondent reports of mental health issues. Four mental health modules on major depressive episode, generalized anxiety disorder, panic attack, and agoraphobia were taken from the World Health Organization’s Composite International Diagnostic Interview (CIDI) Short Form.

METHODS: The ACASI data were collected in a large-scale field experiment on alternative ACASI versions of the National Household Survey on Drug Abuse (NHSDA). The field experiment was conducted from October through December 1997. The comparison group comprised a subsample of the Quarter 4 1997 NHSDA. In the field experiment, mental health questions were administered by ACASI to 865 adults. In the comparison group, mental health questions were administered to a sample of 2,126 adults using I-PAPI. Logistic regression models were used to assess differences in reporting mental health syndromes by mode of administration. Estimates were made overall and for subgroups defined by demographic variables such as age, race/ethnicity, gender, education level, geographic region, and population density, while controlling for confounding variables and interactions.

RESULTS/CONCLUSIONS: For most measures, the percentages of people reporting a mental health syndrome were higher for ACASI than I-PAPI. Overall differences were significant only for major depressive episode and generalized anxiety disorder. This study suggests that respondents report higher levels of sensitive behavior with ACASI than with I-PAPI likely due to a perception of greater privacy with ACASI.
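The mode comparison above can be illustrated with an unadjusted odds ratio from a 2x2 table; note the article itself used logistic regression with covariate adjustment, and the reporting counts below are hypothetical, not the study's results:

```python
def mode_effect_odds_ratio(yes_acasi, n_acasi, yes_papi, n_papi):
    """Unadjusted odds ratio for reporting a syndrome under ACASI
    versus I-PAPI.  An odds ratio above 1 indicates higher reporting
    under ACASI (no covariate adjustment, unlike the article's
    logistic regression models)."""
    odds_acasi = yes_acasi / (n_acasi - yes_acasi)
    odds_papi = yes_papi / (n_papi - yes_papi)
    return odds_acasi / odds_papi

# Hypothetical counts using the two sample sizes from the abstract
# (865 ACASI, 2,126 I-PAPI); the "yes" counts are invented.
or_estimate = mode_effect_odds_ratio(86, 865, 106, 2126)
print(round(or_estimate, 2))
```

A ratio around 2 in this invented example would correspond to the pattern the authors describe: higher reporting of sensitive items under the more private ACASI mode.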

Impact of computerized screenings on selection probabilities and response rates in the 1999 NHSDA

CITATION: Eyerman, J., Odom, D., Chromy, J., & Gfroerer, J. (2001). Impact of computerized screenings on selection probabilities and response rates in the 1999 NHSDA. In Proceedings of the 2001 Joint Statistical Meetings, American Statistical Association, Survey Research Methods Section, Atlanta, GA [CD-ROM]. Alexandria, VA: American Statistical Association.

PURPOSE/OVERVIEW: The National Household Survey on Drug Abuse (NHSDA) is an ongoing Federal Government survey that tracks substance use in the United States with face-to-face interviews in a national probability sample. In the past, a paper screening instrument was used by field interviewers to identify and select eligible households in the sample frame. The paper screening instrument was replaced with a computerized instrument in 1999. The computerized instrument standardized the screening process and reduced the amount of sampling procedural error in the survey. This paper identifies a type of procedural error that is possible when using paper screening instruments and evaluates its presence in the NHSDA before and after the transition to a computerized screening instrument.

METHODS: The impact of the error is examined against response rates and substance use prevalence estimates for 1999.

RESULTS/CONCLUSIONS: Results suggest that paper screening instruments are vulnerable to sampling procedural error and that the transition to a computerized screening instrument may reduce the amount of the error.

Examining prevalence differences in three national surveys of youth: Impact of consent procedures, mode, and editing rules

CITATION: Fendrich, M., & Johnson, T. P. (2001). Examining prevalence differences in three national surveys of youth: Impact of consent procedures, mode, and editing rules. Journal of Drug Issues, 31(3), 615–642.

PURPOSE/OVERVIEW: This review compares contact methods and mode used in three national surveys on drug use conducted in 1997: the Monitoring the Future (MTF) study, the National Household Survey on Drug Abuse (NHSDA), and the Youth Risk Behavior Survey (YRBS).

METHODS: Differences in information presented during the informed consent process are compared for all three studies to evaluate its impact on prevalence estimates of drug abuse. Next, the mode for the three studies was compared focusing on where the study took place (school vs. home environment), when the study took place, and how it was conducted (self-administered vs. interviewer-administered and paper-and-pencil interviewing [PAPI] vs. computer-assisted interviewing [CAI]). Finally, data-editing procedures for inconsistent responses and missing responses were analyzed for each survey.

RESULTS/CONCLUSIONS: Comparisons of these three surveys suggested that the consent process and mode used in the 1997 NHSDA may have contributed to the lower prevalence estimates compared with the other 1997 studies. The NHSDA consent process required more parental involvement and presented more consent information than did the process used by the other two studies, which may have inhibited respondent reporting. Differences in editing procedures did not appear to account for any differences in prevalence estimates for the three surveys.

Learning from experience: Estimating teen use of alcohol, cigarettes, and marijuana from three survey protocols

CITATION: Fowler, F. J., Jr., & Stringfellow, V. L. (2001). Learning from experience: Estimating teen use of alcohol, cigarettes, and marijuana from three survey protocols. Journal of Drug Issues, 31(3), 643–664.

PURPOSE/OVERVIEW: This paper compares prevalence estimates for drug use among teenagers in the three national surveys funded by the Federal Government: the Monitoring the Future (MTF) study, the National Household Survey on Drug Abuse (NHSDA), and the Youth Risk Behavior Survey (YRBS).

METHODS: Because the three surveys rely on different modes for data collection, comparisons had to be made across similar groups. For that reason, rates at which adolescents in grades 10 and 12 reported drug use were compared. In addition, comparisons were made between males and females and across racial/ethnic groups, such as whites, Hispanics, and blacks. Finally, trends in drug use from 1993 to 1997 were compared across surveys.

RESULTS/CONCLUSIONS: Comparisons across these surveys were hard to draw because any differences in prevalence estimates were due to numerous confounding factors associated with coverage, sampling, mode, questionnaires, and data collection policies. One suggested solution is for each of these studies to set aside a small amount of money for collaborative studies comparing the methods used.

Substance use survey data collection methodologies and selected papers [Commentary]

CITATION: Gfroerer, J. (2001). Substance use survey data collection methodologies and selected papers [Commentary]. Journal of Drug Issues, 31(3), 721–724.

PURPOSE/OVERVIEW: The Office of the Assistant Secretary for Planning and Evaluation sponsored five comparative papers to assess differences in methodologies used by three national surveys on drug use among adolescents.

METHODS: The papers compared the National Household Survey on Drug Abuse (NHSDA), the Monitoring the Future (MTF) study, and the Youth Risk Behavior Survey (YRBS). Differences in sampling, mode, and other data collection properties were assessed to determine their impact on the differences in prevalence estimates from the three surveys.

RESULTS/CONCLUSIONS: Identifying differences in estimates can inform an understanding of the current estimates. The findings of these papers also will help target specific areas for more methodological research. Although these studies were sponsored to explain differences in estimates between the surveys, it also is important to note several consistencies among the surveys, such as in demographic differences and trends over time.

State estimates of substance abuse prevalence: Redesign of the National Household Survey on Drug Abuse (NHSDA)

CITATION: Gfroerer, J., Wright, D., & Barker, P. (2001). State estimates of substance abuse prevalence: Redesign of the National Household Survey on Drug Abuse (NHSDA). In M. L. Cynamon & R. A. Kulka (Eds.), Seventh Conference on Health Survey Research Methods (HHS Publication No. PHS 01-1013, pp. 227–232). Hyattsville, MD: U.S. Department of Health and Human Services, Public Health Service, Centers for Disease Control and Prevention, National Center for Health Statistics.

PURPOSE/OVERVIEW: Starting with the 1999 survey, the National Household Survey on Drug Abuse (NHSDA) sample was expanded and redesigned to improve its capability to estimate substance use prevalence in all 50 States. This paper discusses the new NHSDA design, data generated from the new design, and issues related to its implementation. Also provided are a summary of the old survey design and a discussion of other major changes implemented in 1999, such as the conversion of the survey from paper-and-pencil interviewing (PAPI) to computer-assisted interviewing (CAI).

METHODS: The pre-1999 survey design and the new survey design are compared and discussed in terms of various methodological implications: (1) data collection methodology, (2) questionnaire content, (3) data limitations, (4) sample design, (5) issues in implementing the expanded sample, (6) estimates that the new design will provide, and (7) estimation and analysis issues.

RESULTS/CONCLUSIONS: The findings include the following: (1) Although the basic research methodology has remained unchanged, the conversion from PAPI to CAI incorporated features designed to decrease burden and increase data quality, including electronic screening, range edits, and consistency checks throughout the questionnaire, and inconsistency resolution in audio computer-assisted self-interviewing (ACASI). (2) Although the content of the 1999 CAI questionnaire is similar to the 1998 PAPI questionnaire, the CAI interview length is considerably shorter than the PAPI interview length. (3) Although the methods used in the survey have been shown effective in reducing reporting bias, there is still probably some unknown level of underreporting that occurs. (4) In response to a need for comparable State-level estimates of substance abuse prevalence, in 1999 SAMHSA expanded the survey sample. This was determined to be feasible based on prior experiences with modeling for selected States, as well as a sampling plan for 1999 that would facilitate State-level estimation. (5) The size and distribution of the 1999 sample units across the 50 States posed a challenge for data collection operations. In spite of extensive training, the inexperience of new interviewers hired for the 1999 survey expansion led to a decline in response rates relative to prior NHSDAs. (6) The sample was designed to produce both model-based and sample-based State-level estimates of a variety of substance use measures. (7) Estimation and analysis issues include comparability of NHSDA State estimates, comparisons of NHSDA State-level estimates to other surveys in States, assessing trends within States, and data release and disclosure limitation.

Understanding the differences in youth drug prevalence rates produced by the MTF, NHSDA, and YRBS studies

CITATION: Harrison, L. D. (2001). Understanding the differences in youth drug prevalence rates produced by the MTF, NHSDA, and YRBS studies. Journal of Drug Issues, 31(3), 665–694.

PURPOSE/OVERVIEW: This paper explores methodological differences in three national surveys of drug use in order to explain the discrepancy in prevalence estimates between the three sources.

METHODS: The three surveys examined in this paper are the National Household Survey on Drug Abuse (NHSDA), the Monitoring the Future (MTF) study, and the Youth Risk Behavior Survey (YRBS). This paper explores the validity of the survey estimates provided by these studies by comparing their differences in methodology, such as survey design, anonymity, confidentiality, and question context.

RESULTS/CONCLUSIONS: Although the purpose of this paper was to explore the differences in estimates among these three surveys, the analyses revealed that the estimates might be more similar than originally thought. The confidence intervals for several variables in the surveys overlapped, and trend analyses in drug use for the surveys followed the same pattern. Any differences found between the survey estimates were likely a result of many minor methodological differences between the surveys. No one survey can be shown to be more accurate than another, and in fact using several different sources provides a more informed overall picture of drug use among youths. The author concluded that each survey should continue using its current methodology because, although different, each fulfills a particular need. However, more studies could be done to continue learning about the impact of methodological differences in each survey.

Substance use survey data collection methodologies: Introduction

CITATION: Hennessy, K. H., & Ginsberg, C. (Eds.). (2001). Substance use survey data collection methodologies: Introduction. Journal of Drug Issues, 31(3), 595–598.

PURPOSE/OVERVIEW: There are three annual surveys funded by the U.S. Department of Health and Human Services that collect data on the prevalence of substance use and abuse among youths. The results of these studies are used to form programs and influence policies to address the substance use problem among youths.

METHODS: The Office of the Assistant Secretary for Planning and Evaluation was tasked with comparing and contrasting the survey design, sampling, and statistical assessment used in the three surveys: the National Household Survey on Drug Abuse (NHSDA), the Monitoring the Future (MTF) study, and the Youth Risk Behavior Survey (YRBS).

RESULTS/CONCLUSIONS: Although the three surveys produce different prevalence estimates for drug use and abuse among youths, all three have strong designs and have shown similar trends over time. The papers commissioned to compare the methodologies of these three studies should further aid in understanding the differences between these surveys and assist policymakers in tracking the prevalence and trends of drug use.

Substance use survey data collection methodologies [Commentary]

CITATION: Kann, L. (2001). Substance use survey data collection methodologies [Commentary]. Journal of Drug Issues, 31(3), 725–728.

PURPOSE/OVERVIEW: Alcohol and drug use is a leading social and health concern in the United States. It is associated with and contributes to higher mortality rates and crime. Therefore, national surveillance of alcohol and drug use is an integral public health activity.

METHODS: This paper is a commentary on the five papers sponsored by the Office of the Assistant Secretary for Planning and Evaluation to evaluate the differences in methodology used in the leading surveys on drug use in the United States and to assess the impact on prevalence estimates for drug use.

RESULTS/CONCLUSIONS: The author concluded that the five papers were a solid resource in understanding the different methodologies used in collecting information about drug use. The papers also highlighted further methodological work that could be done to clarify more fully the effects of methodological differences on the prevalence estimates for the three surveys.

Needs for state and local data of national relevance

CITATION: Lepkowski, J. M. (2001). Needs for state and local data of national relevance. In M. L. Cynamon & R. A. Kulka (Eds.), Seventh Conference on Health Survey Research Methods (HHS Publication No. PHS 01-1013, pp. 247–250). Hyattsville, MD: U.S. Department of Health and Human Services, Public Health Service, Centers for Disease Control and Prevention, National Center for Health Statistics.

PURPOSE/OVERVIEW: Data from health surveys provide information critical to State and local governments for health policy and resource allocation decisions. The four surveys described in the session that this discussion paper addresses employ various methodological approaches for collecting health data for State and local areas.

METHODS: The four surveys addressed in this discussion paper are the Behavioral Risk Factor Surveillance System (BRFSS), the National Household Survey on Drug Abuse (NHSDA), the National Immunization Survey (NIS), and the State and Local Area Integrated Telephone Survey (SLAITS).

RESULTS/CONCLUSIONS: The four surveys share important design goals, such as producing valid estimates for the entire country and for States. However, the BRFSS, NIS, and SLAITS all have noncoverage concerns because of the use of telephone sampling methods for the household portion of the population. Also, although nonresponse is present in all four surveys, the potential nonresponse bias is greatest for the three surveys employing telephone sampling methods. The combination of noncoverage error and nonresponse error can be problematic, especially at the State level. The author suggests several ways to address this concern, including compensatory weights for noncoverage and nonresponse, the use of supplemental frames, and researching models to fully address factors that affect nonresponse. The author also notes that improved survey content and the coordination of State- and nationally administered surveys are of great importance to increasing the quality of State and local health surveys.

Development of computer-assisted interviewing procedures for the National Household Survey on Drug Abuse

CITATION: Office of Applied Studies. (2001, March). Development of computer-assisted interviewing procedures for the National Household Survey on Drug Abuse (HHS Publication No. SMA 01-3514, Methodology Series M-3). Rockville, MD: Substance Abuse and Mental Health Services Administration.

PURPOSE/OVERVIEW: In 1999, the National Household Survey on Drug Abuse (NHSDA) sample was expanded and redesigned to permit using a combination of direct and model-based small area estimation (SAE) procedures that allow the Substance Abuse and Mental Health Services Administration (SAMHSA) to produce estimates for all 50 States and the District of Columbia. In addition, computer-assisted data collection procedures were adopted for both screening and interviewing. This report summarizes the research to develop these computer-assisted screening and interviewing procedures.

METHODS: This report covers a variety of NHSDA field experiment topics. To start, Chapter 2 gives a brief history of research on the NHSDA, and Chapter 3 offers further background information, including a literature review and an overview of critical design and operational issues. Chapter 4 focuses on the 1996 feasibility experiment and cognitive laboratory research, while Chapters 5 through 9 delve into the 1997 field experiment. Specifically, Chapter 5 summarizes the design and conduct of the 1997 effort, Chapter 6 compares computer-assisted personal interviewing (CAPI) and audio computer-assisted self-interviewing (ACASI) with paper-and-pencil interviewing (PAPI) for selected outcomes, Chapter 7 describes the effect of ACASI experimental factors on prevalence and data quality, Chapter 8 details the development and testing of an electronic screener, and Chapter 9 describes the operation of the 1997 field experiment. The next two chapters offer insights into the willingness of NHSDA respondents to be interviewed (Chapter 10) and the effect of NHSDA interviewers on data quality (Chapter 11). Chapter 12 is devoted to further refinement of the computer-assisted interviewing (CAI) procedures during the 1998 laboratory and field testing of a tobacco module.

RESULTS/CONCLUSIONS: The main results from the 1997 field experiment were the following: (1) Use of a single gate question to ask about substance use rather than multiple gate questions yielded higher reporting of substance use. (2) The use of consistency checks within the survey instrument yielded somewhat higher reporting of drug use than when such checks were not used. (3) The use of ACASI yielded higher reports of drug use than PAPI. (4) Respondents were less likely to request help in completing the survey in ACASI than in PAPI, particularly among youths (aged 12–17) and adults with less than a high school education. (5) Respondents using ACASI reported higher comfort levels with the interview than those using PAPI. (6) Respondents with fair or poor reading ability found the recorded voice in ACASI more beneficial than those with excellent reading ability. (7) ACASI respondents reported higher levels of privacy than PAPI respondents.

National Household Survey on Drug Abuse: 1999 nonresponse analysis report

CITATION: Office of Applied Studies. (2001). National Household Survey on Drug Abuse: 1999 nonresponse analysis report. Rockville, MD: Substance Abuse and Mental Health Services Administration.

PURPOSE/OVERVIEW: This report addresses the nonresponse patterns obtained in the 1999 National Household Survey on Drug Abuse (NHSDA). This report was motivated by the relatively low response rates in the 1999 NHSDA and by the apparent general trend of declining response rates in field studies. The analyses presented in this report were produced to help provide an explanation for the rates in the 1999 NHSDA and guidance for the management of future projects.

METHODS: The six chapters of this report provide a background for the issues surrounding nonresponse and an analysis of the 1999 NHSDA nonresponse. The first three chapters provide context with reviews of the nonresponse trends in U.S. field studies, the current academic literature, and the NHSDA data collection patterns from 1994 through 1998. Chapter 4 describes the data collection process in 1999 with a detailed discussion of design changes, summary figures and statistics, and a series of logistic regressions. Chapter 5 compares 1998 with 1999 nonresponse patterns. Chapter 6 applies the analysis in the previous chapters to a discussion of a respondent incentive program for future NHSDA work.

RESULTS/CONCLUSIONS: The results of this study are consistent with the conventional wisdom of the professional survey research field and the findings in survey research literature. The nonresponse can be attributed to a set of interviewer influences, respondent influences, design features, and environmental characteristics. The nonresponse followed the demographic patterns observed in other studies, with urban and high crime areas having the worst rates. Finally, efforts taken to improve the response rates were effective. Unfortunately, the tight labor market combined with the large increase in sample size caused these efforts to lag behind the data collection calendar. The authors used the results to generate several suggestions for the management of future projects. First, efforts should be taken to convert reluctant sample elements to completions. This report contains an outline for an incentive program that addresses this issue. Second, because the characteristics of field staff are among the most important correlates of nonresponse, a detailed analysis should be conducted to evaluate the most effective designs for staffing and retention. Finally, actions should be taken to tailor the survey to regional characteristics, such as environmental and respondent characteristics, which are important predictors of response patterns.

Culture and item nonresponse in health surveys

CITATION: Owens, L., Johnson, T. P., & O’Rourke, D. (2001). Culture and item nonresponse in health surveys. In M. L. Cynamon & R. A. Kulka (Eds.), Seventh Conference on Health Survey Research Methods (HHS Publication No. PHS 01-1013, pp. 69–74). Hyattsville, MD: U.S. Department of Health and Human Services, Public Health Service, Centers for Disease Control and Prevention, National Center for Health Statistics.

PURPOSE/OVERVIEW: Item nonresponse is one of the available indicators that can be used to identify cultural variations in survey question interpretation and response. This paper analyzes the patterns of item nonresponse across cultural groups in the United States using data from four national health surveys. The authors hypothesized that respondents from minority cultural groups would exhibit higher nonresponse to survey questions than non-Hispanic white respondents.

METHODS: For the analysis, the authors used four national health-related surveys: the 1992 Behavioral Risk Factor Surveillance System (BRFSS), the 1992 National Household Survey on Drug Abuse (NHSDA), the 1991 National Health Interview Drug Use Supplement (NHIS), and the 1990–1991 National Comorbidity Survey (NCS). In each dataset, the authors selected several items that reflected different health domains. From these items, the authors created 10 dichotomous variables that measured whether the source items contained missing data. Each measure was examined using simple cross-tabulations and logistic regression models in which the authors controlled for sociodemographic variables associated with item nonresponse (e.g., age, gender, education, and marital status).
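The indicator-building and cross-tabulation steps described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the missing-data codes, group labels, and item name are assumptions made for the example.

```python
from collections import defaultdict

# Codes treated as item nonresponse (assumed for illustration):
# None = not answered, "DK" = don't know, "REF" = refused.
MISSING_CODES = {None, "DK", "REF"}

def nonresponse_indicator(value):
    """Return 1 if the item response is missing, else 0 (a dichotomous measure)."""
    return 1 if value in MISSING_CODES else 0

# Toy respondent records; "group" stands in for a cultural-group variable.
respondents = [
    {"group": "A", "income_item": "REF"},
    {"group": "A", "income_item": 45000},
    {"group": "B", "income_item": None},
    {"group": "B", "income_item": "DK"},
    {"group": "B", "income_item": 30000},
]

# Simple cross-tabulation: item nonresponse rate by group.
totals, missing = defaultdict(int), defaultdict(int)
for r in respondents:
    totals[r["group"]] += 1
    missing[r["group"]] += nonresponse_indicator(r["income_item"])

rates = {g: missing[g] / totals[g] for g in totals}
```

In the actual study these indicators were also entered into logistic regression models with sociodemographic controls; the cross-tabulation here shows only the first, descriptive step.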

RESULTS/CONCLUSIONS: Although item nonresponse rates of health survey questions appeared to be low, the authors found that item nonresponse may vary systematically across cultural groups. The authors also found higher item nonresponse rates among each of the minority racial and ethnic groups as compared with non-Hispanic white respondents. In addition to these general findings, the authors noted group-specific cultural differences that warrant special attention. First, the largest odds ratio associated with respondent culture, which reflected Hispanic refusals to answer questions related to their social relationships, may be a consequence of a cultural difference: an unwillingness to report anything other than positive relations with family and friends. Second, greater reluctance to report substance use by African-American respondents may be explained by this group’s beliefs about selective enforcement of drug laws against minority groups in the United States. Finally, although the generally low prevalence of item nonresponse in the data analyzed may indicate that differential rates of nonresponse are too small in magnitude to seriously bias survey findings, the data do suggest that cultural differences in item nonresponse may be more problematic under certain conditions.

Person-pair sampling weight calibration using the generalized exponential model for the National Household Survey on Drug Abuse

CITATION: Penne, M. A., Chen, P., & Singh, A. C. (2001). Person-pair sampling weight calibration using the generalized exponential model for the National Household Survey on Drug Abuse. In Proceedings of the 2001 Joint Statistical Meetings, American Statistical Association, Social Statistics Section, Atlanta, GA [CD-ROM]. Alexandria, VA: American Statistical Association.

PURPOSE/OVERVIEW: For pair data analysis, sampling weights need to be calibrated such that the problems of extreme weights are addressed. Extreme weights, which give rise to high unequal weighting effect (UWE) and hence instability in estimation, are derived from possible small pairwise selection probabilities and domain-specific multiplicity factors.

METHODS: The generalized exponential model (GEM) for weight calibration, recently developed at RTI by Folsom and Singh (2000), allows for multiple calibration controls, as well as separate bounds on the weight adjustment factors for extreme weights identified before calibration. Thus, controls corresponding to the number of pairs in various demographic groups from the first phase of screened dwelling units, and controls for the number of individuals in the domains of interest from the second phase of surveyed pairs and single individuals, for a key set of domains, can all be incorporated simultaneously in calibration, giving rise to a single final set of calibration weights. Numerical examples of GEM calibration of pair data and the resulting estimates for the 1999 National Household Survey on Drug Abuse (NHSDA) are presented.

RESULTS/CONCLUSIONS: Both model groups of region pair samples of the 1999 NHSDA are used to illustrate results obtained by utilizing the GEM methodology to calibrate a final analytic weight. The authors present summary results before and after each pair adjustment step in the calibration process for the Northeast and South regions and the Midwest and West regions, respectively. Note that the “after” results of each preceding adjustment step are synonymous with the “before” results of the subsequent step. Sample sizes; the UWE; unweighted, weighted, and outwinsor percentages of extreme values; and the quartile distribution of both the weight component itself and the weight product up through that step also are presented.
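The unequal weighting effect (UWE) that GEM calibration aims to control is conventionally measured by Kish's factor, n·Σw² / (Σw)², equivalently 1 + CV² of the weights. The sketch below is illustrative only: the weights and truncation cap are invented, and simple truncation stands in for the bounded adjustment factors used in the actual GEM procedure.

```python
def uwe(weights):
    """Kish's unequal weighting effect: n * sum(w^2) / (sum(w))^2."""
    n = len(weights)
    s1 = sum(weights)
    s2 = sum(w * w for w in weights)
    return n * s2 / (s1 * s1)

def truncate(weights, cap):
    """Cap extreme weights, a crude stand-in for bounded GEM adjustment."""
    return [min(w, cap) for w in weights]

weights = [1.0, 1.2, 0.9, 1.1, 8.0]   # one extreme weight inflates the UWE
before = uwe(weights)                  # well above 1: unstable estimation
after = uwe(truncate(weights, 2.0))    # much closer to the ideal value of 1
```

Equal weights give a UWE of exactly 1; the farther the UWE rises above 1, the larger the variance inflation due to weighting, which is why extreme weights are bounded before calibration.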

How do response problems affect survey measurement of trends in drug use?

CITATION: Pepper, J. V. (2001). How do response problems affect survey measurement of trends in drug use? In S. K. Goldsmith, T. C. Pellmar, A. M. Kleinman, W. E. Bunney, & Commission on Behavioral and Social Sciences and Education (Eds.), Informing America’s policy on illegal drugs: What we don’t know keeps hurting us (pp. 321–348). Washington, DC: National Academy Press.

PURPOSE/OVERVIEW: Two databases are widely used to monitor the prevalence of drug use in the United States. The Monitoring the Future (MTF) study surveys high school students, and the National Household Survey on Drug Abuse (NHSDA) surveys the noninstitutionalized residential population aged 12 or older. Each year, respondents from these surveys are drawn from known populations—students and noninstitutionalized people—according to well-specified probabilistic sampling schemes. Hence, in principle, these data can be used to draw statistical inferences on the fractions of the surveyed populations who use drugs. It is inevitable, however, for questions to be raised about the quality of self-reports of drug use. Two well-known response problems hinder one’s ability to monitor levels and trends: nonresponse, which occurs when some members of the surveyed population do not respond, and inaccurate response, which occurs when some surveyed individuals give incorrect responses to the questions posed. These response problems occur to some degree in almost all surveys. In surveys of illicit activity, however, there is more reason to be concerned that decisions to respond truthfully, if at all, are motivated by respondents’ reluctance to report that they engage in illegal and socially unacceptable behavior. To the extent that nonresponse and inaccurate response are systematic, surveys may yield invalid inferences about illicit drug use in the United States.

METHODS: To illustrate the inferential problems that arise from nonresponse and inaccurate response, the author suggested using the MTF and the NHSDA to draw inferences on the annual prevalence of use for adolescents. Annual prevalence measures indicate use of marijuana, cocaine, inhalants, hallucinogens, heroin, or nonmedical use of psychotherapeutics at least once during the year. Different conclusions about levels and trends might be drawn for other outcome indicators and for other subpopulations.

RESULTS/CONCLUSIONS: The author concluded that the MTF and the NHSDA provide important data for tracking the numbers and characteristics of illegal drug users in the United States. Response problems, however, continued to hinder credible inference. Although nonresponse may have been problematic, the lack of detailed information on the accuracy of response in the two national drug use surveys was especially troubling. Data were not available on the extent of inaccurate reporting or on how inaccurate response changes over time. In the absence of good information on inaccurate reporting over time, inferences on the levels and trends in the fraction of users over time were largely speculative. It might be, as many had suggested, that misreporting rates were stable over time. It also might have been that these rates varied widely from one period to the next. The author indicated that these problems, however, did not imply that the data were uninformative or that the surveys should be discontinued. Rather, researchers using these data must either tolerate a certain degree of ambiguity or must be willing to impose strong assumptions. The author suggested practical solutions to this quandary: If stronger assumptions are not imposed, the way to resolve an indeterminate finding is to collect richer data. Data on the nature of the nonresponse problem (e.g., the prevalence estimate of nonrespondents) and on the nature and extent of inaccurate response in the national surveys might be used to both supplement the existing data and to impose more credible assumptions. Efforts to increase the valid response rate may reduce the potential effects of these problems.
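The ambiguity the author describes can be made concrete with worst-case (Manski-style) bounds on prevalence under nonresponse: without assumptions about nonrespondents, the true prevalence can only be bracketed. The numbers below are hypothetical, not survey estimates.

```python
def prevalence_bounds(observed_prevalence, response_rate):
    """Worst-case bounds on true prevalence given unit nonresponse.

    Lower bound: no nonrespondent is a user.
    Upper bound: every nonrespondent is a user.
    """
    lower = observed_prevalence * response_rate
    upper = lower + (1.0 - response_rate)
    return lower, upper

# With 10% observed prevalence and an 80% response rate, the truth lies
# roughly in [0.08, 0.28]: a 20-point gap from nonresponse alone.
lo, hi = prevalence_bounds(observed_prevalence=0.10, response_rate=0.80)
```

The width of the interval equals the nonresponse rate, which is why the author argues that richer data on nonrespondents, or credible assumptions about them, are needed to tighten inferences.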

Predictive mean neighborhood imputation with application to the person-pair data of the National Household Survey on Drug Abuse

CITATION: Singh, A., Grau, E., & Folsom, R., Jr. (2001). Predictive mean neighborhood imputation with application to the person-pair data of the National Household Survey on Drug Abuse. In Proceedings of the 2001 Joint Statistical Meetings, American Statistical Association, Survey Research Methods Section, Atlanta, GA [CD-ROM]. Alexandria, VA: American Statistical Association.

PURPOSE/OVERVIEW: The authors present a simple method of imputation termed “predictive mean neighborhood,” or PMN. Features of this method include the following: (1) it allows for several covariates; (2) the relative importance of covariates is determined objectively based on their relationship to the response variable; (3) it incorporates design weights; (4) it can be multivariate in that correlations between several imputed variables are preserved, as well as correlations across imputed and observed variables; (5) it accommodates both discrete and continuous variables for multivariate imputation; and finally (6) it should lend itself to a simple variance estimation adjusted for imputation.

METHODS: The PMN method is a combination of prediction modeling with a random nearest neighbor hot deck. It uses a model-based predictive mean to find a small neighborhood of donors from which a single donor is selected at random. Thus, the residual distribution is estimated nonparametrically from the donors, where the imputed value is (approximately) the predictive mean plus a random residual. Applications of PMN for imputing two types of multiplicity factors required for pair data analysis from the 1999 NHSDA are discussed.
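The PMN idea, a model-based predictive mean used to pick a small donor neighborhood, with the donor then drawn at random, can be sketched as follows. This is a simplified illustration, not the RTI implementation: the predicted means are assumed to come from an already-fitted model, and the neighborhood is just the k closest donors.

```python
import random

def pmn_impute(missing_pred, donors, k=2, rng=random):
    """Impute via predictive mean neighborhood (simplified sketch).

    missing_pred: predicted mean for the item nonrespondent.
    donors: list of (predicted_mean, observed_value) pairs for respondents.
    The imputed value is approximately the predictive mean plus a residual
    drawn nonparametrically from a nearby donor.
    """
    # Neighborhood: the k donors whose predicted means are closest.
    neighborhood = sorted(donors, key=lambda d: abs(d[0] - missing_pred))[:k]
    # Random nearest-neighbor hot deck: pick one donor's observed value.
    return rng.choice(neighborhood)[1]

# Toy donors: (predicted mean, observed value); values are invented.
donors = [(10.0, 12.0), (11.0, 9.0), (30.0, 28.0), (31.0, 33.0)]
imputed = pmn_impute(missing_pred=10.5, donors=donors, k=2)
# imputed is the observed value of one of the two nearest donors
```

Because the imputed value is an actual observed value from a donor with a similar predicted mean, the method preserves realistic residual variation rather than collapsing imputations onto the model's predicted means.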

RESULTS/CONCLUSIONS: The PMN methodology has been widely used for the imputation of a variety of variables in the NHSDA, including both continuous and categorical variables with one or more levels. The models were fit using standard modeling procedures in SAS® and SUDAAN®, while SAS macros were used to implement the hot-deck step, including the restrictions on the neighborhoods. Although creating a different neighborhood for each item nonrespondent was computationally intensive, the method was implemented successfully. At the time this paper was presented, the imputations team at RTI was implementing a series of simulations to evaluate the new method, comparing it against the unweighted sequential hot deck used earlier and a simpler model-based method.

Examining substance abuse data collection methodologies

CITATION: Sudman, S. (2001). Examining substance abuse data collection methodologies. Journal of Drug Issues, 31(3), 695–716.

PURPOSE/OVERVIEW: The U.S. Department of Health and Human Services (HHS) sponsors three national surveys on drug use in adolescents. These three studies are the National Household Survey on Drug Abuse (NHSDA), the Youth Risk Behavior Surveillance System (YRBSS), and the Monitoring the Future (MTF) study. The estimates of drug use reported by the three surveys differ considerably. This paper examines methodological reasons for the differences reported in these studies.

METHODS: The author does not conduct any new experiments or analyses on the survey data, but examines previous validation and methodological research. The major differences in methodology identified, which might contribute to differences in estimates, are mode of administration (home vs. school setting), questionnaire context and wording, sample design, and weighting.

RESULTS/CONCLUSIONS: Of all the methodological differences, it appears that the context and introduction to the questionnaires, together with the mode of administration, have the largest impact because they affect respondents’ perceived anonymity. It is likely that NHSDA respondents do not perceive the study to be as anonymous as respondents in the MTF and YRBSS do. More research should be conducted to ascertain how anonymous respondents perceived each study to be.

Copyright Notice

All material appearing in this report is in the public domain and may be reproduced or copied without permission from SAMHSA. Citation of the source is appreciated. However, this publication may not be reproduced or distributed for a fee without the specific, written authorization of the Office of Communications, SAMHSA, HHS.

Bookshelf ID: NBK519723
