All material appearing in this report is in the public domain and may be reproduced or copied without permission from SAMHSA. Citation of the source is appreciated. However, this publication may not be reproduced or distributed for a fee without the specific, written authorization of the Office of Communications, SAMHSA, HHS.
NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.
Substance Abuse and Mental Health Services Administration. National Survey on Drug Use and Health: Summary of Methodological Studies, 1971–2014 [Internet]. Rockville (MD): Substance Abuse and Mental Health Services Administration (US); 2014 Nov.
A test of the item count methodology for estimating cocaine use prevalence
CITATION: Biemer, P. P., Jordan, B. K., Hubbard, M., & Wright, D. (2005). A test of the item count methodology for estimating cocaine use prevalence. In J. Kennet & J. Gfroerer (Eds.), Evaluating and improving methods used in the National Survey on Drug Use and Health (HHS Publication No. SMA 05-4044, Methodology Series M-5, pp. 149–174). Rockville, MD: Substance Abuse and Mental Health Services Administration, Office of Applied Studies.
PURPOSE/OVERVIEW: The Substance Abuse and Mental Health Services Administration (SAMHSA) has long sought ways to improve the accuracy of the prevalence estimates provided by the National Survey on Drug Use and Health (NSDUH). One method of data collection that shows some promise for improving reporting accuracy is the “item count method.” This technique provides respondents with an enhanced perception of anonymity when reporting a sensitive behavior, such as drug use. Past experience with the item count (IC) methodology (e.g., Droitcour et al., 1991) has identified two major problems with this method: task difficulty and the selection of the innocuous items for the list.
METHODS: To test the efficacy of the IC methodology for estimating drug use prevalence, the method was implemented in the 2001 survey. This chapter describes the research conducted in 2000 and 2001 to (1) develop an IC module for past year cocaine use, (2) evaluate and refine the module using cognitive laboratory methods, (3) develop the prevalence estimators of past year cocaine use for the survey design, and (4) make final recommendations on the viability of using the IC method to estimate drug use prevalence. As part of item (4), IC estimates of past year cocaine use based on this implementation are presented, and the validity of the estimates is discussed.
RESULTS/CONCLUSIONS: Considerable effort was directed toward the development, implementation, and analysis of an IC methodology for the estimation of cocaine use prevalence in NSDUH. Several adaptations of existing methods were implemented, offering hope that the refined method would succeed in improving the accuracy of prevalence estimation. Despite these efforts, the IC methodology failed to produce estimates of cocaine use that were even at the level of those obtained by simply asking respondents directly about their cocaine use. Because the direct questioning method is believed to produce underestimates of cocaine use, these findings suggest that the IC methodology is even more biased than self-reports.
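The basic IC estimator underlying this line of work is simple: one random half-sample receives a list of innocuous items, the other receives the same list plus the sensitive item, and each respondent reports only how many items apply; the difference in mean counts estimates prevalence. The simulation below is a minimal sketch with invented parameters, not the chapter's actual instrument or estimator.

```python
import random

random.seed(1)

# Group A sees 4 innocuous items; group B sees the same 4 plus the
# sensitive item. Respondents report only HOW MANY items apply to them,
# so no individual answer reveals the sensitive behavior.
true_prev = 0.05   # assumed (hypothetical) prevalence of the sensitive behavior
n = 10000          # respondents per group

def innocuous_count():
    # each of 4 innocuous items applies with probability 0.5 (illustrative)
    return sum(random.random() < 0.5 for _ in range(4))

counts_a = [innocuous_count() for _ in range(n)]
counts_b = [innocuous_count() + (random.random() < true_prev) for _ in range(n)]

# IC prevalence estimate: difference in mean counts between the two groups
ic_estimate = sum(counts_b) / n - sum(counts_a) / n
print(round(ic_estimate, 3))
```

With honest reporting the difference in means recovers the true prevalence in expectation; the chapter's finding is that, in practice, measurement error in the counts kept the estimate below even direct self-reports.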
Model-based estimation of drug use prevalence using item count data
CITATION: Biemer, P., & Brown, G. (2005). Model-based estimation of drug use prevalence using item count data. Journal of Official Statistics, 21(2), 287–308.
PURPOSE/OVERVIEW: The item count (IC) method for estimating the prevalence of sensitive behaviors was applied to the National Survey on Drug Use and Health (NSDUH) to estimate the prevalence of past year cocaine use. In spite of the considerable effort and research to refine and adapt the IC method to this survey, the method failed to produce estimates that were larger than the estimates based on self-reports. There is evidence to indicate that this problem could be due to measurement error in the IC responses.
METHODS: A new model-based estimator was proposed to correct the IC estimates for measurement error, and it was found that the new estimator produced less biased prevalence estimates. The model combined the IC data, replicated measurements of the IC items, and used responses to the cocaine use question in order to obtain estimates of the classification error in the observed data. The data were treated as fallible indicators of latent true values. To obtain an identifiable model, traditional latent class analysis assumptions were made.
RESULTS/CONCLUSIONS: The resulting estimates of cocaine use prevalence were approximately 43 percent larger than estimates based only on self-report. The estimated underreporting rates were also consistent with those estimated from other studies of drug use underreporting.
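The latent class model used in the paper is considerably richer than can be shown here, but the core idea — recovering a latent "true" proportion from an observed proportion given classification error rates — has a classical closed form. The sketch below uses invented sensitivity and specificity values purely for illustration; they are not the paper's estimated error rates.

```python
def corrected_prevalence(p_obs, sensitivity, specificity):
    """Classical misclassification correction: recover the latent 'true'
    proportion from an observed proportion, given the assumed probability
    that a true positive reports use (sensitivity) and that a true
    negative reports non-use (specificity)."""
    return (p_obs - (1.0 - specificity)) / (sensitivity + specificity - 1.0)

# Hypothetical numbers: 2.0% observed past year use, assuming 70% of true
# users report use and 99.9% of non-users correctly report non-use.
corrected = corrected_prevalence(0.020, 0.70, 0.999)
print(round(corrected, 4))  # larger than the observed 2.0%
```

The direction of the correction matches the paper's result: once underreporting is modeled, the prevalence estimate rises above the raw self-report figure.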
Evaluation of follow-up probes to reduce item nonresponse in NSDUH
CITATION: Caspar, R. A., Penne, M. A., & Dean, E. (2005). Evaluation of follow-up probes to reduce item nonresponse in NSDUH. In J. Kennet & J. Gfroerer (Eds.), Evaluating and improving methods used in the National Survey on Drug Use and Health (HHS Publication No. SMA 05-4044, Methodology Series M-5, pp. 121–148). Rockville, MD: Substance Abuse and Mental Health Services Administration, Office of Applied Studies.
PURPOSE/OVERVIEW: Technological advances in survey research since 1990 offer the ability to improve the quality of face-to-face survey data in part by reducing item nonresponse. Assuming the instrument has been programmed correctly, computer-assisted interviewing (CAI) can eliminate missing data caused by confusing skip instructions, hard-to-locate answer spaces, and simple inattention to the task at hand. Yet in and of itself, CAI cannot reduce the item nonresponse created when a respondent chooses to give a “Don’t know” or “Refused” response to a survey question. Previous research (see, e.g., Turner, Lessler, & Gfroerer, 1992) has shown that these types of responses are more common in self-administered questionnaires than in those administered by an interviewer. Most likely, this occurs because in a self-administered interview the interviewer does not see the respondent’s answer and thus cannot probe or follow up on these types of incomplete responses. This chapter introduces a methodology designed to reduce item nonresponse to critical items in the audio computer-assisted self-interviewing (ACASI) portion of the questionnaire used in the National Survey on Drug Use and Health (NSDUH). Respondents providing “Don’t know” or “Refused” responses to items designated as essential to the study’s objectives received tailored follow-up questions designed to simulate interviewer probes.
METHODS: The results are based on an analysis of unweighted data collected for the 2000 survey (n = 71,764). In total, 2,122 respondents (3.0 percent) triggered at least one of the 38 item nonresponse follow-up questions in 2000. Demographic characteristics are provided for those respondents who triggered at least one follow-up item. To determine what other respondent characteristics tend to be associated with triggering follow-up questions, several multivariate models were developed. Logistic regression was used to determine the likelihood of triggering a follow-up (e.g., answering “Don’t know” or “Refused” to a critical lifetime, recency, or frequency drug question). The lifetime follow-up model was run on all cases, while the recency and frequency follow-up models were run only on those subsets of respondents who reported lifetime use (in the case of the recency follow-up item) or who reported use in the past 30 days (in the case of the frequency follow-up). The predictor variables included in these models are described.
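For the special case of a single binary predictor, the logistic regression slope reduces to a difference in log-odds between the two groups, which makes it easy to illustrate the direction of an effect such as "younger respondents trigger follow-ups more often." The trigger rates and group sizes below are invented for the sketch, not figures from the 2000 survey.

```python
import math
import random

random.seed(0)

# Simulated respondents: 1 = triggered a follow-up ("Don't know" or
# "Refused" on a critical item). Rates are hypothetical.
young = [1 if random.random() < 0.12 else 0 for _ in range(800)]   # e.g., aged 12-17
older = [1 if random.random() < 0.05 else 0 for _ in range(1200)]

def log_odds(ys):
    p = sum(ys) / len(ys)
    return math.log(p / (1 - p))

# With one binary covariate, the logistic regression MLE slope is exactly
# the log odds ratio between the two groups.
beta_young = log_odds(young) - log_odds(older)
print(beta_young)  # positive: younger respondents more likely to trigger
```

The chapter's actual models include many covariates, so the coefficients there are fitted jointly rather than read off a two-by-two table, but the interpretation of each slope as an adjusted log odds ratio is the same.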
RESULTS/CONCLUSIONS: Perhaps the most significant finding from these analyses is that item nonresponse to these critical items was quite low. For the most part, respondents were willing to answer these questions and did not require additional prompting to do so. As a result of the low item nonresponse rates, the data presented must be interpreted with care. The results of these analyses suggest that younger respondents were more likely to trigger the follow-ups and to provide substantive responses to the follow-ups. In addition, the follow-up methodology was more successful in converting respondents who triggered the follow-up through a “Don’t know” response than through a “Refused” response. The methodology also was more successful when combined with a revised question that reduced respondent recall burden, as was done with the 30-day frequency follow-up for “Don’t know.” The largest percentage of follow-up responders provided a substantive response to the 30-day frequency follow-up when the question was simplified by providing response categories in place of the open-ended response field. There also was some evidence to suggest that drug use may be more prevalent among the follow-up responders although small sample sizes precluded a thorough examination of this result.
Taken together, these results suggest the follow-up methodology is a useful strategy for reducing item nonresponse, particularly when the nonresponse is due to “Don’t know” responses. Additional thought should be given to whether improvements can be made to the refusal follow-ups to increase the number of respondents who convert to a substantive response. Focus groups could be useful in identifying other reasons (beyond the fear of disclosure and questions about the importance of the data) that could cause respondents to refuse these critical items. The results of such focus groups could be used to develop more appropriately worded follow-ups that might be more persuasive in convincing respondents that they should provide substantive responses.
Association between interviewer experience and substance use prevalence rates in NSDUH
CITATION: Chromy, J. R., Eyerman, J., Odom, D., McNeeley, M. E., & Hughes, A. (2005). Association between interviewer experience and substance use prevalence rates in NSDUH. In J. Kennet & J. Gfroerer (Eds.), Evaluating and improving methods used in the National Survey on Drug Use and Health (HHS Publication No. SMA 05-4044, Methodology Series M-5, pp. 59–88). Rockville, MD: Substance Abuse and Mental Health Services Administration, Office of Applied Studies.
PURPOSE/OVERVIEW: Analysis of survey data from the National Survey on Drug Use and Health (NSDUH) has shown a relationship between interviewer experience, response rates, and the prevalence of self-reported substance use (Eyerman, Odom, Wu, & Butler, 2002; Hughes, Chromy, Giacoletti, & Odom, 2001, 2002). These analyses have shown a significant and positive relationship between the amount of prior experience an interviewer has with collecting NSDUH data and the response rates that the interviewer produces with his or her workload. The analyses also have shown a significant and negative relationship between the amount of prior experience of an interviewer and the prevalence of substance use reported in cases completed by that interviewer. This chapter describes the methodology employed to explain these effects within a unified theoretical framework.
METHODS: The prior analyses mentioned above examined interviewer response rates and prevalence independently. This has made it difficult to determine whether the lower prevalence estimates for experienced interviewers are a result of the change in the sample composition due to higher response rates or whether the lower prevalence estimates are a result of a direct effect of interviewer behavior on respondent self-reporting. This study combines these two explanations to produce a conceptual model that summarizes the expectations for the relationship between interviewer experience and prevalence estimates. The combined explanation from the conceptual model is evaluated in a series of conditional models to examine the indirect effect of response rates and the direct effect of interviewer experience on prevalence estimates.
RESULTS/CONCLUSIONS: The analysis shows that increased interviewer experience simultaneously increases response rates and decreases prevalence estimates. In addition, the effect of increased interviewer experience on prevalence cannot be fully explained by weight adjustments based on earlier models (i.e., screening and interview level). In other words, the interviewer effect on prevalence cannot be fully attributed to the increase in response rates by experienced interviewers. Furthermore, interviewer experience was significant in the final model, showing that the covariates also did not account for all the decrease in prevalence. A statistical analysis of marginal and incremental prevalence estimates based on three levels of interviewer experience showed that plausible explanations for the decrease in prevalence for experienced interviewers include (1) lower substance use reporting by the additional respondents and (2) lower reporting of substance use by respondents interviewed by interviewers of all experience levels.
Comparing NSDUH income data with income data in other datasets
CITATION: Cowell, A. J., & Mamo, D. (2005). Comparing NSDUH income data with income data in other datasets. In J. Kennet & J. Gfroerer (Eds.), Evaluating and improving methods used in the National Survey on Drug Use and Health (HHS Publication No. SMA 05-4044, Methodology Series M-5, pp. 175–188). Rockville, MD: Substance Abuse and Mental Health Services Administration, Office of Applied Studies.
PURPOSE/OVERVIEW: Personal income and family or household income are among the many demographic measures obtained in the National Survey on Drug Use and Health (NSDUH). Because income is a correlate of substance use and other behaviors, it is important to evaluate the accuracy of the income measure in NSDUH. One metric of accuracy is to compare the estimated distribution of income based on NSDUH with the distributions from other data sources that are used frequently. This chapter compares the distribution of 1999 personal income data from the 2000 NSDUH with the distributions in the same year from the Current Population Survey (CPS) and the Statistics of Income (SOI) data.
METHODS: The income measure used from the two surveys (NSDUH and the CPS) is personal income, rather than family or household income. Although the CPS explicitly gathers information about more sources of personal income than NSDUH, the income sources in the CPS were combined to map exactly to NSDUH. The income measure from the SOI is adjusted gross income (AGI), reported by various combinations of filing status. Two sets of comparisons were made that differed by marital status in the surveys and filing status in the SOI. First, the income distribution reported in the surveys, regardless of marital status, was compared with that in the SOI, regardless of filing status. For those tax-filing units whose status was “married filing jointly,” the reported AGI was for two people, whereas the survey data used were for individual income only. Consequently, these comparisons were unlikely to provide a close fit of the income distributions, particularly in higher income intervals. Second, the income distribution in the surveys for those who were unmarried (excluding widow[er]s) was compared with the income distribution in the SOI for those whose filing status was single. Because only unmarried people can file as “single,” the restrictions in this second set of comparisons should have ensured a relatively close fit between the survey data and the SOI. The data did not allow a reasonable comparison to be made between the income distribution in NSDUH and the distribution for other filing statuses in the SOI, such as “married, filing jointly.” Because pair-level weights were not available for NSDUH at the time of this analysis, a reporting unit in NSDUH could not be created so that it could compare reasonably with “married filing jointly” in the SOI. NSDUH weights are calibrated to represent individuals. For NSDUH data to represent a pair of married people, rather than individuals, a different set of weights—pair-level weights—is required.
RESULTS/CONCLUSIONS: Despite some fundamental differences between the SOI and either of the survey datasets (CPS and NSDUH), there were strong similarities between the three income distributions. In both sets of comparisons, the frequencies reported in NSDUH and the CPS were typically within 2 percentage points of each other across all income intervals. With the exception of the lowest interval, in the second set of comparisons (single people and single filers), the frequencies of the three datasets were within 2.5 percentage points of one another across all income intervals.
Incidence and impact of controlled access situations on nonresponse
CITATION: Cunningham, D., Flicker, L., Murphy, J., Aldworth, J., Myers, S., & Kennet, J. (2005). Incidence and impact of controlled access situations on nonresponse. In JSM Proceedings, 63rd Annual Conference of the American Association for Public Opinion Research, American Statistical Association (pp. 3841–3843). Alexandria, VA: American Statistical Association.
PURPOSE/OVERVIEW: Failure to collect data from dwelling units with controlled access (i.e., situations where an obstacle keeps an interviewer from reaching respondents) may introduce bias through systematic underrepresentation of certain subgroups. For example, high-income or urban households are more frequently found in controlled access situations than other subgroups in the United States. This paper summarizes the incidence of controlled access by dwelling unit type and State for all 169,535 of the 2004 National Survey on Drug Use and Health (NSDUH) sample dwelling units and introduces a model that predicts the effects of controlled access barriers on unit and item nonresponse.
METHODS: The authors cross-tabulated controlled access and housing characteristics data to describe the 2004 NSDUH sample. Then they developed regression models to predict unit and item nonresponse, with the expectation that controlled access and housing units other than single family homes contribute to nonresponse.
RESULTS/CONCLUSIONS: The authors found that the rate of controlled access was comparable with that reported in previous studies. As predicted, housing units with some form of controlled access were less likely to be successfully screened or interviewed. In addition to discussing these findings, the authors presented ideas for further investigation of the role that controlled access barriers may have on nonresponse error and data quality.
The differential impact of incentives on cooperation and data collection costs: Results from the 2001 National Household Survey on Drug Abuse incentive experiment
CITATION: Eyerman, J., Bowman, K., Butler, D., & Wright, D. (2005). The differential impact of incentives on cooperation and data collection costs: Results from the 2001 National Household Survey on Drug Abuse incentive experiment. Journal of Economic and Social Measurement, 30(2–3), 157–169.
PURPOSE/OVERVIEW: Research has shown that cash incentives can increase cooperation rates and response rates in surveys. The purpose of this paper is to determine whether the impact of incentives on cooperation is consistent across subgroups.
METHODS: An experiment on differing levels of incentives was conducted during the 2001 National Household Survey on Drug Abuse (NHSDA). Respondents were randomly assigned to receive either a $40 incentive, a $20 incentive, or no incentive. The results of the experiment were assessed by examining descriptive tables and analyzing logistic regression models.
RESULTS/CONCLUSIONS: Overall, respondents in the incentive groups had higher cooperation rates, and data collection costs were lower. The increased response rate did not significantly change the population estimates for drug abuse. The results of the logit model revealed different levels of cooperation for different demographic subgroups. However, the incentives neither enhanced nor reduced the difference in levels of cooperation across subgroups. The results indicate that it was beneficial for the survey to use incentives to encourage cooperation.
Processing of race and ethnicity in the 2004 National Survey on Drug Use and Health
CITATION: Grau, E. A., Martin, P., Frechtel, P., Snodgrass, J., & Caspar, R. (2005). Processing of race and ethnicity in the 2004 National Survey on Drug Use and Health. In Proceedings of the 2005 Joint Statistical Meetings, American Statistical Association, Section on Survey Research Methods, Minneapolis, MN (pp. 3076–3083). Alexandria, VA: American Statistical Association.
PURPOSE/OVERVIEW: Since the inception of NSDUH, the race and ethnicity of each respondent have been collected. They are used as part of the demographic breakdowns in the analyses and the various reports generated from the survey. From 1971 to 1998, the race and ethnicity questions underwent few changes. However, along with the switch from paper-and-pencil interviewing (PAPI) methods of questionnaire administration to computer-assisted interviewing (CAI) methods in 1999, the race and ethnicity categories were updated pursuant to new Office of Management and Budget (OMB) directives. This paper details how race and ethnicity data have been recorded in NSDUH since the switch to CAI in 1999 and summarizes how these data were processed.
METHODS: N/A.
RESULTS/CONCLUSIONS: N/A.
Development of a Spanish questionnaire for NSDUH
CITATION: Hinsdale, M., Díaz, A., Salinas, C., Snodgrass, J., & Kennet, J. (2005). Development of a Spanish questionnaire for NSDUH. In J. Kennet & J. Gfroerer (Eds.), Evaluating and improving methods used in the National Survey on Drug Use and Health (HHS Publication No. SMA 05-4044, Methodology Series M-5, pp. 89–104). Rockville, MD: Substance Abuse and Mental Health Services Administration, Office of Applied Studies.
PURPOSE/OVERVIEW: Translation of survey questionnaires is becoming standard practice for large-scale data collection efforts as growing numbers of immigrants arrive in the United States. However, the methods used to produce survey translations have not been standardized—even for Spanish, the most common target language (Shin & Bruno, 2003). The translation review of the National Survey on Drug Use and Health (NSDUH) was carried out for a variety of reasons. For many years, the survey has provided a Spanish-language version of the instrument for respondents who requested it. Each year, as new questions were added to the survey, translations were carried out on an ad hoc basis using a variety of translators. In the 1999 survey redesign, a large number of questions were added, and a large number of existing questions were altered to accommodate the audio computer-assisted self-interviewing (ACASI) format. It became apparent through feedback from the field that some of the Spanish questions seemed awkward; consequently, survey staff decided that a comprehensive review would be appropriate. It was determined that a multicultural review of the 2000 survey’s Spanish instrument would be the most effective procedure.
METHODS: This chapter describes the techniques and principles that were applied in a multicultural review of the translation of the NSDUH questionnaire. Common problems that arose in the translation process and “best practices” for their resolution also are discussed. In addition, because increasing numbers of surveys are employing computer-assisted interviewing (CAI), this chapter illustrates some of the ways in which this technology ideally can be put to use in conveying translated materials. Using three translators coming from varied backgrounds in Central and South America, and focus groups of potential respondents representing the major Spanish-speaking groups in the United States, a translation service that specialized in this type of work carried out a review of the entire questionnaire. The specifics of the focus group and multicultural review processes that took place are described in this chapter within the context of a discussion of best practices for the development of Spanish survey translations.
RESULTS/CONCLUSIONS: Several critical steps in the development of accurate Spanish survey translations were identified by the authors. A seemingly obvious first step involves staffing the project with qualified personnel—from translators to interviewers. To optimize respondent comprehension, translations should be developed using a multicultural approach and should be tested and reviewed thoroughly by a diverse group of bilingual individuals (Schoua-Glusberg, 2000). Understanding and applying the concept of cognitive equivalence versus direct translation are key in the development of an effective survey translation. Just as in questionnaire development of English-language surveys, cognitive testing should be employed to identify and correct potential flaws in wording. For studies such as NSDUH that use ACASI, a professional Spanish-speaking voice and skilled audio technicians are needed to ensure the high quality of the audio recording, which maximizes respondents’ comprehension. Bilingual interviewers should be fluent and literate in both Spanish and English, and these skills must be demonstrated using a standardized certification procedure. Finally, allowing sufficient time to implement the Spanish translation and train the interviewing staff is perhaps the most problematic step of all because data collection schedules are typically rigorous and researchers are often challenged to maintain the critical timeline even without translations.
Results of the variance component analysis of sample allocation by age in the National Survey on Drug Use and Health
CITATION: Hunter, S. R., Bowman, K. R., & Chromy, J. R. (2005). Results of the variance component analysis of sample allocation by age in the National Survey on Drug Use and Health. In Proceedings of the 2005 Joint Statistical Meetings, American Statistical Association, Section on Survey Research Methods, Minneapolis, MN (pp. 3132–3136). Alexandria, VA: American Statistical Association.
PURPOSE/OVERVIEW: Since 1999, person sampling rates for the National Survey on Drug Use and Health (NSDUH) have been set by State for five age groups: 12 to 17, 18 to 25, 26 to 34, 35 to 49, and 50 or older. The sample design requires equal sample sizes of 22,500 individuals for each of three age groups: 12 to 17, 18 to 25, and 26 or older. The sample allocation of 22,500 adults to the three 26 or older age groups was set in 1999, then adjusted for the 2001 sample. Using parametric variance modeling, the sampling statistician can represent the variance of key estimates as a function of sample design parameters. This paper examines some alternative sample allocations to age groups based on an update of variance model parameters for nine key NSDUH estimates.
METHODS: Data from the 2003 NSDUH were used to estimate the parameters needed for these models. Because many aspects of the sample design were fixed by the 5-year coordinated design (e.g., number of sampling units and total sample size by State), the models focused on two goals: (1) to predict the expected variance for the 2005 study for selected measures, and (2) to review the allocation of the 26 or older sample among those aged 26 to 34, 35 to 49, and 50 or older.
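A parametric variance model of the kind the paper describes expresses the variance of a prevalence estimate as a function of design parameters, for example through a design effect for clustering. The sketch below uses the standard form deff = 1 + (m − 1)ρ with invented values for prevalence, cluster size, and intraclass correlation; these are illustrative, not NSDUH model parameters.

```python
def prevalence_variance(p, n, m_bar, rho):
    """Variance of an estimated proportion p from a clustered sample of
    size n, mean cluster size m_bar, and intraclass correlation rho,
    using the standard design-effect approximation."""
    deff = 1 + (m_bar - 1) * rho   # design effect for clustering
    return deff * p * (1 - p) / n

# Compare two hypothetical allocations of sample to an older age group,
# assuming 2% prevalence of a key measure, clusters of 4, rho = 0.03.
v_small = prevalence_variance(0.02, 5000, 4.0, 0.03)
v_large = prevalence_variance(0.02, 9000, 4.0, 0.03)
print(v_small > v_large)  # allocating more sample to the group shrinks variance
```

Evaluating such a function across candidate allocations, subject to the fixed total sample size, is one way a sampling statistician can compare designs before fielding the survey.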
RESULTS/CONCLUSIONS: The results showed a larger percentage of the population in the older age groups and increasing use and dependence in these age groups.
Forward telescoping bias in reported age of onset: An example from cigarette smoking
CITATION: Johnson, E. O., & Schultz, L. (2005). Forward telescoping bias in reported age of onset: An example from cigarette smoking. International Journal of Methods in Psychiatric Research, 14(3), 119–129. [PMC free article: PMC6878269] [PubMed: 16389888]
PURPOSE/OVERVIEW: Age at the onset of a disorder is a critical characteristic that may predict the increased risk of a severe course and genetic liability. However, retrospectively reported onset in surveys is subject to measurement error. This article investigates forward telescoping, a bias in which respondents report events as occurring closer to the time of interview than is true. Past research suggests that forward telescoping influences reported age at onset of first substance use, but it does not answer other questions, such as “Is there a difference in the influence of forward telescoping on age of initiation between experimental users (those who have ever used drugs but do not use them on a regular basis) and those who use drugs regularly?” or “Does forward telescoping affect reported age at onset for more advanced stages of substance use?” Thus, the purpose of this paper is to examine the effect of this bias on age of onset for smoking initiation and daily smoking.
METHODS: To estimate the effect of age at interview independent of birth year cohort based on multiple cross-sectional surveys of the same population, the authors selected respondents born between 1966 and 1977 (n = 82,122) from the 1997–2000 National Household Surveys on Drug Abuse (NHSDA). Logistic regression was used to estimate the magnitude of forward telescoping in reported age when the first cigarette was smoked, to test whether forward telescoping was greater for experimental smokers than for regular smokers, and to assess whether the magnitude of forward telescoping in reported age of first daily smoking was lower than that of initiation.
RESULTS/CONCLUSIONS: The authors found an association between age at onset and age at interview, within birth year, for experimenters and for daily smokers. In addition, as age at interview increased from 12 to 25, the authors found that the probability of reporting early onset decreased by half. Contrary to the hypothesis, forward telescoping of age at initiation appeared to affect equally both experimental smokers and daily smokers. However, it also was found that the degree of forward telescoping of age at initiation of smoking differed significantly by gender and that significant forward telescoping of age at onset of daily smoking occurred differentially by race/ethnicity. Overall, the results of the analysis suggest that forward telescoping is a nonignorable bias that can possibly be mitigated through attention to components of the survey design process, such as question and sample design.
Evaluating and improving methods used in the National Survey on Drug Use and Health
CITATION: Kennet, J., & Gfroerer, J. (Eds.). (2005). Evaluating and improving methods used in the National Survey on Drug Use and Health (HHS Publication No. SMA 05-4044, Methodology Series M-5). Rockville, MD: Substance Abuse and Mental Health Services Administration, Office of Applied Studies.
PURPOSE/OVERVIEW: The National Survey on Drug Use and Health (NSDUH) is the leading source of information on the prevalence and incidence of the use of alcohol, tobacco, and illicit drugs in the United States. It is currently administered to approximately 67,500 individuals annually, selected from the civilian, noninstitutionalized population of the United States, including Alaska and Hawaii. A survey of this size and importance is compelled to utilize the latest and best methodology over all facets of its operations. Given the sample size and the careful sampling procedures employed, NSDUH provides fertile soil for the testing and evaluation of new methodologies. Evaluation of NSDUH methodologies has been and continues to be an integral component of the project. This includes not only reviewing survey research literature and consulting with leading experts in the field, but also conducting specific methodological studies tailored to the particular issues and problems faced by this survey (Gfroerer, Eyerman, & Chromy, 2002; Turner, Lessler, & Gfroerer, 1992).
METHODS: This volume provides an assortment of chapters covering some of the recent methodological research and development in NSDUH, changes in data collection methods and instrument design, as well as advances in analytic techniques. As such, it is intended for readers interested in particular aspects of survey methodology and is a must-read for those with interests in analyzing data collected in recent years by NSDUH.
RESULTS/CONCLUSIONS: This volume contains a collection of some recent methodological research carried out under the auspices of the NSDUH project. Publishing these studies periodically provides a resource for survey researchers wishing to catch up on the latest developments from this unique survey. Readers from a variety of backgrounds and perspectives will find these chapters interesting and informative and, it is hoped, useful in their own careers in survey methodological research, drug abuse prevention and treatment, and other areas.
Introduction. In J. Kennet & J. Gfroerer (Eds.), Evaluating and improving methods used in the National Survey on Drug Use and Health
CITATION: Kennet, J., & Gfroerer, J. (2005). Introduction. In J. Kennet & J. Gfroerer (Eds.), Evaluating and improving methods used in the National Survey on Drug Use and Health (HHS Publication No. SMA 05-4044, Methodology Series M-5, pp. 1–6). Rockville, MD: Substance Abuse and Mental Health Services Administration, Office of Applied Studies.
PURPOSE/OVERVIEW: The National Survey on Drug Use and Health (NSDUH) is the leading source of information on the prevalence and incidence of the use of alcohol, tobacco, and illicit drugs in the United States. It is currently administered to approximately 67,500 individuals annually, selected from the civilian, noninstitutionalized population of the United States, including Alaska and Hawaii. A survey of this size and importance is compelled to utilize the latest and best methodology over all facets of its operations. Given the sample size and the careful sampling procedures employed, NSDUH provides fertile soil for the testing and evaluation of new methodologies. Evaluation of NSDUH methodologies has been and continues to be an integral component of the project. This includes not only reviewing survey research literature and consulting with leading experts in the field, but also conducting specific methodological studies tailored to the particular issues and problems faced by this survey (Gfroerer, Eyerman, & Chromy, 2002; Turner, Lessler, & Gfroerer, 1992). This volume provides an assortment of chapters covering some of the recent methodological research and development in NSDUH, changes in data collection methods and instrument design, as well as advances in analytic techniques.
METHODS: This introduction begins with a brief history and description of NSDUH. A more detailed account can be found in Gfroerer et al. (2002). Prior methodological research on NSDUH then is described, followed by an account of the major methodological developments that were implemented in 2002. Finally, each of the chapters and their authors are introduced.
RESULTS/CONCLUSIONS: This volume contains a collection of some recent methodological research carried out under the auspices of the NSDUH project. Publishing these studies periodically provides a resource for survey researchers wishing to catch up on the latest developments from this unique survey. Readers from a variety of backgrounds and perspectives will find these chapters interesting and informative and, it is hoped, useful in their own careers in survey methodological research, drug abuse prevention and treatment, and other areas.
Introduction of an incentive and its effects on response rates and costs in NSDUH
CITATION: Kennet, J., Gfroerer, J., Bowman, K. R., Martin, P. C., & Cunningham, D. (2005). Introduction of an incentive and its effects on response rates and costs in NSDUH. In J. Kennet & J. Gfroerer (Eds.), Evaluating and improving methods used in the National Survey on Drug Use and Health (HHS Publication No. SMA 05-4044, Methodology Series M-5, pp. 7–18). Rockville, MD: Substance Abuse and Mental Health Services Administration, Office of Applied Studies.
PURPOSE/OVERVIEW: In 2002, the National Survey on Drug Use and Health (NSDUH) began offering respondents a $30 cash incentive for completing the questionnaire. This development occurred within the context of a name change and other methodological improvements to the survey and resulted in significantly higher response rates. Moreover, the increased response rates were achieved in conjunction with a net decrease in costs incurred per completed interview. This chapter presents an analysis of response rate patterns by geographic and demographic characteristics, as well as interviewer characteristics, before and after the introduction of the incentive. Potential implications for other large-scale surveys also are discussed.
METHODS: To demonstrate the effects of the incentive and other changes, a comparison is presented of the response rates, by quarters, in the years before and after the incentive was introduced. These rates then are broken down by geographic area, population density, respondent age, gender, and race/ethnicity. Response rates also are examined with respect to the characteristics of interviewers, including their prior experience on the survey, race/ethnicity, and gender.
RESULTS/CONCLUSIONS: The $30 incentive, with possible help from the other changes that were introduced in January 2002, produced dramatic improvement in the number of eligible respondents who agreed to complete the NSDUH interview. Moreover, the increase in respondent cooperation was accompanied by a decrease in cost per completed interview. Clearly, the adoption of the incentive was beneficial to all involved, with the possible exception of the field interviewers, who required fewer hours to complete their assignments on the project and consequently received less pay.
From these analyses, it appears that incentives had their most pronounced effect among people between the ages of 12 and 25. Because these are known to be the years in which substance use and abuse are most prevalent and have their greatest incidence, it seems the incentive was well spent in terms of capturing the population of most interest. However, serious thought needs to be given to methods for attracting those older than 25. It could be the case that $30 was simply insufficient to attract people who have settled into careers and/or other more rewarding activities, such as child rearing or retirement, or it could be that these people did not participate for other reasons. These reasons will have to be investigated and addressed in order for NSDUH to optimally cover the aging baby-boom generation and other cohorts.
Applying cognitive psychological principles to the improvement of survey data: A case study from the National Survey on Drug Use and Health
CITATION: Kennet, J., Painter, D., Barker, P., Aldworth, J., & Vorburger, M. (2005). Applying cognitive psychological principles to the improvement of survey data: A case study from the National Survey on Drug Use and Health. In Proceedings of the 2005 Joint Statistical Meetings, American Statistical Association, AAPOR - Section on Survey Research Methods, Minneapolis, MN (pp. 3887–3897). Alexandria, VA: American Statistical Association.
PURPOSE/OVERVIEW: The National Survey on Drug Use and Health (NSDUH) collects data on Medicare and Medicaid coverage as part of a general interview conducted after the core drug use measures have been administered. Although the overall estimates derived from the NSDUH Medicare and Medicaid coverage questions have generally appeared credible, it became apparent that among individuals younger than 65 years old, Medicare coverage was overreported and Medicaid coverage was underreported. Among adults aged 65 or older, Medicaid coverage appeared to be highly overreported. These judgments were based on “eyeball” comparisons with estimates from the Survey of Income and Program Participation (SIPP) and the Current Population Survey (CPS), both of which administered more detailed modules on health insurance coverage.
METHODS: The Medicare and Medicaid questions were subject to expert reviews within the context of Tourangeau, Rips, and Rasinski’s (2000) Response Process Model, which posits four processes involved in answering survey questions: comprehension, retrieval, judgment, and response. Three reviewers independently critiqued the questions. Two were survey methodologists having extensive experience with NSDUH’s content and fielding. The third reviewer was a cognitive psychologist who was new to the project. The review team met several times over the course of a few days to compare comments and draft a revised pair of items.
RESULTS/CONCLUSIONS: Expert review of the NSDUH question wordings suggested that inadequate establishment of context (i.e., defining terms after using them in the questions) and other syntactic difficulties created excessive demands on working memory. Correction of these problems in the 2003 NSDUH resulted in age group coverage estimates that more closely matched those obtained in the other surveys, which targeted this topic more specifically.
Assessing the reliability of key measures in the National Survey on Drug Use and Health using a test-retest methodology
CITATION: Kennet, J., Painter, D., Hunter, S. R., Granger, R. A., & Bowman, K. R. (2005, November). Assessing the reliability of key measures in the National Survey on Drug Use and Health using a test-retest methodology. Paper presented at the Federal Committee on Statistical Methodology Research Conference, Washington, DC.
PURPOSE/OVERVIEW: The National Survey on Drug Use and Health (NSDUH) is a major source of information on substance use and mental illness prevalence in the United States. It is administered in households to approximately 67,500 individuals annually using a complex, multistage sampling design. Assessing the reliability of estimates produced by NSDUH is of primary importance to those who use these data for research and in the making of policy decisions. In quarters 1 and 2 of 2005, a pretest was carried out in which approximately 200 NSDUH respondents were reinterviewed in an effort to refine the methods to be used in conducting a large-scale reliability field test in 2006. This paper discusses the design and procedural considerations that were taken into account in planning the pretest and upcoming field test.
METHODS: Considerations included the time interval between the test and pretest, the sample size needed for reliability estimates of low prevalence behaviors, whether the sample would be embedded or not in the NSDUH main study, using the same versus different interviewers for the reinterview, increased risk of loss of respondent privacy due to the provision of recontact information, amount of incentive for the reinterview, and others. In addition, preliminary findings from the pretest were discussed that may influence methods employed in the 2006 field test, such as response rates on the reinterview and respondent feedback.
RESULTS/CONCLUSIONS: The reliability study pretest achieved higher than expected reinterview response rates, successfully completed reinterviews within the 5- to 15-day window, displayed a high level of consistency in responses to drug and demographic questions between T1 and T2, received a positive response from respondents, and demonstrated that field interviewers will be able to follow the procedures and protocols in the 2006 reliability study.
The use of monetary incentives in federal surveys on substance use and abuse
CITATION: Kulka, R. A., Eyerman, J., & McNeeley, M. E. (2005). The use of monetary incentives in federal surveys on substance use and abuse. Journal of Economic and Social Measurement, 30(2–3), 233–249.
PURPOSE/OVERVIEW: Empirical research over the past 30 years has shown positive effects of monetary incentives on response rates. This paper discusses the use of incentives specifically for surveys on substance use and abuse.
METHODS: This paper starts by providing a background and review of the current empirical literature on the effects that monetary incentives have on survey response rates, survey statistics, and other practical and operational issues in surveys. Next, two controlled experiments on the effect of incentives on substance use surveys are discussed: the Alcohol and Drug Services Study (ADSS) and the National Household Survey on Drug Abuse (NHSDA). Each of the studies randomized respondents into different incentive categories, including no incentives and two to three levels of increasing incentives.
RESULTS/CONCLUSIONS: The ADSS results revealed that higher incentives correlated with higher cooperation rates, but that the difference in cooperation rates between the levels of incentives was small. The analyses also revealed that the incentives did not affect different subgroups differently. In addition, the results indicated that the use of incentives did not affect the quality of respondents’ answers. The NHSDA results, however, showed that although incentives did increase response rates, they had differing impacts on different subgroups. The results of the analyses on survey response bias were inconclusive. The results of these two studies revealed that more research is needed on this topic to further understand the effect of incentives on survey data quality. The results of these studies are described in more detail in a series of papers appearing in the Journal of Economic and Social Measurement.
Effects of the September 11, 2001, terrorist attacks on NSDUH response rates
CITATION: McNeeley, M. E., Odom, D., Stivers, J., Frechtel, P., Langer, M., Brantley, J., Painter, D., & Gfroerer, J. (2005). Effects of the September 11, 2001, terrorist attacks on NSDUH response rates. In J. Kennet & J. Gfroerer (Eds.), Evaluating and improving methods used in the National Survey on Drug Use and Health (HHS Publication No. SMA 05-4044, Methodology Series M-5, pp. 31–58). Rockville, MD: Substance Abuse and Mental Health Services Administration, Office of Applied Studies.
PURPOSE/OVERVIEW: The purpose of this study was to determine whether the events of September 11th had an effect on screening response rates (SRRs) and interview response rates (IRRs).
METHODS: In addition to the preliminary statistical analysis of response rate data, logistic regression models were run to provide a more refined analysis. Field interviewer (FI) focus groups also were used to assess changes in the logistics of FI activity and in the use of the lead letter. To capture heightened concerns about security, FIs also were asked about increases in controlled access problems and changes in the mode of contact with screening and interview respondents.
RESULTS/CONCLUSIONS: It was found that the New York City consolidated metropolitan statistical area (CMSA) and Washington, DC, primary metropolitan statistical area (PMSA) response rates suffered dramatic decreases following the September 11th terrorist attacks, although the differences in IRR in the Washington, DC, PMSA were shown to be significant only after modeling on a number of factors. The national screening response rate (SRR) also showed a decrease even after removing the New York City CMSA and Washington, DC, PMSA from the sample. This decrease was significant but less dramatic than in the two metropolitan areas.
Interviewer falsification detection using data mining
CITATION: Murphy, J., Eyerman, J., McCue, C., Hottinger, C., & Kennet, J. (2005, October). Interviewer falsification detection using data mining. Presented at Statistics Canada’s 22nd International Symposium Series, Ottawa, Ontario, Canada.
PURPOSE/OVERVIEW: Interviewer falsification is the deliberate creation of survey responses by the interviewer without input from the respondent. Undetected falsification can introduce bias into the population estimates if falsified responses do not match the values that would have been provided by respondents. Currently, the procedures required to detect interviewer fraud can be expensive and draw resources away from the study that could be applied to data quality procedures. Data mining can be used to program falsification propensity checks that may be run on a frequent basis, facilitating timely and relatively inexpensive detection and remediation, and serving as a possible deterrent to falsification.
METHODS: This paper describes an innovative use of data mining on response data and metadata to identify, characterize, and prevent falsification by field interviewers on the National Survey on Drug Use and Health (NSDUH).
RESULTS/CONCLUSIONS: This study yielded mixed results. It clearly demonstrated that there is potential for data mining to be used as a falsification detection tool on NSDUH. In particular, the complementary findings noted in different data resources underscored the value that can be obtained by using a combination of automated search strategies and expert review on the various NSDUH databases. For example, a putative association between difficult-to-complete interviews and falsification was supported both by the record of calls (ROC) data, which suggested that the decision to falsify was made after an interviewer had experienced difficulty in contacting a subject, and by the finding that an increased number of breakoffs was associated with interviewers who had been placed under increased scrutiny by the data quality team.
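The paper does not publish its detection algorithms, but the kind of automated propensity check it describes can be illustrated with a deliberately simple sketch: flagging field interviewers (FIs) whose mean interview duration is anomalously short relative to their peers. The variable names and the z-score cutoff below are hypothetical illustrations, not values taken from the study.

```python
from statistics import mean, stdev

def flag_short_interviews(durations_by_fi, z_cutoff=-2.0):
    """Flag FIs whose mean interview duration (in minutes) is anomalously
    short relative to other FIs -- a simple screen of the kind a
    falsification-propensity check might automate.

    durations_by_fi: dict mapping FI id -> list of interview durations.
    Returns a sorted list of flagged FI ids.
    """
    # Mean duration per interviewer.
    fi_means = {fi: mean(d) for fi, d in durations_by_fi.items()}
    # Standardize each FI's mean against the distribution of FI means.
    overall = list(fi_means.values())
    mu, sd = mean(overall), stdev(overall)
    return sorted(fi for fi, m in fi_means.items()
                  if (m - mu) / sd < z_cutoff)
```

In practice such a flag would only trigger follow-up verification (e.g., recontacting respondents), since short interviews can have legitimate causes.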
Appendix C: Research on the impact of changes in NSDUH methods
CITATION: Office of Applied Studies. (2005). Appendix C: Research on the impact of changes in NSDUH methods. In Results from the 2004 National Survey on Drug Use and Health: National findings (HHS Publication No. SMA 05-4062, NSDUH Series H-28, pp. 145–154). Rockville, MD: Substance Abuse and Mental Health Services Administration.
PURPOSE/OVERVIEW: Although the design of the 2002 through 2004 National Surveys on Drug Use and Health (NSDUH) was similar to the design of the 1999 through 2001 surveys, there were important methodological differences between the 2002 to 2004 NSDUHs and prior surveys, including a change in the survey’s name, the introduction of an incentive, improved data collection quality control procedures, and the use of the 2000 decennial census for sample weighting. The results of the 2002 survey suggested that the incentive had an impact on estimates. A panel of survey methodology experts concluded that it would not be possible to measure the effects of each change separately because of the multiple changes made to the survey simultaneously and recommended that the Substance Abuse and Mental Health Services Administration (SAMHSA) continue its analyses of the 2001 and 2002 data to learn as much as possible about the impacts of each of the methodological improvements. The purpose of this appendix is to summarize the studies of the effects of the 2002 method changes and to discuss the implications of this body of research for analysis of NSDUH data.
METHODS: Early analyses were presented to a panel of survey design and survey methodology experts convened on September 12, 2002. The analyses included (1) retrospective cohort analyses; (2) response rate pattern analyses; (3) response rate impact analyses; (4) analyses of the impact of new census data; and (5) model-based analyses of protocol changes, name change, and incentives. Since 2002, two additional analyses were conducted that extend those described above: more in-depth incentive experiment analyses and further analysis of the 2001 field interventions.
RESULTS/CONCLUSIONS: A summary of all of the results of the 2002 NSDUH method analyses was presented to a second panel of consultants on April 28, 2005. The panel concluded that there was no possibility of developing a valid direct adjustment method for the NSDUH data and that SAMHSA should not compare 2002 and later estimates with 2001 and earlier estimates for trend assessment. The panel suggested that SAMHSA make this recommendation to users of NSDUH data.
Analysis of NSDUH record of call data to study the effects of a respondent incentive payment
CITATION: Painter, D., Wright, D., Chromy, J. R., Meyer, M., Granger, R. A., & Clarke, A. (2005). Analysis of NSDUH record of call data to study the effects of a respondent incentive payment. In J. Kennet & J. Gfroerer (Eds.), Evaluating and improving methods used in the National Survey on Drug Use and Health (HHS Publication No. SMA 05-4044, Methodology Series M-5, pp. 19–30). Rockville, MD: Substance Abuse and Mental Health Services Administration, Office of Applied Studies.
PURPOSE/OVERVIEW: The National Survey on Drug Use and Health (NSDUH) employs a multistage area probability sample to produce population estimates of the prevalence of substance use and other health-related issues. Letters are sent to selected households to alert potential respondents of the interviewer’s future visit. Interviewers then visit the residence to conduct a brief screening, which determines whether none, one, or two individuals are selected from the household. To maximize response rates, interviewers may make several visits to a household to obtain cooperation (the term “call” is used for “visits” from this point on with the understanding that this is a face-to-face survey; telephones are not used to contact potential respondents). In-person interviews are conducted with selected respondents using both computer-assisted personal interviewing (CAPI) and audio computer-assisted self-interviewing (ACASI). Sensitive questions, such as those on illicit drug use, are asked using the ACASI method to encourage honest reporting. The NSDUH methodology is described in detail elsewhere (RTI International, 2003). During the late 1990s, NSDUH experienced a slight decline in response rates. A closer examination of the data revealed stable noncontact patterns, but increasing refusal rates (Office of Applied Studies [OAS], 2001). This implied that sample members were becoming less likely to participate once they were contacted. This was compounded by the need to hire a large number of new interviewers who may not have had the confidence or skills to overcome respondent refusals.
METHODS: Given the slight decline in response rates, and the expectation that this trend might become more serious, NSDUH staff designed an experiment to evaluate the effectiveness of monetary incentives in improving respondent cooperation. A randomized, split-sample experiment was conducted during the first 6 months of data collection in 2001. The experiment was designed to compare the impact of $20 and $40 incentive treatments with a $0 control group on measures of respondent cooperation and survey costs.
RESULTS/CONCLUSIONS: The results showed that both the $20 and $40 incentives increased overall response rates while producing significant cost savings when compared with the $0 control group (Eyerman, Bowman, Butler, & Wright, 2002). Preliminary analysis showed no statistically detectable effects of the three incentive treatments on selected substance use estimates. Subsequent analysis showed some positive and negative effects depending on the substance use measure when the $20 and $40 treatments were combined and compared with the $0 control group (Wright, Bowman, Butler, & Eyerman, 2002). Based on the outcome of the 2001 experiment, NSDUH staff implemented a $30 incentive in 2002. Their analysis showed that a $30 incentive would strike a balance between gains in response rates and cost savings. This chapter analyzes the effect of the new $30 incentive on the data collection process as measured by record of calls (ROC) information. The effect of the incentives implemented in 2002 on response rates and costs is discussed by Kennet et al. in Chapter 2 of this volume.
Analyzing audit trails in NSDUH
CITATION: Penne, M. A., Snodgrass, J., & Barker, P. (2005). Analyzing audit trails in NSDUH. In J. Kennet & J. Gfroerer (Eds.), Evaluating and improving methods used in the National Survey on Drug Use and Health (HHS Publication No. SMA 05-4044, Methodology Series M-5, pp. 105–120). Rockville, MD: Substance Abuse and Mental Health Services Administration, Office of Applied Studies.
PURPOSE/OVERVIEW: For the National Survey on Drug Use and Health (NSDUH), the interview sections on substance use and other sensitive topics were changed in 1999 from self-administered paper-and-pencil interviewing (PAPI) to audio computer-assisted self-interviewing (ACASI). These changes were prompted by research that showed that computer-assisted interviewing (CAI) questionnaires reduced input errors. Research also showed that use of ACASI increased comprehension for less literate respondents and, by increasing privacy, resulted in more honest reporting of illicit drug use and other sensitive behaviors.
METHODS: In this chapter, the earlier work is briefly described, possible methods for streamlining the data-processing portion are discussed, and audit trails from the 2002 survey are used to investigate three aspects of data quality: question timing, respondent breakoffs, and respondent “backing up” to change prior responses.
RESULTS/CONCLUSIONS: Timing data showed that when measured against a gold standard (GS) time, field interviewers (FIs) were spending approximately the correct amount of time with the very beginning of the interview at the introduction to the CAI instrument screen. However, once past this point, they spent less time than the GS on several important aspects of the questionnaire, such as setting up the calendar, setting up the ACASI tutorial, completing the verification form, and ending the interview with the respondent. Conversely, they were taking longer than the GS in ending the ACASI portion of the interview.
Modeling context effects in the National Survey of Drug Use and Health (NSDUH)
CITATION: Wang, K., Baxter, R., & Painter, D. (2005). Modeling context effects in the National Survey of Drug Use and Health (NSDUH). In Proceedings of the 2005 Joint Statistical Meetings, American Statistical Association, Section on Survey Research Methods, Minneapolis, MN (pp. 3646–3651). Alexandria, VA: American Statistical Association.
PURPOSE/OVERVIEW: Context effects occur when the response to a question is affected by information that is not part of the question itself, typically because the content of the preceding questions has influenced the cognitive response process. In terms of questionnaire changes, context effects may be said to take place between two survey questions when a change introduced to the first (or contextual) item affects the response process for the subsequent (target) item, which in turn may lead to a different response than if the change had not been made. Comparatively little work has been done to examine whether different types of respondents might be more or less susceptible to changes in context.
METHODS: In this paper, the authors used data from the National Survey on Drug Use and Health (NSDUH) for 2002 and 2003 to determine whether some types of respondents were more strongly affected by a contextual change than others.
RESULTS/CONCLUSIONS: The authors found that the removal of item SEN13A in the 2003 NSDUH had an effect on responses to item SEN13B in 2003 as compared with previous years. What remained unclear was the mechanism by which the removal of SEN13A affected responses to SEN13B. The estimated models identified current cigarette users (who were not also current marijuana users) as respondents who were more likely to respond to item SEN13B with “neither approve nor disapprove” in 2003 than in 2002. The models also correctly identified respondents who previously used cigarettes and had never used marijuana as more likely to respond to SEN13B with “neither approve nor disapprove” in 2003 than in 2002.
Are two feet in the door better than one? Using process data to examine interviewer effort and nonresponse bias
CITATION: Wang, K., Murphy, J., Baxter, R., & Aldworth, J. (2005, November). Are two feet in the door better than one? Using process data to examine interviewer effort and nonresponse bias. Paper presented at the Federal Committee on Statistical Methodology Research Conference, Washington, DC.
PURPOSE/OVERVIEW: The authors examined the use of administrative call record data from the National Survey on Drug Use and Health (NSDUH) in order to address interviewing issues. The authors first described NSDUH and the available call record data, then examined how the calling strategies of interviewers can affect contact and cooperation rates. The authors also conducted analyses to examine the relationships between the volume of call attempts and survey estimates and, in turn, the potential for bias due to nonresponse.
METHODS: The authors took the first steps in analyzing NSDUH process data to address questions regarding interviewer efforts and effects on response rates and survey estimates.
RESULTS/CONCLUSIONS: The authors found that calling times, defined by the time of the call (before or after 4:00 p.m.) and the day of the week (weekday vs. weekend), were related to contact on the first attempt for the screener. They also found evidence that using less intensive follow-up efforts would not necessarily lead to survey estimates that differed significantly from estimates obtained with greater effort. Finally, they suggested that reduction of the interviewing effort on a per case basis could lead to reductions in data collection costs.
Decomposing the total variation in a nested random effects model of neighborhood, household, and individual components when the dependent variable is dichotomous: Implications for adolescent marijuana use
CITATION: Wright, D., Bobashev, G. V., & Novak, S. P. (2005). Decomposing the total variation in a nested random effects model of neighborhood, household, and individual components when the dependent variable is dichotomous: Implications for adolescent marijuana use. Drug and Alcohol Dependence, 78(2), 195–204. [PubMed: 15845323]
PURPOSE/OVERVIEW: Multilevel modeling techniques have become a useful tool that enables substance abuse researchers to more accurately identify the contribution of multiple levels of influence on drug-related attitudes and behaviors. However, it is difficult to determine the relative importance of the different hierarchical levels because, for dichotomous outcomes, the variance components at the lowest level of estimation are calculated on a log-odds metric.
METHODS: The authors presented methods that were introduced by Goldstein and Rasbash (1996) to convert the variance components from the log-odds to the probability metric. These methods have a few advantages in that they provide a more logical and interpretable way to examine variation for nonlinear outcomes, which tend to be heavily utilized in substance use research. With data from the 1999 National Household Survey on Drug Abuse, the authors partitioned variation among individual, household, and neighborhood levels for the binary outcome of past year marijuana use to illustrate this approach. The authors also conducted a stability analysis to examine the robustness across different estimation procedures commonly available in commercial multilevel software packages. Furthermore, the authors partitioned the variance components using a conventional continuously distributed outcome and compared the relative magnitudes across binary and continuous outcomes.
RESULTS/CONCLUSIONS: The authors reported that both binary and continuous indicators of drug use could be used to characterize use within households and neighborhoods in a statistically similar way, yielding interpretable results. They demonstrated that the inverse logit method was applicable to any number of hierarchical levels and to clusters of varying size.
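Goldstein and Rasbash’s conversion method is more involved than can be shown here, but the intuition behind partitioning variance for a binary outcome can be sketched with the common latent-variable convention, which fixes the individual-level residual variance of a logistic model at π²/3 ≈ 3.29. The variance components below are made-up illustrations, not estimates from the study.

```python
import math

def variance_shares(sigma2_household, sigma2_neighborhood):
    """Approximate the proportion of total variation at each level of a
    three-level logistic model (individuals nested in households nested
    in neighborhoods), using the latent-variable convention that the
    individual-level residual variance is pi^2 / 3 (about 3.29).
    """
    sigma2_individual = math.pi ** 2 / 3
    total = sigma2_individual + sigma2_household + sigma2_neighborhood
    return {
        "individual": sigma2_individual / total,
        "household": sigma2_household / total,
        "neighborhood": sigma2_neighborhood / total,
    }
```

With hypothetical household and neighborhood components of 1.0 and 0.5 on the log-odds scale, the household share works out to roughly 21 percent of the total latent variation.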
Non-response bias from the National Household Survey on Drug Abuse incentive experiment
CITATION: Wright, D., Bowman, K., Butler, D., & Eyerman, J. (2005). Non-response bias from the National Household Survey on Drug Abuse incentive experiment. Journal of Economic and Social Measurement, 30(2–3), 219–231.
PURPOSE/OVERVIEW: A preliminary experiment conducted in the 2001 National Household Survey on Drug Abuse (NHSDA) concluded that providing incentives increased response rates; therefore, a $30 incentive was used in the subsequent 2002 NHSDA. The purpose of this paper is to explore the effect that the incentive had on nonresponse bias.
METHODS: First, the sample data were weighted by the likelihood of response for the incentive and nonincentive cases. Next, a logistic regression model was fit using substance use variables while controlling for demographic variables associated with either response propensity or drug use.
RESULTS/CONCLUSIONS: The results indicate that for past year marijuana use, the incentive is either encouraging users to respond who otherwise would not respond, or it is encouraging respondents who would have participated without the incentive to report more honestly about drug use. Therefore, it is difficult to determine whether the incentive money is reducing nonresponse bias, response bias, or both. However, reports of past year and lifetime cocaine use did not increase in the incentive category, and past month cocaine use was actually lower in the incentive group than in the control group.
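The response-propensity weighting step described above can be sketched as follows: fit a logistic regression predicting response, then weight each respondent by the inverse of the predicted response probability. The data, covariates, and coefficients below are hypothetical, invented purely for illustration; they do not reproduce the NHSDA experiment's actual model.

```python
import numpy as np

def fit_logit(X, y, n_iter=25):
    """Logistic regression via Newton-Raphson (IRLS); X includes an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        grad = X.T @ (y - p)                       # score vector
        hess = (X * (p * (1.0 - p))[:, None]).T @ X  # observed information
        beta += np.linalg.solve(hess, grad)
    return beta

# Hypothetical sample: one demographic covariate plus an incentive indicator.
rng = np.random.default_rng(1)
n = 5000
age = rng.normal(0.0, 1.0, n)
incentive = rng.integers(0, 2, n)
X = np.column_stack([np.ones(n), age, incentive])
true_beta = np.array([0.5, -0.3, 0.6])            # invented response model
responded = rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ true_beta)))

beta_hat = fit_logit(X, responded.astype(float))
propensity = 1.0 / (1.0 + np.exp(-(X[responded] @ beta_hat)))
weights = 1.0 / propensity  # respondents re-weighted to represent the full sample
```

Because each respondent stands in for 1/p of the original sample, the weights sum to approximately the full sample size, which is what allows incentive and nonincentive cases to be compared on a common footing.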