Am J Public Health. 2002 February; 92(2): 294–297.
PMCID: PMC1447060

The Differential Effects of Face-to-Face and Computer Interview Modes


Objectives. This study assessed the differential effects of face-to-face interviewing and audio-computer assisted self-interviewing (audio-CASI) on categories of questions.

Methods. Syringe exchange program participants (n = 1417) completed face-to-face interviews or audio-CASI. The questionnaire was categorized into the groups “stigmatized behaviors,” “neutral behaviors,” and “psychological distress.” Interview modes were compared for questions from each category.

Results. Audio-CASI elicited more frequent reporting of “stigmatized behaviors” than face-to-face interviews. Face-to-face interviewing elicited more frequent reporting of “psychological distress” than audio-CASI.

Conclusions. Responding to potentially sensitive questions should not be seen as merely “providing data,” but rather as an activity with complex motivations. These motivations can include maintaining social respect, obtaining social support, and altruism. Ideally, procedures for collecting self-report data would maximize altruistic motivation while accommodating the other motives.

Many areas of health and behavioral research rely on self-report data, despite the knowledge that such data may not always be accurate and complete. Factors that motivate participation in research are complex1 and may lead to differential responding within different interview modes. For example, response bias can occur as a result of respondents' desire to present themselves in a favorable light.2

There is substantial evidence that self-reports of drug use and other stigmatized behaviors vary by mode of interview.3–5 Studies have shown that the amount of information a respondent reveals is positively related to the level of privacy of the interview. Methodological problems with self-report questionnaires can have a profound impact in fields such as HIV/AIDS research, where such questionnaires are the primary means of obtaining information on risk behaviors.

New interview methods are being developed to improve the quality of self-report data. One such innovation is computer assisted self-interviewing (CASI), in which respondents read survey questions on a computer screen and then directly enter their responses into the computer. In audio-CASI, the questions are presented on the computer screen and read to the respondent through headphones, facilitating use by respondents who are not literate in the interview language.

Several studies have addressed the effects of CASI, generating complex and, at times, contradictory findings. Comparisons of CASI with face-to-face interviewing have concluded that subjects completing computer interviews disclose more socially undesirable attitudes, facts, and behaviors.6–9 Others have reported contrary information, finding that respondents reported more socially undesirable behavior in the face-to-face interview modes than with CASI.10 Little or no difference between CASI and face-to-face interviews also has been reported.11–13

A recent study by Williams et al.14 comparing the reliability of self-reports of risk behaviors using CASI and face-to-face interviewing underscores the complexity of mode effects. The investigators did not find that CASI elicited more reporting of risk behaviors—the 2 modes were comparable in terms of the reliability of self-reports of HIV risk behaviors—but biases were detected in the reported number of times participants engaged in risk behavior.

Additionally, there may be some circumstances in which respondents find answering to a computer to be “impersonal,” and this may affect reporting of specific attitudes and behaviors. In one study, individuals interviewed face-to-face were more likely to report psychiatric symptoms and depression than individuals interviewed by telephone—which, like audio-CASI, is a more anonymous mode.15

Increased disclosure of psychiatric symptoms in a face-to-face interview may demonstrate the use of the interview process by patients as a “cry for help.”16,17 Respondents may use the interview as an opportunity to garner sympathy or social support for their emotional problems.18,19 Thus, the interview process may, in fact, serve as a medium for interpersonal connection, motivating respondents to express their true problems. Reducing the role of the human interviewer may therefore make the interview process “impersonal” for respondents and may reduce the likelihood that they will disclose the types of psychological distress for which sympathy or social support might be expected. No study to date, however, has addressed the effects of audio-CASI on distress questions.

We examined the effects of interview mode on self-disclosure for heavily “stigmatized behaviors,” for which embarrassment would be very likely and social support unlikely, and for “psychological distress,” for which social support would be likely and embarrassment less likely.


Methods

This report is a secondary analysis of data collected to assess differences between face-to-face interviewing and audio-CASI in self-reports of HIV risk behavior among injecting drug users attending syringe exchange programs in 4 US cities. A full presentation of the methods is provided by Des Jarlais et al.8

Data Collection

Interviews were completed during 1997 and 1998 with participants of syringe exchange programs in New York, NY; Chicago, Ill; Tacoma, Wash; and Los Angeles, Calif. Participants were recruited from exchange lines. At each site, field workers used random-number tables to select a number, n (from 1 to 6); the nth person in line to exchange syringes was then asked whether he or she was willing to participate in a research study. The study was explained, and oral informed consent was obtained.
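The selection rule described above can be sketched in code. This is an illustrative reconstruction, not the study's field procedure: the function name and participant labels are invented, and Python's random number generator stands in for the field workers' random-number tables (the 1-to-6 window matches the range described).

```python
import random

def select_participant(line, rng):
    """Pick the nth person waiting in line, with n drawn uniformly from
    1 to 6, mimicking the random-number-table rule described above.
    Returns None when fewer than n people are in line."""
    n = rng.randint(1, 6)  # randint is inclusive on both ends
    return line[n - 1] if n <= len(line) else None

# Illustrative use with placeholder participant labels
rng = random.Random(2024)
waiting = [f"participant_{i}" for i in range(1, 9)]
print(select_participant(waiting, rng))
```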

Audio-CASI and face-to-face interview modes were used in alternate weeks at each exchange. For the audio-CASI interviews, the staff instructed the respondent on the use of the computer and then allowed the respondent to complete the interview in private. For the face-to-face interviews, paper-and-pencil questionnaires were used, and data collection staff read each question and recorded the respondent's answers using traditional interviewing techniques.

The original interview instrument contained approximately 280 questions with items on sociodemographic characteristics, attitudes toward program operations and staff, drug use, sexual behaviors, and physical and mental health histories. For this secondary analysis, the questionnaire was abbreviated to 90 questions. Questions regarding drug use and HIV risk behaviors during the prior 30 days were retained, while the identical questions pertaining to the 30 days before using the exchange were eliminated. In the sections on drug use and sexual behaviors, where gateway questions were followed by a series of specific questions, only the gateway questions were retained. Finally, in the sections regarding program operations and attitudes toward staff, only every third question was retained.

To test our hypotheses, we classified the interview questions into 3 categories: stigmatized behaviors (category A), neutral behaviors (category B), and emotional distress (category C). We recruited 3 raters who were generally familiar with injecting drug users but were unaware of both our hypotheses and the data. Their familiarity with injecting drug users served as background; they were instructed to categorize the questions according to the criteria in the rater instructions. Agreement among the raters was high (pairwise κ = 0.745, 0.728, and 0.777). We used only the questionnaire items on which all 3 raters agreed, yielding 51 stigmatized behavior items, 20 neutral items, and 4 emotional distress items.
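As an illustration of the agreement statistic used here, Cohen's kappa for a pair of raters can be computed from scratch: the observed proportion of agreement is corrected for the agreement expected by chance. The ratings below are hypothetical examples, not the study's data.

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's kappa for two raters assigning categorical labels."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed proportion of agreement
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected agreement if the raters labeled items independently
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[k] * c2[k] for k in c1.keys() | c2.keys()) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical classifications of 10 items into categories A/B/C
print(round(cohen_kappa(list("AAABBBCCAA"), list("AAABBCCCAA")), 3))  # 0.841
```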

We analyzed data from the first interview with each participant, using SPSS version 8.0 (SPSS Inc, Chicago, Ill). Categorical responses were classified as “presence” or “absence” of each behavior. Chi-square tests were used to compare the proportion of participants who reported a behavior with audio-CASI against the proportion who reported it in the face-to-face mode; t tests were used for continuous variables. We hypothesized that audio-CASI would elicit a greater proportion of affirmative responses for category A, that there would be no difference for category B, and that face-to-face interviews would elicit a greater proportion of affirmative responses for category C. We calculated odds ratios with 95% confidence intervals to assess the degree of difference between the 2 interview methods, as well as adjusted odds ratios controlling for sex, age, race, and education.
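The unadjusted odds ratio with a 95% confidence interval can be computed directly from a 2 × 2 table of mode by reported behavior; a minimal sketch using the Woolf (log) method is shown below. The counts are invented for illustration and are not the study's results; only the group sizes (688 and 729) come from the text.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI for a 2x2 table:
       a = audio-CASI, behavior reported;   b = audio-CASI, not reported
       c = face-to-face, behavior reported; d = face-to-face, not reported"""
    oratio = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of the log odds ratio
    low = math.exp(math.log(oratio) - z * se_log)
    high = math.exp(math.log(oratio) + z * se_log)
    return oratio, low, high

# Invented counts: 300 of 688 audio-CASI respondents vs 200 of 729
# face-to-face respondents reporting some stigmatized behavior
oratio, low, high = odds_ratio_ci(300, 688 - 300, 200, 729 - 200)
print(f"OR = {oratio:.2f}, 95% CI ({low:.2f}, {high:.2f})")  # OR = 2.05, 95% CI (1.64, 2.55)
```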


Results

Ninety-five percent of the syringe exchange program participants who were approached agreed to participate in the study. Agreement to participate did not vary by mode, and the amount of missing data was minimal (<1%). The sample consisted of 1581 interviews. Using the unique identifiers created by the respondents, we removed all duplicate interviews from the data set. The final sample included 1417 respondents, 688 of whom completed the audio-CASI interview and 729 of whom were interviewed face-to-face (see Table 1 for demographics).

TABLE 1—Demographic Characteristics of Participants of Study Assessing Effects of Face-to-Face Interviewing vs Audio-Computer Assisted Self-Interviewing (Audio-CASI)

Category A contained 51 items, 40 of which were analyzed with χ2 tests and 11 with t tests. Seventy-three percent of these questions showed increased reporting with audio-CASI (P < .05). Twenty questions were included in category B (P < .05). All 4 questions in category C were analyzed with χ2 tests, and 75% showed increased reporting in the face-to-face mode. Examples of items with interview mode differences for the 3 categories of questions are presented in Table 2, along with percentage differences in reported behaviors between audio-CASI and face-to-face interviewing and odds ratios.

TABLE 2—Behaviors Reported by Participants for Each Response Category, by Interview Method


Discussion

There are 3 clear limitations to this study. First, we had no method of verifying the self-reported data. Verification of the sexual and drug-injecting behaviors in category A would be both impractical and a severe invasion of the subjects' privacy, and verification of the subjective feeling states in category C would be even more difficult. Still, it is difficult to imagine why large numbers of subjects would report either the stigmatized behaviors or the psychological distress if they were not engaging in or experiencing these behaviors and problems. Subjects in the face-to-face interviewing condition might plausibly exaggerate the extent of their psychological distress in the hope of receiving sympathy or social support, but it does not appear plausible that large numbers of subjects would report the problems if they were not experiencing them to some degree.

Second, regarding category C, the small number of questions and the inclusion of questions solely on depression limit the generalizability of these findings. Whether similar results would be obtained for other types of psychological distress remains to be determined in future research.

Finally, participants in needle exchange programs represent a unique population, and whether the findings of this study are replicable in other populations remains open to future research.

Despite these limitations, the interview mode differences between the “stigmatized” HIV risk behaviors and “psychological distress” were notable. These differences reached conventional statistical significance levels in opposite directions—significantly more reporting of stigmatized behaviors with audio-CASI and significantly more reporting of “psychological distress” in face-to-face interviewing.

An examination of the “psychological distress” questions highlights an important point regarding the use of self-administered questionnaires in general, and computer self-administered questionnaires in particular. The process of collecting information about depression appears to be facilitated by the face-to-face interview. It is possible that an “impersonality” bias for particular types of questions does, in fact, exist: respondents may underreport to the computer because the impersonal nature of a computer interview is incongruent with the personal nature of questions about one's emotional or mental health. In the current study, only the depression questions appeared to be biased in this way, although other forms of data may also suffer from “impersonality” bias, particularly those related to psychological and mental health issues.

This study examined group differences in responding to audio-CASI and face-to-face interviewing. There may also be important individual differences in what is viewed as “stigmatized” vs a “problem for which social support is needed,” in the need to hide stigmatized behaviors, and in seeking social support. The context in which the interviewing occurs, as well as interviewer and respondent characteristics, may also affect the degree of stigmatization and the perceived likelihood of obtaining social support. The specific wording of a question may also determine whether the behavior is perceived as stigmatized or as a personal problem for which social support might be obtained. Further research will be needed to explore these issues.

Methodological and conceptual advances in collecting self-report data offer important opportunities for advancing behavioral and health-related science. From the research to date, audio-CASI appears to be an important advance for collecting data about stigmatized behaviors. The relationships between data collection modes and self-disclosure of various potentially sensitive behaviors will need to be systematically explored if the promise of audio-CASI is to be fulfilled.

Responding to potentially sensitive questions should not be seen as merely “providing data,” but rather as an activity with complex motivations. These motivations can include maintaining social respect, coping with stress, and altruism, such as providing accurate and valid data on issues like preventing HIV infection. Ideally, procedures for collecting self-report data would maximize the opportunities for altruistic motivation while accommodating the other likely motives.


This research was supported by grant R01 DA 09536 from the US National Institute on Drug Abuse.

An earlier version of this report was presented at the 128th annual meeting of the American Public Health Association, Boston, Mass, November 12–16, 2000.

The authors thank Sharon Schwartz, Bruce Link, Seiji Newman, Molly Yancovitz, and Julie Alperen for their contributions to this report.

This study was approved by the Committee on Scientific Affairs of the Beth Israel Medical Center, New York, NY, which serves as the Institutional Review Board.


J. C. Newman was responsible for the overall formulation of the hypothesis, data analysis, and paper preparation. D. C. Des Jarlais was the principal investigator of the study and assisted in all aspects of the paper's preparation. C. F. Turner and J. Gribble developed and implemented audio-CASI for the original study. P. Cooley was responsible for audio-CASI programming of the original study. D. Paone was responsible for instrument development and training of data collection staff for the original study.

Peer Reviewed


References

1. Groves RM, Cialdini RB, Couper MP. Understanding the decision to participate in a survey. Public Opinion Q. 1992;56:475–495.
2. Catania JA. A framework for conceptualizing reporting bias and its antecedents in interviews assessing human sexuality. J Sex Res. 1999;36:25–38.
3. Catania JA, Gibson DR, Chitwood DD, Coates TJ. Methodological problems in AIDS behavioral research: influences on measurement error and participation bias in studies of sexual behavior. Psychol Bull. 1990;108:339–362.
4. Aquilino WS. Interview mode effects in surveys of drug and alcohol use. Public Opinion Q. 1994;58:210–240.
5. Turner CF, Lessler JT, George B, Hubbard M, Witt M. Effects of mode of administration and wording on data quality. In: Turner CF, Lessler JT, Gfroerer JC, eds. Survey Measurement of Drug Abuse: Methodological Studies. Rockville, Md: National Institutes of Health; 1992:221–244. DHHS publication ADM 92-1929.
6. Couper MP, Rowe B. Evaluation of a computer-assisted self-interview component in a computer-assisted personal interview survey. Public Opinion Q. 1996;60:89–107.
7. Gribble JN, Miller HG, Cooley PC, Catania JA, Pollack L, Turner CF. The impact of T-CASI interviewing on reported drug use among men who have sex with men. Subst Use Misuse. 2000;35:869–890.
8. Des Jarlais DC, Paone D, Milliken J, et al. Audio-computer interviewing to measure risk behaviour for HIV among injecting drug users: a quasi-randomised trial. Lancet. 1999;353:1657–1661.
9. Turner CF, Ku L, Rogers SM, Lindberg LD, Pleck JH, Sonenstein FL. Adolescent sexual behavior, drug use and violence: increased reporting with computer survey technology. Science. 1998;280:867–873.
10. Tourangeau R, Rasinski K, Jobe JB, Smith TW, Pratt WF. Sources of error in a survey on sexual behavior. J Off Stat. 1997;13:341–365.
11. Sanders GD, Owens DK, Padian NS, Cardinalli AB, Sullivan AN, Nease RF. A computer-based interview to identify HIV risk behaviors and to assess patient preferences for HIV-related health states. In: Ozbolt JG, ed. Proceedings of the 18th Annual Symposium on Computer Applications in Medical Care. New York, NY: Institute of Electrical and Electronics Engineers; 1994:20–24.
12. Hasley S. A comparison of computer-based and personal interviews for the gynecologic history update. Obstet Gynecol. 1995;85:494–498.
13. Webb PM, Zimet GD, Fortenberry JD, Blythe MJ. Comparability of a computer-assisted versus written method for collecting health behavior information from adolescent patients. J Adolesc Health. 1999;24:383–388.
14. Williams ML, Freeman RC, Bowen AM, et al. A comparison of the reliability of self-reported drug use and sexual behaviors using computer-assisted versus face-to-face interviewing. AIDS Educ Prev. 2000;12:199–213.
15. Henson R, Cannell CF, Roth A. Effect of interview mode on reporting of moods, symptoms, and need for social approval. J Soc Psychol. 1978;105:123–129.
16. Dahlstrom W, Welsh G, Dahlstrom L. An MMPI Handbook. Vol 1. Minneapolis: University of Minnesota Press; 1972.
17. Marks P, Seeman W, Haller D. The Actuarial Use of the MMPI With Adolescents and Adults. Baltimore, Md: Williams & Wilkins; 1974.
18. Veroff J, Veroff JB. Social Incentives: A Life-Span Developmental Approach. New York, NY: Academic Press; 1980.
19. Hill CA. Affiliation motivation: people who need people but in different ways. J Pers Soc Psychol. 1987;52:1008–1018.
