J Interpers Violence. Author manuscript; available in PMC Aug 1, 2012.
PMCID: PMC3126890
NIHMSID: NIHMS253139

ACASI and Face-to-Face Interviews Yield Inconsistent Estimates of Domestic Violence Among Women in India: The Samata Health Study 2005-2009

Abstract

Background

Audio computer-assisted self-interviews (ACASI) are increasingly used in health research to improve the accuracy of data on sensitive behaviors. However, evidence remains limited on the use of ACASI among low-income populations in countries such as India and for the measurement of sensitive issues such as domestic violence.

Method

We compared reports of domestic violence and three less sensitive behaviors related to household decision making and spousal communication in ACASI and face-to-face interviews (FTFI) among 464 young married women enrolled in a longitudinal study of gender-based power and adverse health outcomes in low-income communities in Bangalore, India. We used a test-retest design. At the 12-month study visit, we elicited responses from each participant through FTFI first, followed by ACASI. At the 24-month visit, we reversed the order, implementing ACASI first, followed by FTFI. Univariable log-linear regression models and kappa statistics were used to examine ACASI’s effects on self-reports.

Results

Regression results showed significantly lower reporting in ACASI relative to FTFI at both visits, including for domestic violence (12-month risk ratio [RR] = 0.61, 95% CI = 0.52, 0.73; 24-month RR = 0.74, 95% CI = 0.62, 0.89). Response agreement between interview modes, assessed with kappa statistics, was universally low, though highest for domestic violence (12-month κ = 0.45; 24-month κ = 0.48). Older age and greater educational attainment appeared to be associated with higher response agreement.

Conclusions

Greater reporting in FTFI may be due to social desirability bias for the less sensitive questions and perceptions of therapeutic benefit for domestic violence. These results cast doubt on the appropriateness of using ACASI for measurement of sensitive behaviors in India.

Keywords: Computer interviewing, Data collection, India, Women’s health, Test-retest

Introduction

Audio computer-assisted self-interviews (ACASI) are increasingly being used in behavioral health research, most frequently for measuring sexual risks and illegal drug use. Advantages of ACASI include improved reporting of sensitive behaviors (Bernabe-Ortiz et al., 2008; Ghanem, Hutton, Zenilman, Zimba, & Erbelding, 2005; Hewett et al., 2008; Le, Blum, Magnani, Hewett, & Do, 2006; Metzger et al., 2000; Schroder, Carey, & Vanable, 2003; Simoes, Bastos, Moreira, Lynch, & Metzger, 2006; Tourangeau & Smith, 1996; Turner et al., 1998; van Griensven et al., 2006), fewer missing data (Jaspan et al., 2007; Johnson et al., 2001; Kurth et al., 2004), and enhanced participant experience (Edwards et al., 2007; Kurth et al., 2004; Metzger et al., 2000). However, evidence is limited and inconsistent on the extent to which improvements in data quality can result from using ACASI in populations in low-income countries or with limited experience using computers (Jennings, Lucenko, Malow, & Devieux, 2002; Mensch, Hewett, & Erulkar, 2003; Mensch, Hewett, Gregory, & Helleringer, 2008; Newman et al., 2002; NIMH Multisite HIV/STD Prevention Group, 2008) and specifically for measurement of domestic violence (Kim, Dubowitz, Hudson-Martin, & Lane, 2008; MacMillan et al., 2006; Rhodes et al., 2006; Rhodes, Lauderdale, He, Howes, & Levinson, 2002).

To date, a total of three studies on ACASI use in India have been published. All were focused on sexual behaviors, and, notably, improvements in the reporting of these behaviors were not observed uniformly. For example, in a study comparing ACASI and face-to-face interviews (FTFI) in assessing sexual behaviors among male and female adolescents in low-income urban neighborhoods, females consistently reported sexual behaviors less frequently in ACASI (Jaya, Hindin, & Ahmed, 2008). A second study with male college students and residents of low-income urban communities compared self-administered questionnaires, ACASI, and FTFI. The researchers concluded that, although ACASI improved reporting of sensitive behaviors among highly educated young men, its ability to effectively elicit such information from participants with less education was questionable (Potdar & Koenig, 2005). A third study on the feasibility of ACASI for measuring drug use and HIV risk behaviors among adult men from India, China, Peru, and Russia found that participants from India, who were recruited from slum communities, gave internally inconsistent responses, took the most time to complete ACASI, and were least likely to choose ACASI as the interview mode to optimize honesty and comfort (NIMH Collaborative HIV/STD Prevention Trial Group, 2007). To our knowledge, no study in India has assessed the feasibility and reliability of ACASI data collection with young, married adult women—a group identified as especially vulnerable to a range of adverse health outcomes that have traditionally been measured via FTFI (Barua & Kurz, 2001; George, 2002; Jejeebhoy, 1998; Santhya, Haberland, Ram, Sinha, & Mohanty, 2007). Domestic violence, which is liable to be underreported (Ellsberg, Heise, Pena, Agurto, & Winkvist, 2001), is one such adverse health outcome (Rocca, Rathod, Falle, Pande, & Krishnan, 2008).

In response to the limited knowledge base around ACASI in India, and specifically for assessing domestic violence, we examined the performance and acceptability of ACASI against FTFI among young married women residing in low-income communities in the city of Bangalore. Furthermore, we improved on other studies of ACASI conducted in India by examining the effect of interview mode and administration order on responses to both sensitive and less sensitive questions. This research was part of a prospective cohort study that examined the association between gender-based power and women’s risk for domestic violence and HIV and other sexually transmitted infections. Analyses from the full FTFI data indicated high levels of domestic violence (Krishnan, Rocca, et al., 2009; Rocca et al., 2008) and much lower reports of sexual risks; as such the ACASI question on recent spousal violence is our focus here. The four hypotheses we examined were as follows:

  • Hypothesis 1: Participants will be more likely to affirm having experienced domestic violence in ACASI, regardless of order of administration. That is, we expected nonequivalence of responses between interview modes.
  • Hypothesis 2: Participants will respond similarly to three less sensitive behaviors (participating in health decision making, participating in financial decision making, and having open spousal communication) in both ACASI and FTFI, regardless of order of administration. Thus, we expected that for these questions, response agreement would indicate equivalence between modes.
  • Hypothesis 3: Response agreement will increase over time.
  • Hypothesis 4: Age and education will be associated with response agreement.

Method

Study Setting and Design

Details about the main study have been described elsewhere (Rocca et al., 2008). Briefly, building on formative qualitative research (Krishnan, Iyengar, et al., 2005), a sample of 744 Tamil- or Kannada-speaking married women aged 16 to 25 was recruited from two urban communities and followed for 24 months. Annual study visits (a total of three visits over 2 years) included an FTFI, a medical examination, and reproductive health care and counseling. Interviews were conducted by locally recruited and trained female staff with sociodemographic backgrounds similar to those of the participants. All data collection activities took place in a private room in the local health center.

Eligibility criteria for the ACASI study included returning for the 12-month follow-up visit, consenting to the computer-based interview, and being fluent in Tamil. To simplify the consent process, participants in the ACASI study also had to be at least 18 years of age. Of the 653 participants who returned for the 12-month visit, 501 (78%) were eligible to participate in the ACASI study. Terms of study participation were explained orally and described on a printed consent form; informed consent was obtained from all study participants via signature or thumb print. Study protocols were approved by the human subjects protection committees of the Indian Institute of Management, Bangalore; University of California, San Francisco; and RTI International, San Francisco.

We used a test-retest design. At the 12-month study visit, we elicited responses from each participant through FTFI first, followed by ACASI. At the 24-month visit, we reversed the order, implementing ACASI first, followed by FTFI. For the ACASI, a study interviewer first sat with the participant to complete four nonsensitive practice questions (e.g., “Out of these colors, what is your favorite?”), after which the interviewer left the room.

Based on data collected in the 12-month FTFI, power calculations for our primary hypothesis indicated that we had 80% power to detect a 5.2% difference in domestic violence reports by interview mode and 90% power for a 6.1% difference.

ACASI Instrument Development

The ACASI instrument was programmed using Ci3 software (Sawtooth Technologies, Northbrook, IL). The instrument consisted of 32 questions drawn from the FTFI, including the four for this analysis (Table 1). Each question was identical across interview modes and study visits. The audio tracks used a female voice speaking in Tamil. Participants keyed in responses using color-coded buttons (Figure 1).

Figure 1
ACASI screenshot for the question “In the last six months have you been hit, kicked, or beaten by your husband for any reason? If No press the red button. If Yes press the green button.”
Table 1
Description of Measures

We conducted a pilot study between May and June 2006 with 18 women not enrolled in the main study, but similar in sociodemographic background to study participants. Based on feedback from pilot study participants, we expanded our description of the protection of confidentiality on the consent form and used more easily identifiable colors for response options. We were also able to identify which questions were considered sensitive and less sensitive.

Analysis

First, we compared the proportion of participants who responded in the affirmative to each of the four questions by interview mode at each study visit. To determine whether interview mode was associated with affirmative responses, we calculated risk ratios using univariable log-linear regression fit with generalized estimating equations (GEE), an exchangeable correlation structure, and robust standard errors to account for repeated observations. In separate 12- and 24-month models, we used the question of interest as the dependent variable (0 = no, 1 = yes) and interview mode (0 = FTFI, 1 = ACASI) as the independent variable, so that each participant contributed two observations to each model.
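As a rough illustration of this model, the sketch below fits a univariable log-linear (modified Poisson) GEE with an exchangeable correlation structure and robust standard errors, one common way to estimate risk ratios for a binary outcome with repeated observations. It is written in Python with statsmodels rather than the Stata 10 used in the study, and the file and column names (pid, mode, dv) are hypothetical placeholders, not the study's actual data structure.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Long format: two rows per participant (one per interview mode).
    # Hypothetical columns: pid (participant ID), mode (0 = FTFI, 1 = ACASI),
    # dv (1 = reported domestic violence in the last six months, 0 = did not).
    df = pd.read_csv("samata_12month_long.csv")  # hypothetical file name

    # Log-linear (modified Poisson) GEE, so the exponentiated mode coefficient is a risk ratio.
    model = smf.gee(
        "dv ~ mode",
        groups="pid",
        data=df,
        family=sm.families.Poisson(),           # log link is the Poisson default
        cov_struct=sm.cov_struct.Exchangeable(),
    )
    result = model.fit()  # GEE uses robust (sandwich) standard errors by default

    rr = np.exp(result.params["mode"])
    ci_low, ci_high = np.exp(result.conf_int().loc["mode"])
    print(f"RR (ACASI vs. FTFI) = {rr:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")

Running the same code on 24-month data would give the second set of risk ratios reported in the Results.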

Next, we examined agreement of individual responses (i.e., the percentage answering “yes” or “no” on both interviews for a given question) and calculated kappa scores (Landis & Koch, 1977). For this analysis, we considered kappa scores >0.50 as moderately consistent and >0.80 as highly consistent, indicating near-equivalence. We then compared kappa scores after stratifying by educational attainment (0 to 6 years, 7+ years) and age (18 to 22 years, 23 to 25 years) to examine whether either of these demographic characteristics was associated with agreement. To assess whether the order of interview administration was associated with response agreement, we also compared kappa scores from the 12- and 24-month study visits, using nonoverlapping 95% confidence intervals as a proxy for statistical significance at p < .05.
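A minimal sketch of the agreement calculation is shown below, again in Python. It computes Cohen's kappa with a bootstrap percentile interval standing in for the confidence intervals used to compare kappa scores; the paper does not specify how its intervals were derived, and the column names and education cutoff variable are hypothetical.

    import numpy as np
    import pandas as pd
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(0)

    def kappa_with_ci(ftfi, acasi, n_boot=2000, alpha=0.05):
        """Cohen's kappa for two yes/no response vectors, with a bootstrap percentile CI."""
        ftfi, acasi = np.asarray(ftfi), np.asarray(acasi)
        point = cohen_kappa_score(ftfi, acasi)
        n = len(ftfi)
        boots = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)  # resample participants with replacement
            boots.append(cohen_kappa_score(ftfi[idx], acasi[idx]))
        lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        return point, lo, hi

    # Wide format: one row per participant, hypothetical columns dv_ftfi, dv_acasi, educ_years.
    df = pd.read_csv("samata_12month_wide.csv")  # hypothetical file name
    df = df.dropna(subset=["dv_ftfi", "dv_acasi"])  # complete cases only

    # Overall agreement for the domestic violence question...
    print(kappa_with_ci(df["dv_ftfi"], df["dv_acasi"]))

    # ...and stratified by educational attainment (0-6 vs. 7+ years).
    for seven_plus, grp in df.groupby(df["educ_years"] >= 7):
        k, lo, hi = kappa_with_ci(grp["dv_ftfi"], grp["dv_acasi"])
        print(f"7+ years education = {seven_plus}: kappa = {k:.2f} (95% CI {lo:.2f}, {hi:.2f})")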

In addition, we described the pattern of missing data by tabulating the proportion of participants declining to answer questions by interview mode. Finally, we tabulated participants’ feedback on the interview modes (their comfort, relative honesty, and willingness to recommend ACASI to a friend), which was gathered through a short face-to-face feedback interview—using a different interviewer than for the FTFI—at the end of the 12-month visit. Data were analyzed using Stata 10 (StataCorp, College Station, TX).
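The missing-data and feedback tabulations could look like the following sketch (same hypothetical wide-format file and column names as above; the feedback column is likewise a placeholder).

    import pandas as pd

    # One row per participant; NaN marks a declined question, and more_honest_mode
    # is a hypothetical column coded "ACASI", "FTFI", or "Equal".
    df = pd.read_csv("samata_12month_wide.csv")  # hypothetical file name

    acasi_items = ["dv_acasi", "health_dec_acasi", "fin_dec_acasi", "comm_acasi"]
    ftfi_items = ["dv_ftfi", "health_dec_ftfi", "fin_dec_ftfi", "comm_ftfi"]

    # Proportion of participants declining to answer each question, by interview mode.
    print(df[acasi_items].isna().mean().round(3))
    print(df[ftfi_items].isna().mean().round(3))

    # Feedback on relative honesty, by whether domestic violence was reported in either mode.
    reported_either = df["dv_acasi"].eq(1) | df["dv_ftfi"].eq(1)
    print(pd.crosstab(reported_either, df["more_honest_mode"], normalize="index"))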

Results

Four hundred sixty-four women, constituting 92% of the 501 eligible participants, were enrolled in the ACASI study. All women were at least 18 years old, with more than half (54%) between 23 and 25 years (Table 2). More than half of participants (55%) reported not having continued beyond primary education. Participants had been married an average of 4.4 years, and few remained nulliparous (9%). A small minority (7%) reported ever having used a computer. Most (82%) participants returned for the final visit; those retained in the study did not differ from those lost to follow-up on any demographic characteristic described here (data not shown).

Table 2
Participant Demographics

Comparison of Responses Between Interview Modes: Hypotheses 1 and 2

Reports of having experienced domestic violence were lower in ACASI than in FTFI at both visits (12-month visit: 19.3% vs. 31.3%; 24-month visit: 22.6% vs. 29.8%; Table 3). Regression results indicated that this difference in domestic violence disclosure was statistically significant (risk ratio [RR] at 12 months = 0.61, 95% CI = 0.52, 0.73; RR at 24 months = 0.74, 95% CI = 0.62, 0.89). The differential in affirmative responses was generally more pronounced for the less sensitive questions, and, as with reports of domestic violence, participants provided fewer affirmative responses in ACASI than in FTFI. For example, women were significantly less likely to report participating in a household financial decision in ACASI (RR at 12 months = 0.55, 95% CI = 0.49, 0.61; RR at 24 months = 0.75, 95% CI = 0.67, 0.82).

Table 3
Comparison of ACASI and FTFI: Results of GEE Regression and Kappa Scores

Agreement and Kappa Scores: Hypotheses 1 and 2

At the 12-month visit, response agreement between interview modes was 78.7% for reports of domestic violence and ranged between 47.0% and 61.7% for the less sensitive questions (Table 3). The kappa scores for domestic violence were highest but remained below 0.50 at both visits. The kappa scores for the three less sensitive questions were much lower at both visits (κ = 0.02-0.34).

Agreement and Kappa Scores: Hypothesis 3

All measures showed some improvement in agreement at the 24-month visit, though only the improvement for health decision making was significant at the p = .05 level (Table 3).

Kappa Score Differences: Hypothesis 4

Stratification of kappa scores by educational attainment and age revealed some differences (Table 4). Participants with 7 or more years of education showed higher response agreement than those with less education for nearly all measures at each visit. For reports of domestic violence at the 12-month visit, the kappa score was significantly higher among women with more education than among women with less education (κ = 0.64 and κ = 0.32, respectively), though this difference diminished at the 24-month visit (κ = 0.52 and κ = 0.44). Age was also positively associated with agreement; for all measures, women aged 23 to 25 years had higher kappa scores than women aged 18 to 22, though none of these differences were significant at the p = .05 level.

Table 4
Kappa Scores by Study Visit, and Stratified by Educational Attainment and Age

Missing Data

Participants answered all questions in the FTFI. All ACASI measures had some missing data, with a maximum of 1.6% of data missing for spousal communication.

ACASI Acceptability

In the feedback interview completed at the 12-month visit, almost two thirds of participants (63.4%) said they were more comfortable using ACASI, and nearly all (98.1%) reported having had enough time to practice (Table 5). Similar proportions reported being more honest in ACASI and in FTFI (43.8% and 44.8%, respectively), with the balance saying they were equally honest. At this visit, the 166 women who reported domestic violence in either interview mode were significantly more likely to state being more honest in the FTFI than the 297 women who did not report domestic violence (53.6% vs. 40.1%), and were less likely to state being more honest in ACASI (33.1% vs. 49.8%, Pearson χ2[2df] = 12.2, p < .01). Of the 98 women with discrepant responses, those who reported violence in the FTFI only (n = 77) were somewhat more likely to report being more honest in the FTFI than those (n = 21) who reported violence in ACASI only, although this difference was not statistically significant (62.3% vs. 52.4%, Pearson χ2[2df] = 1.0, p = .61; data not shown).

Table 5
12-Month Visit Feedback Interview

Discussion

In contrast to much of the literature on ACASI, we found greater reporting of personal experiences and behaviors in the FTFI, regardless of question sensitivity. Whether this represents overreporting in the FTFI or underreporting in ACASI remains unclear because we were unable to validate self-reports. Interestingly, roughly equal proportions of participants noted that they were more honest in the FTFI and in ACASI, though women who reported domestic violence in either mode were more likely to state they were more honest in the FTFI. The low kappa scores provide further evidence that the two modes were not equivalent in our study and suggest that analytic results may vary considerably depending on which interview mode is used.

Nonetheless, variations in response agreement by age and educational attainment, which were not correlated in this sample, provided additional evidence of the mixed performance of these two interview modes. Older women, and those with greater educational attainment, had higher response agreement. Although these characteristics may enable more consistent reporting, it is important to recognize that consistency does not necessarily indicate accuracy.

Differences in response agreement by demographic characteristics have been examined in three other studies: Whereas one did not find any associations (Palen et al., 2008), two others identified problems with reliability among those with less education (Bernabe-Ortiz et al., 2008; van de Wijgert, Padian, Shiboski, & Turner, 2000). In addition, two other ACASI studies from India point to participants’ lack of experience with technology as negatively affecting agreement (Jaya et al., 2008; NIMH Collaborative HIV/STD Prevention Trial Group, 2007). It is likely that education and technological experience are related to each other, and in fact, in this sample, participants who had used a computer had significantly more years of education than those who had never used one (7.5 vs. 5.4 years, Wilcoxon-Mann-Whitney test p < .01). Therefore, in our sample, education may serve as a proxy for computer familiarity. This familiarity—still minimal in this population—may improve disclosure in ACASI and accordingly increase response agreement with FTFI. The consistent differential in agreement levels by age and educational attainment—though largely nonsignificant—is a concern and highlights a need for further studies on the process by which study participants disclose information in various interview contexts. The key to improved reporting in ACASI may lie in a longer rollout of computer use that builds familiarity with the technology.

It is clear from these data that FTFI and ACASI do not have equivalent performance characteristics. Though participants’ responses to domestic violence and the less sensitive questions bear a similar pattern of preferential reporting in FTFI, it is unlikely that inherent characteristics of the interview modes alone explain this pattern. The interplay of two sources of bias—social desirability and perceived therapeutic benefit of a response—may partially explain participants’ responses in each mode.

The high affirmative response on the less sensitive questions in FTFI could be evidence that these questions are prone to social desirability bias. To respond negatively to an interviewer may cause embarrassment, or the participant may respond on the basis of normative, idealized expectations of household decision making and spousal communication, leading them to not fully reflect on their own experience when faced with a question. In contrast, there is no emotional penalty for hesitating to respond on ACASI or in acknowledging limited involvement in these socially normative behaviors, leading to more accurate responses in ACASI.

The higher affirmative response to the question on domestic violence in the FTFI is unlikely to be a consequence of social desirability. In this case, greater disclosure may have been a function of perceived therapeutic benefit combined with a lack of experience with technology. Women in this study had consented to participate in research on gender-based power, with particular emphasis on marital relationships. As such, increased comfort in verbally acknowledging conflicts in marriage in the context of an FTFI may have resulted from recognition of potential personal benefits of disclosure (Koziol-McLain, Giddings, Rameka, & Fyfe, 2008; Mitchell et al., 2007; Newman et al., 2002; Spangaro, Zwi, & Poulos, 2009; Williams et al., 2000). Participants may have been more likely to disclose domestic violence to an empathic, nonjudgmental interviewer, especially when there was the prospect of referrals to community organizations offering support services to women experiencing domestic violence and to the study’s physician and counselors. This may also explain why these women were more likely to state they were more honest in the FTFI. In addition, for the vast majority of women who were using a computer for the first time, it may not have been as clear how ACASI data would be stored, analyzed, and interpreted in comparison with information disclosed in FTFI. Indeed, other studies on domestic violence have shown that ACASI may not be the optimal mode of data collection (Renker & Tonkin, 2007) or necessarily superior to FTFI (Kim et al., 2008; MacMillan et al., 2006).

Nonetheless, results from the feedback interview show that ACASI was very well accepted. It is promising that a majority were more comfortable in ACASI than FTFI and would recommend it to a friend. Still, research is needed on enhancing the interactive computer experience (Reeves & Nass, 1996) for populations in low-income settings to further improve the reliability and validity of ACASI.

Limitations

The main limitation of this study, as is the case for most studies designed to evaluate ACASI, is the lack of a gold standard data collection method. Thus, we were unable to validate the accuracy of reports obtained through either mode. An additional limitation is the lack of randomization of the order of ACASI and FTFI administration within a visit; however, evidence from other studies of interview mode effects on self-reports suggests that mode order does not have a substantive influence on response patterns (Jaya et al., 2008; Kissinger et al., 1999), as we found here. This evaluation of ACASI was conducted in the context of an ongoing study, and participants had already completed a baseline FTFI. Thus, familiarity with study interviewers may have increased participants’ willingness to disclose personal information in FTFI. However, no significant increases in disclosure were observed in FTFI over time for any of the questions (baseline data not shown). Because nearly all ACASI questions had yes/no response sets beginning with “No,” primacy effects could have biased participants toward this response out of indifference or fatigue. Future investigators should assess response quality by repeating questions with reverse-scored response sets for internal validation and by querying participants about their reporting patterns in each mode via in-depth poststudy qualitative interviewing. Finally, the generalizability of our findings is limited by the fact that data were collected from a convenience sample of Tamil-fluent women in two urban low-income communities in Bangalore, India (Rocca et al., 2008).

Conclusion

This study underscores the challenge of accurate measurement in epidemiological research in a low-income country setting. The results are striking primarily for the discrepancies between the two modes that persisted over time and for the significantly higher reporting of both domestic violence and the less sensitive behaviors in FTFI. Thus, ACASI may not yield greater accuracy in self-reports of domestic violence, particularly among women with limited exposure to computers. Further measurement research in this population could investigate how social desirability bias and perceived therapeutic benefit may influence reporting across different interview modes to inform research design. In the interim, well-trained interviewers may be best positioned to gather information on domestic violence through FTFIs.

Acknowledgments

The authors are grateful to the Samata Health Study research team, Rafiq Khan for data quality control, Mridula Shankar for query resolution, and Alan Hubbard for statistical consultation. The authors give special thanks to the women who participated in this study, for their willingness to share their life experiences with the authors.

Funding: The author(s) disclosed that they received the following support for their research and/or authorship of this article: the National Institute of Child Health and Human Development (R01 HD041731 to SK).

Biographies

Sujit D. Rathod was the data manager for the Samata Health Study from 2007-2009. He received an MSc in control of infectious disease from the London School of Hygiene and Tropical Medicine and is now a PhD student in the Division of Epidemiology at the University of California, Berkeley.

Alexandra M. Minnis is an epidemiologist at the Women’s Global Health Imperative at RTI International, San Francisco and is assistant adjunct professor in the epidemiology division at the School of Public Health at the University of California, Berkeley. She received her PhD in epidemiology and her MPH from the University of California, Berkeley. Her research addresses the prevention of HIV, sexually transmitted infections (STIs), and unintended pregnancy, both in the United States and internationally. She has investigated methodological issues in conducting HIV prevention research, with a particular focus on female-controlled prevention technologies, and examined structural factors that lead to reproductive health disparities among ethnic minority and marginalized populations in the United States and Mexico. Her recent work addresses the influence of migration on reproductive health among Latino adolescents in California, for which she received a career development award from the NICHD at the National Institutes of Health.

Kalyani Subbiah is the project director of the Samata Health Study in Bangalore, India and has been associated with the study since early 2004. She holds a master’s degree in public communication from Syracuse University, New York. Trained in health communication and research, she has several years of experience in collaborative research related to gender, substance abuse, sexual behavior, and reproductive health.

Suneeta Krishnan is a social epidemiologist at the Women’s Global Health Imperative, RTI International and is the principal investigator of the Samata Health Study. She also holds adjunct appointments at the University of California, Berkeley and the St. John’s Research Institute, Bangalore, India. Her research aims to uncover the pathways through which social inequities lead to adverse women’s health outcomes and health disparities and to develop and test interventions to promote health and social equity. She conducts mixed methods, community-based, collaborative research in India and Tanzania on intersections between gender, sexuality, and risk of domestic violence, HIV/AIDS, and unintended pregnancy. She also engages in research and training on ethics in public health research. She is a recipient of the 2004 U.S. Presidential Early Career Award for Scientists and Engineers.

Footnotes

A preliminary version of this paper was presented at the Annual Meetings of the Society for Epidemiological Research in Anaheim, CA, June 23-26, 2009.

Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the authorship and/or publication of this article.

References

  • Barua A, Kurz K. Reproductive health-seeking by married adolescent girls in Maharashtra, India. Reproductive Health Matters. 2001;9:53–62.
  • Bernabe-Ortiz A, Curioso WH, Gonzales MA, Evangelista W, Castagnetto JM, Carcamo CP, et al. Handheld computers for self-administered sensitive data collection: A comparative study in Peru. BMC Medical Informatics and Decision Making. 2008;8:7.
  • Edwards SL, Slattery ML, Murtaugh MA, Edwards RL, Bryner J, Pearson M, et al. Development and use of touch-screen audio computer-assisted self-interviewing in a study of American Indians. American Journal of Epidemiology. 2007;165:1336–1342.
  • Ellsberg M, Heise L, Pena R, Agurto S, Winkvist A. Researching domestic violence against women: Methodological and ethical considerations. Studies in Family Planning. 2001;32(1):1–16.
  • George A. Embodying identity through heterosexual sexuality: Newly married adolescent women in India. Culture, Health & Sexuality. 2002;4:207–222.
  • Ghanem KG, Hutton HE, Zenilman JM, Zimba R, Erbelding EJ. Audio computer assisted self interview and face to face interview modes in assessing response bias among STD clinic patients. Sexually Transmitted Infections. 2005;81:421–425.
  • Hewett PC, Mensch BS, Ribeiro MCSdA, Jones HE, Lippman SA, Montgomery MR, et al. Using sexually transmitted infection biomarkers to validate reporting of sexual behavior within a randomized, experimental evaluation of interviewing methods. American Journal of Epidemiology. 2008;168:202–211.
  • Jaspan HB, Flisher AJ, Myer L, Mathews C, Seebregts C, Berwick JR, et al. Brief report: Methods for collecting sexual behaviour information from South African adolescents—A comparison of paper versus personal digital assistant questionnaires. Journal of Adolescence. 2007;30:353–359.
  • Jaya, Hindin MJ, Ahmed S. Differences in young people’s reports of sexual behaviors according to interview methodology: A randomized trial in India. American Journal of Public Health. 2008;98:169–174.
  • Jejeebhoy SJ. Adolescent sexual and reproductive behavior: A review of the evidence from India. Social Science and Medicine. 1998;46:1275–1290.
  • Jennings TE, Lucenko BA, Malow RM, Devieux JG. Audio-CASI vs. interview method of administration of an HIV/STD risk of exposure screening instrument for teenagers. International Journal of STD & AIDS. 2002;13:781–784.
  • Johnson AM, Copas AJ, Erens B, Mandalia S, Fenton K, Korovessis C, et al. Effect of computer-assisted self-interviews on reporting of sexual HIV risk behaviours in a general population sample: A methodological experiment. AIDS. 2001;15(1):111.
  • Kim J, Dubowitz H, Hudson-Martin E, Lane W. Comparison of 3 data collection methods for gathering sensitive and less sensitive information. Ambulatory Pediatrics. 2008;8:255–260.
  • Kissinger P, Rice J, Farley T, Trim S, Jewitt K, Margavio V, et al. Application of computer-assisted interviews to sexual behavior research. American Journal of Epidemiology. 1999;149:950.
  • Koziol-McLain J, Giddings L, Rameka M, Fyfe E. Intimate partner violence screening and brief intervention: Experiences of women in two New Zealand health care settings. Journal of Midwifery & Women’s Health. 2008;53:504–510.
  • Krishnan S, Iyengar S, Pande RP, Subbiah K, Roca E, Anuradha R, et al. Marriage and motherhood: Influences on women’s power in sexual relationships; Paper presented at the Population Association of America, 2005 Annual Meeting; 2005. Retrieved September 10, 2010, from http://paa2005.princeton.edu/download.aspx?submissionId=50605.
  • Krishnan S, Rocca C, Hubbard A, Subbiah K, Edmeades J, Padian N. Do changes in spousal employment status lead to domestic violence? Insights from a prospective study in Bangalore, India. Social Science & Medicine. 2009;70:136–143.
  • Kurth AE, Martin DP, Golden MR, Weiss NS, Heagerty PJ, Spielberg F, et al. A comparison between audio computer-assisted self-interviews and clinician interviews for obtaining the sexual history. Sexually Transmitted Diseases. 2004;31:719.
  • Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159–174.
  • Le LC, Blum RW, Magnani R, Hewett PC, Do HM. A pilot of audio computer-assisted self-interview for youth reproductive health research in Vietnam. Journal of Adolescent Health. 2006;38:740–747.
  • MacMillan HL, Wathen CN, Jamieson E, Boyle M, McNutt LA, Worster A, et al. Approaches to screening for intimate partner violence in health care settings: A randomized trial. JAMA. 2006;296:530.
  • Mensch BS, Hewett PC, Erulkar AS. The reporting of sensitive behavior by adolescents: A methodological experiment in Kenya. Demography. 2003;40:247–268.
  • Mensch BS, Hewett PC, Gregory R, Helleringer S. Sexual behavior and STI/HIV status among adolescents in rural Malawi: An evaluation of the effect of interview mode on reporting. Studies in Family Planning. 2008;39:321–334.
  • Metzger DS, Koblin B, Turner C, Navaline H, Valenti F, Holte S, et al. Randomized controlled trial of audio computer-assisted self-interviewing: Utility and acceptability in longitudinal studies. American Journal of Epidemiology. 2000;152(2):99–106.
  • Mitchell K, Wellings K, Elam G, Erens B, Fenton K, Johnson A. How can we facilitate reliable reporting in surveys of sexual behaviour? Evidence from qualitative research. Culture, Health & Sexuality. 2007;9:519–531.
  • Newman JC, Jarlais D, Turner CF, Gribble J, Cooley P, Paone D. The differential effects of face-to-face and computer interview modes. American Journal of Public Health. 2002;92:294–297.
  • NIMH Collaborative HIV/STD Prevention Trial Group. The feasibility of audio computer-assisted self-interviewing in international settings. AIDS. 2007;21(Suppl. 2):S49–S58.
  • NIMH Multisite HIV/STD Prevention Group. Designing an audio computer-assisted self-interview (ACASI) system in a multisite trial: A brief report. Journal of Acquired Immune Deficiency Syndromes. 2008;49:S52.
  • Palen L, Smith EA, Caldwell LL, Flisher AJ, Wegener L, Vergnani T. Inconsistent reports of sexual intercourse among South African high school students. Journal of Adolescent Health. 2008;42:221–227.
  • Potdar R, Koenig MA. Does audio-CASI improve reports of risky behavior? Evidence from a randomized field trial among young urban men in India. Studies in Family Planning. 2005;36(2):107–116.
  • Reeves B, Nass C. The media equation: How people treat computers, television, and new media like real people and places. New York: Cambridge University Press; 1996.
  • Renker PR, Tonkin P. Postpartum women’s evaluations of an audio/video computer-assisted perinatal violence screen. CIN—Computers Informatics Nursing. 2007;25(3):139–147.
  • Rhodes KV, Drum M, Anliker E, Frankel RM, Howes DS, Levinson W. Lowering the threshold for discussions of domestic violence: A randomized controlled trial of computer screening. Archives of Internal Medicine. 2006;166:1107–1114.
  • Rhodes KV, Lauderdale DS, He T, Howes DS, Levinson W. “Between me and the computer”: Increased detection of intimate partner violence using a computer questionnaire. Annals of Emergency Medicine. 2002;40:476–484.
  • Rocca CH, Rathod S, Falle T, Pande RP, Krishnan S. Challenging assumptions about women’s empowerment: Social and economic resources and domestic violence among young married women in urban South India. International Journal of Epidemiology. 2008;38:577–585.
  • Santhya KG, Haberland N, Ram F, Sinha RK, Mohanty SK. Consent and coercion: Examining unwanted sex among married young women in India. International Family Planning Perspectives. 2007;33(3):124–132.
  • Schroder KEE, Carey MP, Vanable PA. Methodological challenges in research on sexual risk behavior: II. Accuracy of self-reports. Annals of Behavioral Medicine. 2003;26(2):104–123.
  • Simoes AA, Bastos FI, Moreira RI, Lynch KG, Metzger DS. A randomized trial of audio computer and in-person interview to assess HIV risk among drug and alcohol users in Rio De Janeiro, Brazil. Journal of Substance Abuse Treatment. 2006;30:237–243.
  • Spangaro J, Zwi A, Poulos R. The elusive search for definitive evidence on routine screening for intimate partner violence. Trauma, Violence & Abuse. 2009;10(1):55.
  • Tourangeau R, Smith TW. Asking sensitive questions—The impact of data collection mode, question format, and question context. Public Opinion Quarterly. 1996;60:275–304.
  • Turner CF, Ku L, Rogers SM, Lindberg LD, Pleck JH, Sonenstein FL. Adolescent sexual behavior, drug use, and violence: Increased reporting with computer survey technology. Science. 1998;280:867–873.
  • van de Wijgert J, Padian N, Shiboski S, Turner C. Is audio computer-assisted self-interviewing a feasible method of surveying in Zimbabwe? International Journal of Epidemiology. 2000;29:885–890.
  • van Griensven F, Naorat S, Kilmarx PH, Jeeyapant S, Manopaiboon C, Chaikummao S, et al. Palmtop-assisted self-interviewing for the collection of sensitive behavioral data: Randomized trial with drug use urine testing. American Journal of Epidemiology. 2006;163:271–278.
  • Williams ML, Freeman RC, Bowen AM, Zhao Z, Elwood WN, Gordon C, et al. A comparison of the reliability of self-reported drug use and sexual behaviors using computer-assisted versus face-to-face interviewing. AIDS Education and Prevention. 2000;12:199–213.