J Empir Res Hum Res Ethics. Author manuscript; available in PMC Aug 22, 2011.
Published in final edited form as:
J Empir Res Hum Res Ethics. Sep 2010; 5(3): 1–8. doi: 10.1525/jer.2010.5.3.1
PMCID: PMC3159151. NIHMSID: NIHMS312839

Communicating Disclosure Risk in Informed Consent Statements

Abstract

For several years, we have experimented with various ways of communicating disclosure risk and harm to respondents in order to determine how these affect their willingness to participate in surveys. These experiments, which used vignettes administered to an online panel as well as a mail survey sent to a national probability sample, have demonstrated that (a) the probability of disclosure alone has no apparent effect on people's willingness to participate in the survey described, (b) the sensitivity of the survey topic has such an effect, and (c) making explicit the possible harms that might result from disclosure also reduces willingness to participate, in both the vignette and the mail experiments. As a last study in this series, we experimented with different ways of describing disclosure risk in informed consent statements that might more plausibly be used in real surveys, again using vignettes administered to an online panel. As suggested by our earlier work, we found that the precise wording of the confidentiality assurance had little effect on respondents' stated willingness to participate in the hypothetical survey described. However, the experimental manipulations did have some effect on perceptions of the risks and benefits of participation, suggesting that they are processed by respondents. And, as we have found in our previous studies, the topic of the survey has a consistent and statistically significant effect on stated willingness to participate. We explore some implications of these findings for researchers seeking to provide adequate information to potential survey respondents without alarming them unnecessarily.

Keywords: risk, disclosure, survey research, communicating risk, informed consent, confidentiality, confidentiality assurance

For several years, we have been investigating how various ways of communicating disclosure risk and harm to respondents affect their willingness to participate in surveys. (By disclosure risk, we mean the likelihood, or probability, that someone other than the researcher would be able to link their name with their answers.) These experiments, which used vignettes administered to an online panel as well as an actual mail survey sent to a national probability sample, have demonstrated that, under circumstances resembling those of an actual survey, (a) telling respondents about the probability of disclosure has no apparent effect on their willingness to participate in the survey described, (b) the sensitivity of the survey topic does have such an effect, and (c) making explicit some possible harms that might result from disclosure also reduces willingness to participate, in both the vignette and the mail experiments (see Couper et al., 2008, 2010).

These results suggest that there is no practical reason to inform respondents about the exact likelihood of their answers being disclosed, especially since calculating this probability is complex and likely to vary from one data element to another. As a last study in this series, therefore, we decided to experiment with some alternative ways of describing disclosure risk in informed consent statements that might more plausibly be used in real surveys.

Method

Study Design

The study used a 4 (topic) × 6 (confidentiality assurance) design to create 24 vignettes. As in our earlier studies, two of the topics were sensitive (sex, money) and two were not (work, leisure time). The confidentiality statement assured confidentiality “except as required by law” (“The information you provide is confidential except as required by law”), assured it “to the fullest extent of the law” (“The information you provide is confidential to the fullest extent of the law”), or gave an estimated probability of disclosure (“The information you provide is confidential. Based on experience, we think the chance that someone will connect your name with your answers is less than one in a million”). The first two assurances are recommended for use at the University of Michigan Survey Research Center and are regarded as roughly equivalent; however, we speculated that the second might convey a stronger assurance than the first. The third statement represented the lowest probability of disclosure used in our earlier studies. All three are qualified, rather than absolute, assurances of confidentiality. Half the vignettes additionally contained the following statement: “In our experience at the Survey Research Center, no one, to the best of our knowledge, has ever been harmed through a breach of confidentiality.” We reasoned that mention of this past experience might reassure respondents about the care the survey organization takes to protect the confidentiality of their answers. The confidentiality assurance factor can thus be viewed as a 3 (confidentiality statement) × 2 (mention of experience) design. Mode (face-to-face), sponsor (National Institutes of Health), length (20 minutes), and incentive ($10) were kept constant across the 24 vignettes.
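The fully crossed factorial structure can be made concrete with a short sketch; the factor labels below are our own shorthand for the conditions described above, not the study's actual materials:

```python
from itertools import product

# Factor levels as described in the text; the short labels are illustrative shorthand.
topics = ["sex", "money", "work", "leisure time"]      # 2 sensitive, 2 not
assurances = ["except as required by law",
              "to the fullest extent of the law",
              "less than one in a million chance"]     # 3 confidentiality statements
experience_mention = [True, False]                     # SRC experience statement present?

# 4 topics x 3 assurances x 2 experience mentions = 24 vignette conditions
vignettes = list(product(topics, assurances, experience_mention))
print(len(vignettes))
```

Each respondent is then randomly assigned one element of `vignettes`, which is what makes the 3 × 2 confidentiality factor separable from topic in the analysis.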

The following is an illustrative vignette:

Imagine that a professional survey interviewer visits your home and says the following:

“My name is Mary Jones and I work for the Survey Research Center. We would like you to take part in a survey on sexual behavior and sexually transmitted diseases, sponsored by the National Institutes of Health. The information you provide will help shape government policy on sexually transmitted diseases.

The information you provide is confidential except as required by law. In our experience at the Survey Research Center, no one, to the best of our knowledge, has ever been harmed through a breach of confidentiality.

The interview will take 20 minutes, and you will receive $10 as a token of the researcher's appreciation.”

Respondents were randomly assigned to one of the 24 vignettes. The vignette was followed by the following question: “On a scale from 0 to 10, where 0 means you would definitely not take part and 10 means you would definitely take part, how likely is it that you would take part in this survey?”

Hypotheses

On the basis of our earlier studies, we expected little to no effect from the confidentiality statement variations, although we speculated that “to the fullest extent of the law” might prove more reassuring than “except as required by law.” We did, however, expect that topic would have its usual effects on willingness to participate.

Sample and Administration

The survey was administered to over 9,200 members of an online panel by Market Strategies Inc. (MSI). The sample was drawn from Survey Sampling International's (SSI) Survey Spot Panel. Survey Spot is an opt-in Web panel of over three million persons who have signed up online to receive survey invitations. The standard SSI sweepstakes incentive was used. Respondents were invited by e-mail (the invitation text is reproduced below, under “Study on Survey Participation”). Each respondent was assigned a unique PIN to access the survey; the PIN was embedded in a clickable link within the invitation and reminder messages.
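The unique-PIN link mechanism can be sketched as follows; the URL and PIN format are our assumptions for illustration, as the article does not describe MSI's actual implementation:

```python
import secrets

BASE_URL = "https://survey.example.com/start"  # hypothetical; the actual survey URL is not given


def invitation_link():
    """Generate a unique respondent PIN and embed it in a clickable survey link."""
    pin = secrets.token_urlsafe(8)  # one unguessable PIN per invited panelist
    return pin, f"{BASE_URL}?pin={pin}"


pin, link = invitation_link()
print(link)
```

Embedding the PIN in the link lets the survey system tie each completed questionnaire back to a specific invitation without asking the respondent to type a code.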

The study was conducted by MSI and was fielded in three groups. In the first group, from November 11, 2009 until November 20, 2009, a total of 35,448 e-mail invitations were sent, resulting in just 236 completed questionnaires, for a completion rate (see Callegaro and DiSogra, 2008) of 0.67%. In the second group, done at the same time as the first, SSI recruited respondents who were willing to participate in other surveys but had failed to qualify for them. They were redirected to our survey. This process produced a total of 6,965 additional respondents. The denominator (the number of redirected sample persons) for this group is unknown. Overall, these first two groups generated 7,201 completed questionnaires. A total of 9,921 eligible respondents began the survey, but 2,720 dropped out before completing it.

Because we were concerned that the second method of recruiting respondents might have resulted in an especially cooperative sample that would bias the results of our experiment, we asked SSI to recruit an additional sample of 2,000 respondents in the usual manner (i.e., a direct invitation, without redirection from other surveys). This supplementary third group was fielded from December 23, 2009 through December 24, 2009. SSI e-mailed 120,000 panelists and generated 2,005 completed questionnaires, for a completion rate of 1.67%. A total of 5,154 people started the survey. Of these, 574 dropped out and 2,575 were terminated or redirected to a different survey because we had reached the target of completed questionnaires. Together the three groups yielded 9,206 completed questionnaires.
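The completion rates reported above follow the straightforward completes-per-invitation calculation of Callegaro and DiSogra (2008); a quick check of the arithmetic:

```python
def completion_rate(completes, invited):
    """Completed questionnaires as a percentage of e-mail invitations sent."""
    return 100 * completes / invited


print(round(completion_rate(236, 35_448), 2))     # group 1 → 0.67
print(round(completion_rate(2_005, 120_000), 2))  # group 3 → 1.67
print(236 + 6_965 + 2_005)                        # total completes → 9206
```

No completion rate can be computed for the second (redirected) group because its denominator, the number of redirected sample persons, is unknown.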

Comparison of the samples from the three groups suggested that those recruited in the second group (sample persons redirected to our survey) had higher overall levels of willingness to participate in the hypothetical survey described in the vignette. However, there were no significant interactions with the experimental manipulations, indicating no differential susceptibility to the experimental variables. Accordingly, we combined the data from the three groups for the analyses that follow.

This set of respondents is not a probability sample of the general population, nor of the Internet-enabled population. It is a large and diverse group of volunteers. As we have noted before, our focus, to use Kish's (1987) terms, is on randomization rather than representation. We should caution against inferences beyond this set of subjects. However, our earlier experiments, conducted using a similar set of opt-in panelists from SSI, have been replicated in a mail survey of a random sample of the general population using actual rather than hypothetical experimental conditions. Thus, although we use hypothetical vignettes and an opt-in panel of volunteers, we believe that the relative effects of the experimental manipulations would hold under real-world conditions.

Of those who completed the survey, 56% were women and 44% were men; 85% were White; 5% were Hispanic; 15% were under 35, 61% were between ages 36 and 65, and 24% were over 65; 25% had a high school education or less, 40% had at least some college, and 35% were college graduates or more. The sample consists of more women than the 2009 U.S. adult population (51.5% women) and fewer Hispanics (15.8% Hispanic in the population). The sample also has more people in the 36–65 age range than the population (52%) and is better educated than the population (45% with high school education or less). Nonetheless, it is a diverse group of participants.

Questionnaire

As already noted, respondents were exposed to only one vignette. Immediately following the vignette, they were asked how likely they would be to participate (WTP) in the survey described, and why (or why not) they were likely to participate. The question was worded as follows: “On a scale from 0 to 10, where 0 means you would definitely not take part and 10 means you would definitely take part, how likely is it that you would take part in this survey?”

Following the questions about participation willingness and reasons for the decision, respondents were asked about their perceptions of the risk, benefit (to self and society), and risk-benefit ratio of participating in the survey described. These measures are based on Singer (2010) and are a reduced set of those used in Couper et al. (2008). Perceived risk was measured with a single item: “On a scale from 0 to 10, where 0 means not at all likely and 10 means very likely, how likely do you think it is that someone other than the researcher would find out your name and address, along with your answer to the survey questions?” Personal benefit was measured using the following yes/no question: “Do you think you, yourself, would get anything good out of the survey?” Societal benefit was measured using the following item: “On a scale from 0 to 10, where 0 means not at all useful and 10 means very useful, how useful do you think the information from the survey described above will be?” Risk-benefit ratio was ascertained with the single question, “Taking it all together, do you think the risks of this research outweigh the benefits, or do you think the benefits outweigh the risks?” with two response options (risks outweigh benefits, or benefits outweigh risks). In earlier research some of these concepts, such as perceived usefulness, had been measured using several items, but for the sake of brevity we limited ourselves to single items here.

Respondents were also asked a series of questions about their general attitudes toward privacy and surveys that we had asked in our earlier studies. These are the same multi-item measures used in Couper et al. (2008), but because they did not add to the variance explained they are not included in the final models in the present paper. The full questionnaire is available from the authors.

Analysis and Results

The first step in the analysis was to test for possible interactions among the experimental manipulations. Based on our earlier work, we did not expect to find significant interactions. A fully saturated model regressing willingness to participate (WTP) on the three experimental factors (topic, confidentiality assurance, experience statement) did not yield any significant interactions (p's all > 0.22). Given this, we then moved to an examination of the main effects of the confidentiality assurance and of the experience statement, with topic and demographics as controls. This model is presented as Model 1 in Table 1.

TABLE 1
The Effect of Key Experimental Manipulations on Willingness to Participate (OLS Regression).

Topic has a significant main effect on stated willingness to participate. The more sensitive topics (sex and money) have lower levels of WTP than the less sensitive topics (work and leisure), as expected. This is consistent with our previous findings, from both the vignette-based studies and from the general population mail survey. The strong effect of survey topic on WTP serves to reassure us that respondents are indeed paying attention to at least some of the content of the vignettes. Of more interest to us is that variations in the confidentiality assurance have no significant effect on WTP (F [2, 8866] = 1.93, p = .145), nor does the addition of the statement that in the survey organization's experience no one has ever been harmed through a breach of confidentiality (F [1, 8866] = 0.29, p = .59). Although some of the demographic controls (age, income, Hispanic origin, and race) are significantly associated with WTP, the overall model accounts for very little of the variation in WTP (adjusted R2 = 0.034).
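The adjusted R² reported here corrects the raw R² for the number of predictors in the model; with roughly 8,880 cases the correction is negligible. A minimal sketch with hypothetical inputs (the raw R² and exact predictor count for Model 1 are not reported):

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2 for n observations and p predictors (excluding the intercept)."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)


# Hypothetical inputs: with n ~ 8,880, the adjustment barely moves R^2.
print(round(adjusted_r2(0.035, 8_880, 12), 3))  # → 0.034
```

The point of the sketch is simply that, at this sample size, the low adjusted R² reflects genuinely weak explanatory power rather than a penalty for model complexity.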

The second model in Table 1 (Model 2) adds a set of measures on the perceived risk, benefit, and risk-benefit ratio of participation in the survey described in the vignette. These variables are all significantly related to stated willingness to participate, and including them in the model significantly increases the variance accounted for (adjusted R2 = 0.44). These findings are similar to those in Couper et al. (2008). Two additional variables—attitudes toward surveys and privacy concerns—were significantly associated with WTP, but did not add to the variance explained by the model in Table 1, so are not included in the final model.

Given the strong relationship of the perceptions of risks and benefits with WTP, we next explored whether these perceptions are related to the experimental manipulations. We caution against drawing any conclusions about causal effect, since the attitude measures follow the responses to the questions about WTP and reasons for WTP. Tables 2 and 3 show the four measures regressed on the experimental variables, with the same demographic variables used in Table 1 as controls (these coefficients are not shown).

TABLE 2
Perceptions of Risk and Societal Benefit Regressed on Experimental Manipulations and Demographic Controls (OLS Regression).
TABLE 3
Personal Benefit (Yes) and Risk-Benefit Ratio (Risks Outweigh Benefits) Regressed on Experimental Manipulations and Demographic Controls (Logit Coefficients).

The two models in Table 2, for perceived risk and societal benefit (both measured on 0–10 scales), are OLS regressions, while the two models in Table 3, for personal benefit and risk-benefit ratio (both measured as 0/1 indicators), are logistic regressions.

Looking at the regression coefficients for perceived risk and societal benefit in Table 2, we see that topic is significantly related to both dependent variables. Interestingly, the survey on sex appears to be associated with less perceived risk than the survey on work (p = .026), but also with less benefit for society (p < .0001). The confidentiality assurance has a significant association with perceived risk (F [2, 8863] = 24.84, p < .0001), with both the “except as required” and “fullest extent of the law” wording associated with greater perceived risk than the “less than one in a million” chance of disclosure. The mention of SRC experience is significantly associated with lower perceived risk. Neither the confidentiality assurance nor the mention of SRC experience, however, is associated with perceived benefit of the survey to society. Overall, the experimental variables, together with the demographic controls, account for very little of the variation in these two attitude measures.

Turning to the logistic regression coefficients for personal benefit and risk-benefit ratio in Table 3, we see that only topic is significantly associated with these measures. The survey on sex is viewed as least likely to be of personal benefit, while the survey on money is most likely to be so. For both the sex and money surveys, the risks are significantly more likely to be seen as outweighing the benefits than for the leisure and work surveys. Neither the confidentiality assurance nor the mention of SRC experience is associated with personal benefit or the risk-benefit ratio. Again, the experimental variables, together with the demographic controls, account for very little of the variation in these two measures.

In summary, then, the precise wording of the confidentiality assurance has little effect on respondents' stated willingness to participate in the hypothetical survey described in the vignette. Nor does adding a statement on the organization's history of assuring confidentiality appear to affect stated willingness. However, these experimental manipulations do have some effect on perceptions of the risks and benefits of participation, suggesting that they are processed by respondents. And, as we have found in our previous vignette studies—and replicated in a mail survey of the general population—the topic of the survey has a consistent and statistically significant effect on stated willingness to participate.

Discussion and Conclusions

Given our earlier findings concerning the lack of effect of variations in descriptions of objective disclosure risk on WTP, we were not really surprised to find no effect of the variations in confidentiality statements in the present study. As in our earlier experiments, the strongest effect on WTP is exerted by the survey topic, which we have concluded is a proxy for respondents' perceptions of the survey's possible harm. And, again as in our earlier experiments, perceptions of risk and benefit also are significantly related to WTP. Indeed, these perceptions account for more variation in WTP than objective indicators of risk do. However, we know very little about the determinants of these perceptions, especially perceptions of benefits. Responses to open-ended questions in a study by Porst and von Briel (1995), as well as several of our own studies (Couper et al., 2008, 2010), suggest that people respond to surveys for both altruistic and egoistic reasons, in addition to reasons related to characteristics of the survey. Future research should pursue the clues provided by these studies.

This study suffers from several limitations. First, it is based on a non-random sample of volunteer, cooperative respondents who, by definition, are already participating in a survey. We cannot generalize to the broader population, nor can we predict what effect (if any) these statements would have on actual participation in a real survey. Second, the survey descriptions we used in the vignettes are short. Our goal was to focus respondents' attention on the key elements of the request. These introductory statements may not meet OMB or IRB requirements. They may not match the elaborated introductions interviewers provide in face-to-face or telephone surveys, and they do not match the lengthier cover letters often used in mail surveys. Thus, we cannot necessarily generalize our findings to longer, more detailed survey introductions—especially those elaborated over several conversational turns in interviewer-administered surveys. Additional research is badly needed not only on the effects of different informed consent statements, but also on what respondents comprehend about risks and benefits from statements varying in content and in length.

A key strength of our study lies in the experimental nature of the manipulation. With randomization to different experimental conditions, we can isolate the effects, if any, of the confidentiality statement relative to other elements of the survey request. Furthermore, because our results are in line with a field experiment not involving vignettes and based on a random sample of the general population (Couper et al., 2008, 2010), we believe that our results have some measure of external validity.

Best Practices

There are several implications of our findings for practice:

  1. Estimates of actual disclosure risk are very difficult, if not impossible, to make. The probability of statistical disclosure would have to be estimated for each survey; and the probability of legal (compelled) disclosure is even more difficult to estimate, as is the likelihood of disclosure resulting from carelessness.
  2. Given our findings from this and earlier studies, we see no ethical reason for mandating the inclusion of such estimates in informed consent statements. They are very difficult to obtain, they may not be accurate, and they do not affect willingness to participate.
  3. Any informed consent statement that avoids an absolute assurance of confidentiality would seem to be acceptable on ethical grounds. However, mentioning legal constraints as the only possible exception is inaccurate and misleading, since disclosure may occur for other reasons as well.
  4. What is essential are the precautions actually taken by survey organizations to protect research data from disclosure. These practices will probably vary with certain survey characteristics: whether the survey is cross-sectional or longitudinal; how many variables are included in the data file; how sensitive the survey topics are; whether biomarkers are included with the file, etc. A primer of best practices in this area, designed to prevent statistical disclosure, compelled legal disclosure, and disclosure as a result of carelessness, is badly needed. Workshops, short courses, and the like would also be helpful.
  5. A description of precautions taken by the survey organization might be made available to potential respondents as an optional addition to the more general, shorter informed consent statement—for example, via a clickable link in a Web survey or by means of Frequently Asked Questions in a self-administered or face-to-face survey.

Research Agenda

Very little research on informed consent to surveys has focused on what respondents actually understand the risks, harms, and benefits of participation to be. Although most reputational, legal, and economic harms that might arise from participation assume a breach of confidentiality, research ethics committees often focus on other potential harms, such as emotional distress at being questioned about certain topics. Research is needed both about how well respondents understand informed consent statements and about how upset they would be by a variety of potential harms.

Research is also needed on why people participate in surveys. Cooperation with surveys has declined steadily over many decades, but attempts to explain this decline have been largely unsuccessful. Although many studies have demonstrated that concerns about privacy and confidentiality can reduce willingness to participate, assurances of confidentiality don't necessarily increase such willingness. As Singer (2010) noted, people “do not participate because disclosure risk has been reduced or because we have given them a credible confidentiality assurance”; they participate because they see some benefit, either for themselves or for society in general. Research should focus on understanding reasons for as well as against participation, and the cost-benefit decisions that sample persons make.

Educational Implications

As we have noted above, it is necessary to disseminate best practices for confidentiality protection across the research community. Because these practices are constantly changing, methods of dissemination must also be able to keep up with such changes. Alternatively, institutional arrangements such as data archives should be developed that would permit individual researchers to store their data in an environment that protects them against confidentiality breaches and disseminates them to other researchers in a form that does not permit identification of individual respondents or their answers.

Appendix: E-mail Invitation Text (“Study on Survey Participation”)

We appreciate your cooperation. This study will provide valuable information to researchers at the University of Michigan. This survey will take about 10 minutes. Your participation is voluntary and you may skip any questions you prefer not to answer. All of your responses will be kept completely confidential. There are no risks to taking part. We hope you enjoy it.

If you have any questions or experience difficulty with the survey, you may contact us via e-mail at participation@msisurvey.com or call toll free 1-866-674-3375. Should you have questions regarding your rights as a participant in research, please contact:

Institutional Review Board—Behavioral Sciences, 540 East Liberty Street, Suite 202, Ann Arbor, MI 48104-2210; 734-936-0933; irbhsbs@umich.edu

Click the “Next” button below to begin the survey.

Acknowledgments

We thank NICHD (Grant #P01 HD045753-01) for support. We thank Reg Baker for help with deploying the Web survey and John Van Hoewyk for indispensable help with analyses. The reviewers' comments on an earlier draft of this manuscript are greatly appreciated.

Biographies

Eleanor Singer is Research Professor Emerita at the Survey Research Center of the Institute for Social Research at the University of Michigan. Her research focuses on motivation for survey participation and has touched on many important ethical issues in the conduct of surveys, such as informed consent, incentives, and privacy and confidentiality, and she has published widely on these topics. She is a co-author of Survey Methodology (with Robert M. Groves and others) and a co-editor of Methods for Testing and Evaluating Survey Questionnaires (with Stanley Presser and others).

Mick P. Couper is Research Professor at the Survey Research Center of the Institute for Social Research at the University of Michigan, and at the Joint Program in Survey Methodology. His research focuses on issues relating to survey nonresponse and to the use of technology in survey data collection. He is the author of Designing Effective Web Surveys, co-author of Nonresponse in Household Interview Surveys (with Robert M. Groves), and Survey Methodology (with Robert M. Groves and others). Also, he is a co-editor of Computer-Assisted Survey Information Collection and Methods for Testing and Evaluating Survey Questionnaires.

References

  • Callegaro M, DiSogra C. Computing response metrics for online panels. Public Opinion Quarterly. 2008;72(5):1008–1032.
  • Couper MP, Singer E, Conrad FG, Groves RM. Risk of disclosure, perceptions of risk, and concerns about privacy and confidentiality as factors in survey participation. Journal of Official Statistics. 2008;24(2):255–275.
  • Couper MP, Singer E, Conrad FG, Groves RM. Experimental studies of disclosure risk, disclosure harm, topic sensitivity, and survey participation. Journal of Official Statistics. 2010;26(2):287–300.
  • Kish L. Statistical Design for Research. New York, NY: John Wiley; 1987.
  • Porst R, von Briel C. Wären Sie vielleicht bereit, sich gegebenenfalls noch einmal befragen zu lassen? Oder: Gründe für die Teilnahme an Panelbefragungen [Would you perhaps be willing to be interviewed again? Or: Reasons for participating in panel surveys]. ZUMA-Arbeitsbericht. 1995;95(4).
  • Singer E. Why people respond to surveys: A cost-benefit theory of survey participation. JPSM Distinguished Lecture; College Park, MD; March 12, 2010.