Health Serv Res. Jun 2007; 42(3 Pt 1): 1219–1234.
PMCID: PMC1955260

Mixing Web and Mail Methods in a Survey of Physicians

Abstract

Objective

To assess the effects of two different mixed-mode (mail and web survey) combinations on response rates, response times, and nonresponse bias in a sample of primary care and specialty internal medicine physicians.

Data Sources/Study Setting

Primary data were collected from 500 physicians with an appointment in the Mayo Clinic Department of Medicine (DOM) between February and March 2005.

Study Design

Physicians were randomly assigned to receive either an initial mailed survey evaluating the Electronic Medical Record (EMR) with a web survey follow-up to nonrespondents or its converse—an initial web survey followed by a mailed survey to nonrespondents. Response rates for each condition were calculated using a standard formula, and response times were determined as well. Nonresponse bias was measured by comparing selected characteristics of survey respondents to similar characteristics in the full sample frame. In addition, the distributions of results on key outcome variables were compared overall and by data collection condition and phase.

Principal Findings

Overall response rates were somewhat higher in the mail/web condition (70.5 percent) than in the web/mail condition (62.9 percent); differences were more pronounced before the mode switch, that is, prior to sending the follow-up survey to nonrespondents. Median response time was 2 days faster in the web/mail condition than in the mail/web condition (median=5 and 7 days, respectively), but there was evidence of under-representation of specialist physicians and of those who used the EMR half a day or less each day in the web/mail condition before introduction of the mailed component. This did not translate into significant inconsistencies or differences in the distributions of key outcome variables, however.

Conclusions

A methodology that uses an initial mailing of a self-administered form followed by a web survey to nonrespondents provides slightly higher response rates and a more representative sample than one that starts with a web survey and ends with a mailed survey. However, if the length of the data collection period is limited and rapid response is important, a web survey followed by a mailed questionnaire may be preferable. Key outcome variables appear to be unaffected by the data collection method.

Keywords: Response rate, response bias, mixed mode, methods, physician survey

Investigating the attitudes, beliefs, behaviors, and concerns of physicians via survey is vitally important given their role in the provision of health care and in shaping the health care system. However, response rates to surveys of physicians have been found to be about 10 percentage points lower than response rates to surveys of nonphysicians (Asch, Jedrziewski, and Christakis 1997). Moreover, as is the case with surveys of their patient and general population counterparts, there is evidence that response rates to physician surveys may be declining. In a review of response rates to mailed physician surveys from 1986 to 1995, Cummings, Savitz, and Konrad (2001) found rates to be rather constant. More recent investigations, however, point to a potential downward trend in response. Examining response rates to surveys of pediatricians between 1994 and 2002, Cull et al. (2005) observed that response rates to the 50 surveys they examined declined significantly during that timeframe, from an average response rate of 70 percent for surveys conducted in 1998 and earlier to an average rate of 63 percent since then. To improve (or at least maintain) response rates to physician surveys and ensure that the perspectives of physicians who respond are representative of all physicians invited to participate, it is incumbent on health survey researchers to use the best methods for achieving that goal. Such is the focus of the investigation described herein.

To increase response rates, household surveys often turn to mixed-mode designs whereby instruments are designed to be administered via mail, web, telephone, and/or in person, and respondents are allowed to respond via the mode most appropriate for them (de Leeuw 2005). The application of mixed-mode designs to physician surveys seems natural given that low response rates to single-mode physician surveys are common (about 52–54 percent for large surveys, on average; see Asch, Jedrziewski, and Christakis 1997; Cummings, Savitz, and Konrad 2001). Additionally, some have demonstrated that selecting survey techniques that work well for physicians with differing characteristics (such as specialty and metropolitan residence) is important (Moore and Tarnai 2002). A review of the relevant literature suggests that one particular mode combination, mail and web, might prove useful in increasing response rates to physician surveys.

First, physicians, unlike their general population and patient counterparts, generate response rates to mailed surveys that are equal to those produced by personal or telephone interviews (Kellerman and Herold 2001). This finding, coupled with the large cost savings associated with mailed physician surveys vis-à-vis interviews (Shosteck and Fairweather 1979), underscores the attractiveness of a mailed survey as a method of collecting survey data from physicians. Second, while web (e-mail) surveys have a number of advantages over mailed surveys, such as even lower cost, the ability to capture data in an electronic format, speed of response, and data quality (Schleyer and Forrest 2000; Braithwaite et al. 2003; Akl et al. 2005), response rates to web surveys can be lower than those of mailed surveys (McMahon et al. 2003; Losch, Thompson, and Lutz 2004; Akl et al. 2005). It is plausible that combining the two modes might allow the strengths of one to offset the limitations of the other.

Finally, there is recent evidence that combining web and mailed surveys may extend a survey's coverage to a broader mix of physicians because the profile of providers responding electronically can be somewhat different from that of those responding to a mail survey. Losch, Thompson, and Lutz (2004) found that when given a choice to respond to a survey about colorectal cancer screening by web or mail, primary care physicians in general internal medicine and male physicians were more likely to respond to the web version than their family practitioner, OB/GYN, and female counterparts. Again, this suggests that the web and mail mode combination might enhance the representativeness of one's responding sample by allowing different types of physicians to respond via their preferred data collection method.

The current investigation is a response to specific requests to test web and mail mixed-mode designs in the context of a physician survey (VanGeest and Johnson 2001; McMahon et al. 2003; Cull et al. 2005) by comparing two different mixed-mode designs representing two combinations of web and mail surveys. To better understand which ordering of web and mail mode is most effective in producing the highest response rate, fastest rate of return, and least nonresponse bias (an often overlooked measure of data quality), we conducted an experiment where physicians were randomly assigned to receive either an initial mailed survey with a web survey follow-up to nonrespondents, or its converse—an initial web survey followed by a mailed survey to nonrespondents.

METHODS

Study Participants

A total of 500 physicians with an appointment in the Mayo Clinic Department of Medicine (DOM) were randomly selected to take part in the confidential survey. A wide range of primary care physicians and specialists (allergists, cardiologists, gastroenterologists, hematologists, nephrologists, pulmonologists, and rheumatologists) from the 12 different divisions within the DOM were invited to participate. The Mayo Clinic Institutional Review Board approved the study.

Survey Questionnaire

The instrument utilized in this investigation was designed to elicit opinions of DOM members about the Mayo Clinic Electronic Medical Record (EMR). It should be noted here that, in addition to the reasons for conducting this research listed in the introduction, a practical issue relating to the content of the survey played a role in selecting it as a platform onto which the experiment was overlaid. It was understood that some physicians in the clinic either did not access their e-mail at all or had an assistant screen incoming e-mail and forward only those messages deemed important. As such, sole use of a web-based survey, which relies on the e-mail system for invitation and distribution, would have the potential of introducing a “nonignorable response bias” (Groves et al. 2004) because those not contacted for participation (and therefore not included in the survey) would likely represent a group of physicians who hold specific opinions on technology (hence their unwillingness to access e-mail at all or indirectly) that might also be related to ratings of the EMR. To overcome this potential bias, as well as one potentially introduced by including only a mailed survey (some physicians may have their regular mail screened as well), we decided that a mixed-mode design was an appropriate method for increasing coverage.

The instrument itself contained approximately 20 Likert-style items measuring general comfort using computers and various aspects of the EMR including level of use, adequacy of training, comfort level, helpfulness, satisfaction, and preference over the paper medical record. Physicians were also asked to rate their levels of agreement or disagreement with various statements about the EMR such as its effect on clinical practice and the manner in which it was developed. Finally, physicians were asked to self-report their division affiliation, years of experience at Mayo Clinic, age, and open-ended suggestions for improvements to the EMR. To minimize pure mode effects (viz., to reduce the chances of responses differing by survey mode because of the visual appearance of the two methods; see Dillman 2000), the web survey design and layout were made to be as comparable to the paper version as possible.

The survey instrument was pretested during a meeting of DOM division and department leaders (n =55) before data collection. Each attendee was asked to complete the survey and participate in a group discussion regarding the face and content validity of the instrument, potential item ambiguities, and omitted domains. The survey was modified to reflect the changes suggested by these DOM leadership members.

Survey Administration

The survey was conducted between February and March 2005. In both conditions (described below), a multiple-contact data collection protocol was implemented consisting of the following steps: (1) 1 week before the initial survey, an e-mail from the DOM chair was sent to all DOM physicians alerting them to the upcoming survey and encouraging response; (2) an initial survey was sent with a cover letter/e-mail message explaining the study; (3) 1 week after the initial mailing, a reminder was sent that either thanked physicians who had completed the survey or exhorted nonrespondents to do so; and (4) 2 weeks after the reminder, a second survey, again with a cover letter/e-mail message, was sent to nonrespondents to the previous surveys.

For the experiment, physicians were randomly assigned to receive either a mailed survey or a web survey as the initial survey. The former was sent to physicians' offices via inter-office mail and the latter was distributed in the form of an e-mail message to the physician with an embedded link to the web survey. The reminder was sent in the medium corresponding to the initial mailing (electronically or via inter-office mail). For the web condition, the reminder did not contain an embedded link to the web survey for comparability to its mailed counterpart. For those not responding to the initial mailing, the medium in which the follow-up survey was sent was switched. Specifically, those nonrespondents in the web first condition received their follow-up survey via mail; those in the mail first condition received the follow-up via e-mail. One week before the close of data collection, another e-mail message was sent from the DOM chair to all DOM members encouraging them to respond to the survey. Figure 1 provides a flowchart of the study sample, data collection process, and random assignment.

Figure 1
Study Sample, Data Collection Process, and Random Assignment

Statistical Analysis

Key research questions to be addressed in the analysis included: Can mixing mail and web methods increase response rates among physicians? Which of the two combinations requires the least amount of time to respond? How does mixing modes impact the participation of different types of physicians at different points in the data collection process? Which combination yields the least nonresponse bias (both in terms of sociodemographic composition and distributions of key outcome variables)? All analyses were performed using SAS v. 8.2 software (SAS Institute Inc., Cary, NC). A p-value of ≤.05 was regarded as statistically significant.

Response rates were calculated as the number of completions divided by the number of eligible cases, consistent with the American Association for Public Opinion Research (AAPOR) guidelines (RR1; http://www.aapor.org). Each completed survey, regardless of format (web versus mail), was time stamped as it arrived at the Mayo Clinic Survey Research Center. Response time was calculated as the number of days between the distribution of the survey (see Figure 1 for dates) and the time of survey receipt. The Wilcoxon rank sum test was used to compare response times between responders in the two conditions.
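
To make these calculations concrete, the following is a minimal sketch in Python (the study's analyses were run in SAS v. 8.2); the file name, column names, and data layout are illustrative assumptions rather than the study's actual data structures.

```python
# Minimal sketch of the RR1 and response-time calculations (Python; illustrative only).
import pandas as pd
from scipy.stats import ranksums

surveys = pd.read_csv("physician_surveys.csv",
                      parse_dates=["sent_date", "received_date"])  # assumed layout

# AAPOR RR1: completed surveys divided by all eligible cases.
eligible = surveys[surveys["eligible"]]
rr1 = eligible["completed"].mean()
print(f"Overall response rate (RR1): {rr1:.1%}")

# Response time: days between survey distribution and receipt at the survey center.
responders = eligible[eligible["completed"]].copy()
responders["days_to_respond"] = (
    responders["received_date"] - responders["sent_date"]
).dt.days

# Wilcoxon rank-sum test comparing response times between the two conditions.
web_mail = responders.loc[responders["condition"] == "web/mail", "days_to_respond"]
mail_web = responders.loc[responders["condition"] == "mail/web", "days_to_respond"]
stat, p_value = ranksums(web_mail, mail_web)
print(f"Medians: web/mail={web_mail.median()}, mail/web={mail_web.median()}, p={p_value:.3f}")
```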

Response rates and response times are just two indicators of survey quality and performance. The true indicator of the inferential value of a survey is the absence of nonresponse bias. However, recent literature reviews have highlighted the fact that few methodological investigations involving the conduct of physician surveys perform systematic comparisons between respondents and nonrespondents (Asch, Jedrziewski, and Christakis 1997; Cummings, Savitz, and Konrad 2001). To assess unit nonresponse bias in the present study, we compared the distributions of selected characteristics of respondents (age, gender, tenure, and specialty status) with the distributions of similar characteristics in the full sample frame based on administrative data. The comparisons were made for each experimental condition (web/mail and mail/web) as well as for each phase of data collection (Before Reminder, Before Mode Switch, and End of Data Collection).

For the nonresponse bias analysis of specialty status, physicians whose primary appointments were in allergic diseases, cardiovascular diseases, endocrinology, gastroenterology and hepatology, hematology, infectious diseases, nephrology and hypertension, pulmonary and critical care medicine, and rheumatology were coded as specialists and those with appointments in general internal medicine, preventive and occupational medicine, and primary care internal medicine were coded as primary care physicians (nonspecialists).

Finally, because similarity or differences between respondents and nonrespondents on a limited set of sociodemographic characteristics does not necessarily translate into comparable similarities or differences in responses (Groves et al. 2004; Montori et al. 2005), we assessed whether there was inconsistency in the responses to key outcome variables between the two experimental conditions and across data collection phases within condition. Unlike the nonresponse bias analysis above, where both respondents and nonrespondents are included in the overall distribution, for this analysis the overall distribution used for comparison includes only survey respondents. We focused on time spent on the EMR each day, the level of comfort using computers in general and with the EMR in particular, and overall satisfaction with the EMR. For purposes of analysis, the original 5-point Likert-style variables were dichotomized in the following manner: time spent on the EMR each day was recoded into “Less than or equal to half a day” and “More than half a day”; comfort was recoded into “Very/Somewhat Comfortable” and “Other”; and satisfaction was recoded into “Very/Somewhat Satisfied” and “Other.”
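
As an illustration of the recoding just described, here is a small Python sketch; the column names and the response-category labels are assumptions for illustration, since the questionnaire's exact wording is not reproduced here.

```python
# Illustrative dichotomization of the 5-point Likert items (labels and columns assumed).
import pandas as pd

def dichotomize(responses: pd.DataFrame) -> pd.DataFrame:
    out = responses.copy()
    # Time on the EMR each day: "Less than or equal to half a day" vs. "More than half a day".
    out["emr_half_day_or_less"] = ~out["emr_time"].isin(
        ["More than half a day", "Nearly all day"]  # assumed upper categories
    )
    # Comfort with computers / with the EMR: "Very/Somewhat Comfortable" vs. "Other".
    for col in ["comfort_computers", "comfort_emr"]:
        out[f"{col}_dichotomized"] = out[col].isin(
            ["Very comfortable", "Somewhat comfortable"]
        )
    # Overall satisfaction with the EMR: "Very/Somewhat Satisfied" vs. "Other".
    out["satisfied_emr"] = out["satisfaction_emr"].isin(
        ["Very satisfied", "Somewhat satisfied"]
    )
    return out
```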

For all of the nonresponse bias analyses, estimates from each experimental group and data collection phase were compared with the overall estimate using one-sample z-tests for proportions. Differences between conditions at the end of data collection were tested by performing χ2 analyses.
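
The tests named above can be expressed compactly; the sketch below (Python rather than the SAS used by the authors) uses placeholder counts, not figures from the study.

```python
# One-sample z-test for a proportion and a chi-square test of independence.
# All counts below are placeholders, not data from the study.
import numpy as np
from scipy.stats import norm, chi2_contingency

def one_sample_z_test(successes, n, p0):
    """Compare an observed proportion (e.g., % specialists among respondents
    in one condition/phase) against the overall proportion p0."""
    p_hat = successes / n
    se = np.sqrt(p0 * (1 - p0) / n)
    z = (p_hat - p0) / se
    return z, 2 * norm.sf(abs(z))  # two-sided p-value

z, p = one_sample_z_test(successes=59, n=92, p0=0.742)  # hypothetical counts

# Chi-square test comparing the two conditions at the end of data collection
# (e.g., specialists vs. primary care by condition; placeholder 2x2 table).
table = np.array([[110, 44],
                  [120, 52]])
chi2, p_chi2, dof, expected = chi2_contingency(table)
```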

RESULTS

A total of 11 cases were removed from the sample due to ineligibility or duplicate listing (see Figure 1). Of the 489 remaining eligible physicians, 326 returned completed surveys, for an overall response rate of 66.7 percent. A total of 154 cases in the web/mail condition and 172 cases in the mail/web condition were available for analysis overall.

Response Rates

The response rates for each condition by data collection phase are reported in Table 1. The response rates before the reminder was sent were roughly equal (38.0 and 36.9 percent for the web/mail and mail/web groups, respectively, p = .81). Response rates increased considerably with the use of a reminder: an increase of 9 percentage points was observed in the web/mail condition and an increase of 20 percentage points in the mail/web condition. Before the mode switch, the web/mail response rate was 46.9 percent whereas the mail/web response rate was 57.4 percent (p = .02). At the end of data collection, the difference in overall response rates between the two approaches bordered on significance, with a rate of 62.9 percent observed in the web/mail condition and 70.5 percent in the mail/web condition (p = .07). Interestingly, eight of the 32 completions that came in after the mode switch in the mail/web condition were mailed in. In other words, in response to the sending of the web survey to those who had received, but did not respond to, an initial mailed survey and reminder, about a quarter of the respondents sent in the mailed survey rather than complete the online version. As a comparison, all 39 surveys that came in after the mode switch in the web/mail condition arrived via the mailed survey.
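
As a rough check on the end-of-collection comparison, the following Python sketch runs a two-sample test of proportions; the per-condition denominators (245 web/mail, 244 mail/web) are inferred from the reported counts and rates rather than taken from the study's tables, so treat them as approximations.

```python
# Two-sample test of proportions for the end-of-collection response rates.
# Denominators inferred from reported counts/rates; approximate, not the study's tabulation.
from statsmodels.stats.proportion import proportions_ztest

completions = [172, 154]  # mail/web, web/mail
eligible = [244, 245]     # approximate eligible cases per condition
z, p = proportions_ztest(completions, eligible)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")  # roughly p = .07, as reported
```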

Table 1
Response Rates by Data Collection Condition (Web/Mail versus Mail/Web) and Data Collection Phase

Response Time

Figure 2 casts the information presented in Table 1 somewhat more finely, showing exactly when the aforementioned response differentials emerged. Response to the web survey after the initial distribution was quicker than response to its mailed counterpart, likely because of the time it took the latter to make it through the interoffice mail system. The figure also shows that after the reminder was sent, responses to the web survey in the web/mail condition flattened after an initial increase just after the sending of the reminder. In the mail/web condition, conversely, responses continued to accrue up to, and somewhat beyond, the mode switch. Toward the end of the field period, the rates of response appeared to converge. In the aggregate, the median response time was 2 days faster in the web/mail condition than in the mail/web condition (median = 5 and 7 days, respectively, p = .06).

Figure 2
Cumulative Response by Date and Experimental Condition

Nonresponse Bias

To test for the presence of nonresponse bias in each experimental condition, we compared the age, gender, tenure (years working at Mayo Clinic), and specialty status (specialist versus primary care) of respondents with those characteristics of both respondents and nonrespondents in the full sample frame at each point in the data collection process. As can be seen in the uppermost panel in Table 2, no bias was present for age, gender, or tenure, but there was evidence suggestive of bias for specialty status. At the end of data collection, specialists were under-represented in the sample by 4 or 5 percentage points, depending on experimental condition. Looking at the earlier phases of data collection, it appears that it was the introduction of the mailed survey to those in the web/mail condition that brought the percentage of responding specialists from around 64 percent closer to the full-sample figure of 74.2 percent. Stated differently, whereas specialists were significantly (p ≤ .05) under-represented relative to the full sample by about 10 percentage points in the two data collection phases prior to the mode switch from web to mail, they were under-represented by only 5 percentage points at the end of data collection, a halving of the bias, in essence. For physicians assigned to the mail/web condition, no significant under-representation of specialists was observed at any point in the data collection process.

Table 2
Distributions of Selected Variables Overall and by Data Collection Condition and Data Collection Phase

The lower panel in Table 2 shows that physicians who spent more than half a day on the EMR each day were significantly over-represented in the two web-based data collection phases before the mode switch in the web/mail condition relative to what would be expected given the overall response distribution (about 17 percent and 11 percent, respectively). The responses to the comfort and satisfaction items were similar across data collection phases and conditions.

In separate analyses, no significant differences for any of the variables depicted in Table 2 were observed at the end of data collection between the two conditions.

DISCUSSION

The current study found that at the end of data collection, physicians randomly assigned to receive an initial mailed survey with a web survey follow-up to nonrespondents generated response rates higher than those assigned to receive an initial web survey followed by a mailed survey to nonrespondents (70.5 percent versus 62.9 percent), although the difference only approached statistical significance (p = .07). What did attain statistical significance was the response rate differential observed between the two conditions at the point of the follow-up reminder. At that point, the cumulative response rate jumped from 38 percent to 47 percent in the web/mail condition and from 37 percent to 57 percent in the mail/web condition, a rate increase of 9 percentage points and 20 percentage points, respectively. Although the observed jumps in response rate after mailing of the reminder comport with previous research showing substantial increases in response rates to physician surveys as a result of follow-up (Moore and Tarnai 2002; Braithwaite et al. 2003; McMahon et al. 2003), we would posit that some of the differential effect of the reminder was due, in part, to the absence of an embedded link to the web survey in the e-mailed reminder.

Consistent with previous research (McMahon et al. 2003; Akl et al. 2005), the web/mail condition, in which the web survey was sent first, provided the most rapid response. Again, we believe the response times to the web survey may have been even shorter had we embedded a link to the web survey in the e-mail reminder. In the future, we intend to do so. Nonetheless, the results also indicate that had we relied solely on a web data collection method and not followed up with the mailed survey, we would potentially have under-represented specialists in the responding sample. We also have suggestive evidence indicating that physicians retain mailed surveys longer than they do e-mailed surveys. Specifically, we found that approximately a quarter of the physicians assigned to the mail/web condition actually sent in a mailed survey after the web survey was sent. Physicians may delete their e-mail messages right away but hold on to their mail a bit longer.

Among the variables investigated, there was an indication that either condition could yield nontrivial levels of nonresponse bias, but the bias associated with specialty status and time spent each day on the EMR was most acute in the web/mail condition. In this condition, if the mail survey had not been introduced, the sample would have under-represented specialists and those who spent half a day or less each day on the EMR. This result offers support for the use of a mixed-mode approach if a web survey is to be introduced first. However, we also see few differences in the distributions of key outcome variables regardless of data collection condition or phase. In this instance, multiple contacts and the switching of modes did not translate into large inconsistencies in the responses to key questions or in the study's overall substantive conclusions.

The lack of substantial and systematic bias in the results may be due to the use of physicians. As some authors have speculated, nonresponse bias may be of less concern in physician surveys than in surveys of the general population because the population of physicians tends to be rather homogeneous regarding knowledge, training, attitudes, and behavior (Kellerman and Herold 2001). Nonetheless, the nonresponse bias analyses undertaken in the current study were informative and represent the type of evaluation shown by some (e.g., Asch, Jedrziewski, and Christakis 1997; Cummings, Savitz, and Konrad 2001) to be lacking in survey methodological investigations of this nature. Future researchers are encouraged to follow suit.

In considering these findings, it is important to note certain special features of the current investigation that may limit their generalizability. First, both of the study's mixed-mode approaches generated response rates substantially higher than the reported averages of 52 percent (Cummings, Savitz, and Konrad 2001) and 54 percent (Asch, Jedrziewski, and Christakis 1997). It may be that Mayo Clinic physicians have a greater propensity to respond to surveys than is typical of physicians at other institutions. Physicians in the DOM are engaged in clinical care, research, and education and represent a blend of practice and academic personalities. Moreover, the topic of the survey (the EMR) represents an area of great interest to physicians. The reader is also encouraged to recall that the chair of the DOM sent out two separate messages, at the beginning and end of data collection, encouraging response. Second, we were fortunate to have very good contact information for all physicians in the sample: no surveys were returned indicating address errors, electronic or otherwise. Moreover, all eligible physicians could be reached by e-mail and by interoffice mail. Samples that rely on compromised information of either type would be more constrained in their ability to fully employ the mixed methods described herein. Finally, the design of the questionnaire utilized in the current study was relatively straightforward and lent itself to reproduction in web form. More complicated surveys may not be as mutable and, as such, may introduce substantial measurement error if the two forms (mail and web) are not made to be comparable in terms of visual appearance and flow (Dillman 2000). In light of these features, generalizations of these study findings to other physician samples and data collection contexts should be made with caution.

In conclusion, the results of this study offer partial support for the use of a mixed-mode approach to surveying physicians. The overall response rate and sample representation were slightly better using a methodology that uses an initial mailing of a self-administered form followed by a web survey to nonrespondents. If the length of the data collection period is limited and rapid response is important, a web survey followed by a mailed questionnaire might be preferred but a specialist bias may be introduced. Although this study provides novel information from a randomized test of different data collection methodologies, clearly, more work in this area is needed, particularly among more heterogeneous samples of physicians.

The importance of surveying physicians will not diminish in the foreseeable future, even though there is evidence that doing so is proving increasingly difficult. We must continue to respond to the calls of Cull et al. (2005), Kellerman and Herold (2001), McMahon et al. (2003), and VanGeest and Johnson (2001) by continuing this line of inquiry and attempting to find the optimal approach to surveying physicians. Otherwise, the physician perspective may not be adequately represented in debates and issues germane to the practice of medicine or to the realm of health policy.

Acknowledgments

The project described was supported by internal Mayo Foundation funds. The contents of this paper are solely the responsibility of the authors and do not necessarily represent the official views of the Mayo Clinic. The authors express their gratitude to Dr. Nicholas LaRusso, Dr. Douglas Wood, and Ms. Barbara Spurrier for their support of this project and Ms. Gail Ludens for her assistance with the questionnaire pretesting component of this study. We especially appreciate the dedicated work of the Mayo Clinic Survey Research Center staff and the contributions of all the members of the Mayo Clinic Department of Medicine who participated in the study.

REFERENCES

  • Akl EA, Maroun N, Klocke RA, Montori V, Schunemann HJ. Electronic Mail Was Not Better Than Postal Mail for Surveying Residents and Faculty. Journal of Clinical Epidemiology. 2005;58:425–9.
  • Asch DA, Jedrziewski MK, Christakis NA. Response Rates to Mail Surveys Published in Medical Journals. Journal of Clinical Epidemiology. 1997;50:1129–36.
  • Braithwaite D, Emery J, de Lusignan S, Sutton S. Using the Internet to Conduct Surveys of Health Professionals: A Valid Alternative? Family Practice. 2003;20:545–51.
  • Cull WL, O'Connor KG, Sharp S, Tang SS. Response Rates and Response Bias for 50 Surveys of Pediatricians. Health Services Research. 2005;40:213–26.
  • Cummings SM, Savitz LA, Konrad TR. Reported Response Rates to Mailed Physician Questionnaires. Health Services Research. 2001;35:1347–55.
  • de Leeuw ED. To Mix or Not to Mix Data Collection Modes in Surveys. Journal of Official Statistics. 2005;21:233–55.
  • Dillman DA. Mail and Internet Surveys: The Tailored Design Method. New York: John Wiley & Sons, Inc.; 2000.
  • Groves RM, Fowler FJ, Couper MP, Lepkowski JM, Singer E, Tourangeau R. Survey Methodology. Wiley Series in Survey Methodology. New York: John Wiley & Sons, Inc.; 2004.
  • Kellerman SE, Herold J. Physician Response to Surveys: A Review of the Literature. American Journal of Preventive Medicine. 2001;20:61–7.
  • Losch ME, Thompson N, Lutz G. The Effect of Mode on Response Rates and Data Quality in a Survey of Physicians. Paper presented at the 59th Annual Conference of the American Association for Public Opinion Research, Phoenix, AZ; 2004.
  • McMahon SR, Iwamoto M, Massoudi MS, Yusuf HR, Stevenson JM, David F, Chu SY, Pickering LK. Comparison of E-mail, Fax, and Postal Surveys of Physicians. Pediatrics. 2003;111:299–303.
  • Montori VM, Leung TW, Walter SD, Guyatt GH. Procedures That Assess Inconsistency in Meta-Analyses Can Assess the Likelihood of Response Bias in Multiwave Surveys. Journal of Clinical Epidemiology. 2005;58:856–8.
  • Moore DL, Tarnai J. Evaluating Nonresponse Error in Mail Surveys. In: Groves RM, Dillman DA, Eltinge JL, Little RJA, editors. Survey Nonresponse. New York: John Wiley & Sons, Inc.; 2002. pp. 197–211.
  • Schleyer TKL, Forrest JL. Methods for the Design and Administration of Web-Based Surveys. Journal of the American Medical Informatics Association. 2000;7:416–25.
  • Shosteck H, Fairweather WR. Physician Response Rates to Mail and Personal Interview Surveys. Public Opinion Quarterly. 1979;43:206–17.
  • VanGeest JB, Johnson T. Methodologies for Improving Response Rates in Mail Surveys of Physicians: A Review. Paper presented at the 129th Annual Meeting of the American Public Health Association, Atlanta, GA; 2001.
