BMC Med Educ. 2018 Jan 5;18(1):9. doi: 10.1186/s12909-017-1116-8.

Voluntary vs. compulsory student evaluation of clerkships: effect on validity and potential bias.

Author information

Lebanese American University School of Medicine, Byblos, Lebanon.
Lebanese American University Medical Center - Rizk Hospital, May Zahhar Street, Ashrafieh, P.O. Box: 11-3288, Beirut, Lebanon.
Lebanese American University School of Pharmacy, Byblos, Lebanon.
Lebanese American University School of Medicine, Byblos, Lebanon.
Department of Medical Education, College of Medicine, University of Illinois at Chicago, Chicago, IL, USA.



BACKGROUND: Students' evaluations of their learning experiences can provide a useful source of information about clerkship effectiveness in undergraduate medical education. However, low response rates in clerkship evaluation surveys remain an important limitation. This study examined the impact on validity evidence of increasing response rates through a compulsory approach.


METHODS: Data comprised 192 responses obtained voluntarily from 49 third-year students in 2014-2015 and 171 responses obtained compulsorily from 49 students in the first six months of the following year at one medical school in Lebanon. Evidence supporting internal-structure and response-process validity was compared between the two administration modalities. The authors also tested for potential bias introduced by the compulsory approach by examining students' responses to a sham item added to the last survey administration.


RESULTS: Response rates increased from 56% in the voluntary group to 100% in the compulsory group (P < 0.001). Students in both groups provided comparable clerkship ratings, except for one clerkship that received a higher rating in the voluntary group (P = 0.02). Respondents in the voluntary group had higher academic performance than the compulsory group, but this difference diminished when whole-class grades were compared. Reliability of ratings was adequately high and comparable between the two consecutive years. Testing for non-response bias in the voluntary group showed that female students responded more frequently in two clerkships. Testing for authority-induced bias revealed that students may complete the evaluation randomly, without attention to content.
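As a minimal sketch of how the reported response-rate comparison could be reproduced: the 2x2 counts below are assumptions, not figures from the paper. We take 192 responses out of an assumed 343 voluntary survey opportunities (consistent with the reported 56% rate) versus 171 of 171 compulsory responses, and Fisher's exact test is one plausible choice of test, not necessarily the authors' method.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums hypergeometric probabilities of all tables with the same margins
    whose probability does not exceed that of the observed table.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def p_table(x):
        # Hypergeometric probability of a table with cell (1,1) equal to x
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo = max(0, col1 - row2)   # smallest feasible value of cell (1,1)
    hi = min(row1, col1)       # largest feasible value of cell (1,1)
    # Include every table at least as extreme as the observed one
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Assumed counts: 192/343 voluntary responses (~56%) vs. 171/171 compulsory
p = fisher_exact_two_sided(192, 343 - 192, 171, 0)
print(f"Fisher exact P = {p:.3g}")  # well below the P < 0.001 reported
```

With counts this lopsided the exact P value is vanishingly small, which is consistent with the abstract's P < 0.001; any reasonable choice of counts matching the reported rates would give the same qualitative conclusion.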


CONCLUSIONS: While increasing response rates is often a policy requirement aimed at improving the credibility of ratings, using authority to enforce responses may not increase reliability and can raise concerns about the meaningfulness of the evaluation. Administrators are urged to consider not only response rates but also the representativeness and quality of responses when administering evaluation surveys.


KEYWORDS: Authority; Bias; Clerkship; Evaluation; Rating
