GMS Z Med Ausbild. 2014 Nov 17;31(4):Doc41. doi: 10.3205/zma000933. eCollection 2014.

Effects of a rater training on rating accuracy in a physical examination skills assessment.

Author information

Universitätsklinikum Schleswig-Holstein, Campus Lübeck, Medizinische Klinik I, Lübeck, Deutschland.
Institut für Qualitätsentwicklung an Schulen Schleswig-Holstein, Kronshagen, Deutschland.
Universitätsklinikum Schleswig-Holstein, Campus Kiel, Medizinische Klinik III, Kiel, Deutschland.
Universität zu Lübeck, Institut für Medizinische Biometrie und Statistik, Lübeck, Deutschland.


Abstract available in English and German.


BACKGROUND:

The accuracy and reproducibility of medical skills assessment is generally low, and rater training has shown little or no effect. Our knowledge in this field, however, relies on studies involving video ratings of overall clinical performances. We hypothesised that a rater training focussing on the frame of reference could improve grading accuracy in the curricular assessment of a highly standardised physical head-to-toe examination.


METHODS:

Twenty-one raters assessed the performance of 242 third-year medical students. Eleven raters had been randomly assigned to undergo a brief frame-of-reference training a few days before the assessment. Of these encounters, 218 were successfully recorded on video and re-assessed independently by three additional observers. Accuracy was defined as the concordance between the rater's grade and the median of the observers' grades. After the assessment, both students and raters filled in a questionnaire about their views on the assessment.


RESULTS:

Rater training did not have a measurable influence on accuracy. However, trained raters rated significantly more stringently than untrained raters, and their overall stringency was closer to that of the observers. The questionnaire indicated a higher awareness of the halo effect in the trained raters' group. Although the students' self-assessments mirrored the raters' assessments in both groups, the students assessed by trained raters were more discontented with their grade.


CONCLUSIONS:

While training had some marginal effects, it failed to have an impact on individual accuracy. These results in real-life encounters are consistent with previous studies on rater training using video assessments of clinical performances. The high degree of standardisation in this study was not sufficient to harmonise the trained raters' grading. The data support the notion that the process of appraising medical performance is highly individual. A frame-of-reference training as applied here does not effectively adjust physicians' judgement of medical students in real-life assessments.


KEYWORDS:

physical examination skills; randomised controlled trial; rater training; rating accuracy; skills assessment

[Indexed for MEDLINE]