Teach Learn Med. 2005 Summer;17(3):202-9.

Reliability and validity of checklists and global ratings by standardized students, trained raters, and faculty raters in an objective structured teaching environment.

Author information

University of Massachusetts Medical School, Community Faculty Development Center, Worcester, Massachusetts 01655, USA. Mark.Quirk@umassmed.edu

Abstract

BACKGROUND:

Objective structured teaching exercises (OSTEs) are relatively new in medical education, and few studies have reported their reliability and validity.

PURPOSE:

To systematically examine the impact of OSTE design decisions, including number of cases, choice of raters, and type of scoring systems used.

METHODS:

We examined the impact of the number of cases and raters using generalizability theory. We also compared scores from standardized students (SS), faculty raters (FR), and trained graduate student raters (TR), and examined the relationship between behavior checklist ratings and global perception scores.
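To illustrate the kind of decision-study question the generalizability analysis above addresses (how the number of OSTE cases affects score dependability), here is a minimal sketch of the relative g coefficient for a simple one-facet design. The variance-component values are hypothetical placeholders, not figures from the study, and the function name is ours.

```python
def g_coefficient(var_teacher: float, var_error: float, n_cases: int) -> float:
    """Relative g coefficient for a one-facet (cases) D-study.

    var_teacher: universe-score variance component (true teacher differences)
    var_error:   residual error variance component per case
    n_cases:     number of OSTE cases averaged over
    """
    # Averaging over more cases shrinks the error term, raising g.
    return var_teacher / (var_teacher + var_error / n_cases)

# Hypothetical components: equal signal and noise per case.
print(g_coefficient(1.0, 1.0, 1))  # 0.5
print(g_coefficient(1.0, 1.0, 4))  # 0.8
```

The sketch shows the basic design trade-off the study probes: adding cases (or raters, in a two-facet design) buys generalizability at the cost of exam length.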

RESULTS:

Generalizability (g) coefficients for checklist scores were higher for SSs than TRs. The g estimates based on SSs' global scores were higher than g estimates for FRs. SSs' checklist scores were higher than TRs' checklist scores, and SSs' global evaluations were higher than FRs' and TRs' global scores. TRs' relative to SSs' global perceptions correlated more highly with checklist scores.

CONCLUSIONS:

SSs provide more generalizable checklist scores than TRs. Generalizability estimates for global scores from SSs and FRs were comparable. SSs are lenient raters compared to TRs and FRs.

PMID: 16042515
DOI: 10.1207/s15328015tlm1703_2
[PubMed - indexed for MEDLINE]
