Acad Med. 2010 Sep;85(9):1453-61. doi: 10.1097/ACM.0b013e3181eac3e6.

Constructing a validity argument for the mini-Clinical Evaluation Exercise: a review of the research.

Author information

1. American Board of Medical Specialties, Chicago, Illinois 60601, USA. rhawkins@abms.org

Abstract

PURPOSE:

The mini-Clinical Evaluation Exercise (mCEX) is increasingly being used to assess the clinical skills of medical trainees. Existing mCEX research has typically focused on isolated aspects of the instrument's reliability and validity. A more thorough validity analysis is necessary to inform use of the mCEX, particularly in light of increased interest in high-stakes applications of the methodology.

METHOD:

Kane's (2006) validity framework, in which a structured argument is developed to support the intended interpretation(s) of assessment results, was used to evaluate mCEX research published from 1995 to 2009. In this framework, evidence to support the argument is divided into four components (scoring, generalization, extrapolation, and interpretation/decision), each of which relates to different features of the assessment or resulting scores. The strengths and limitations of the reviewed research were identified in relation to these components, and the findings were synthesized to highlight overall strengths and weaknesses of existing mCEX research.

RESULTS:

The scoring component yielded the most concerns relating to the validity of mCEX score interpretations. More research is needed to determine whether scoring-related issues, such as leniency error and high interitem correlations, limit the utility of the mCEX for providing feedback to trainees. Evidence within the generalization and extrapolation components is generally supportive of the validity of mCEX score interpretations.

CONCLUSIONS:

Careful evaluation of the circumstances of mCEX assessment will help to improve the quality of the resulting information. Future research should address issues of rater selection, training, and monitoring, which can affect rating accuracy.

PMID: 20736673
DOI: 10.1097/ACM.0b013e3181eac3e6
[Indexed for MEDLINE]