Med Educ Online. 2011 Jan 14;16. doi: 10.3402/meo.v16i0.5646.

A report on the piloting of a novel computer-based medical case simulation for teaching and formative assessment of diagnostic laboratory testing.

Author information

  • Department of Pathology, University of Iowa Carver College of Medicine, Iowa City, IA 52242, USA. clarence-kreiter@uiowa.edu

Abstract

OBJECTIVES:

Insufficient attention has been given to how information from computer-based clinical case simulations is presented, collected, and scored. Research is needed on how best to design such simulations to acquire valid performance assessment data that can act as useful feedback for educational applications. This report describes a study of a new simulation format with design features aimed at improving both its formative assessment feedback and educational function.

METHODS:

Case simulation software (LabCAPS) was developed to target a highly focused and well-defined measurement goal with a response format that allows objective scoring. Data from an eight-case computer-based performance assessment, administered in a pilot study to 13 second-year medical students, were analyzed using classical test theory and generalizability analysis. In addition, a similar analysis was conducted on an administration in a less controlled setting, but with a much larger sample (n = 143), within a clinical course that utilized two random case subsets from a library of 18 cases.
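The abstract does not show the reliability computation itself; as an illustration, classical test theory coefficient alpha over a students-by-cases score matrix can be sketched as below. The matrix values and dimensions are hypothetical, not the study's data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Coefficient alpha for a (students x cases) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of cases
    case_vars = scores.var(axis=0, ddof=1)       # variance of each case score
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of students' totals
    return k / (k - 1) * (1 - case_vars.sum() / total_var)

# Hypothetical 4-student x 3-case score matrix (illustrative only)
demo = np.array([[3, 4, 3],
                 [5, 5, 4],
                 [2, 3, 2],
                 [4, 4, 5]])
alpha = cronbach_alpha(demo)  # 0.9 for this toy matrix
```

A generalizability analysis additionally partitions the variance into student, case, and interaction components, but for a crossed students-by-cases design with a single facet the resulting G coefficient coincides with coefficient alpha.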

RESULTS:

Classical test theory case-level item analysis of the pilot assessment yielded an average case discrimination of 0.37, and all eight cases were positively discriminating (range = 0.11-0.56). Classical test theory coefficient alpha and the decision study showed the eight-case performance assessment to have an observed reliability of α = G = 0.70. The decision study further demonstrated that a G = 0.80 could be attained with approximately 3 h and 15 min of testing. The less controlled educational application within a large medical school class produced a somewhat lower reliability for eight cases (G = 0.53). Students gave high ratings to the logic of the simulation interface, to its educational value, and to the fidelity of the tasks.
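The decision-study projection can be reproduced with the Spearman-Brown formula; a minimal sketch, assuming the reported eight-case reliability of G = 0.70 and a target of G = 0.80 (the function names are mine):

```python
def spearman_brown(g, ratio):
    """Projected reliability when test length is scaled by `ratio`."""
    return ratio * g / (1 + (ratio - 1) * g)

def length_ratio(g, g_target):
    """Length multiplier needed to move reliability from g to g_target."""
    return (g_target * (1 - g)) / (g * (1 - g_target))

ratio = length_ratio(0.70, 0.80)   # ~1.71x the original test length
cases_needed = 8 * ratio           # ~13.7, i.e. about 14 cases
```

At roughly 14 min per case this is consistent with the abstract's estimate of about 3 h 15 min of testing, though the per-case time is an inference rather than a figure stated in the abstract.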

CONCLUSIONS:

LabCAPS software shows the potential to provide formative assessment of medical students' skill at diagnostic test ordering and to provide valid feedback to learners. The perceived fidelity of the performance tasks and the statistical reliability findings support the validity of using the automated scores for formative assessment and learning. LabCAPS cases appear well designed for use as a scored assignment, for stimulating discussions in small group educational settings, for self-assessment, and for independent learning. Extension of the more highly controlled pilot assessment study with a larger sample will be needed to confirm its reliability in other assessment applications.

KEYWORDS:

clinical skills assessment; computer-based simulation; formative assessment; laboratory medicine; performance assessment

[PubMed - indexed for MEDLINE]
Free PMC Article