Acad Med. 2014 Dec;89(12):1687-93. doi: 10.1097/ACM.0000000000000433.

Assessing medical students' and residents' perceptions of the learning environment: exploring validity evidence for the interpretation of scores from existing tools.

Author information

1. Dr. Colbert-Getz is senior director of medical education development and assessment, University of Utah School of Medicine, Salt Lake City, Utah. Ms. Kim is a fourth-year medical student, University of Sydney Medical School, Sydney, Australia. Ms. Goode is a clinical librarian, William H. Welch Medical Library, Johns Hopkins University School of Medicine, Baltimore, Maryland. Dr. Shochet is assistant professor of medicine and director, Colleges Advisory Program, Johns Hopkins University School of Medicine, Baltimore, Maryland. Dr. Wright is professor of medicine and division chief, Division of General Internal Medicine, Johns Hopkins Bayview Medical Center, Johns Hopkins University School of Medicine, Baltimore, Maryland.

Abstract

PURPOSE:

Although most agree that supportive learning environments (LEs) are essential for effective medical education, an accurate assessment of LE quality has been challenging for educators and administrators. Two previous reviews assessed LE tools used in the health professions; however, both have shortcomings. The primary goal of this systematic review was to explore the validity evidence for the interpretation of scores from LE tools.

METHOD:

The authors searched ERIC, PsycINFO, and PubMed for peer-reviewed studies, published through 2012 in the United States and internationally, that provided quantitative data on medical students' and/or residents' perceptions of the LE. They also searched SCOPUS and the reference lists of included studies for subsequent publications that assessed the LE tools. From each study, the authors extracted descriptive information, sample characteristics, and validity evidence (content, response process, internal structure, relationship to other variables). They calculated a total validity evidence score for each tool.

RESULTS:

The authors identified 15 tools that assessed the LE in medical school and 13 that did so in residency. The majority of studies (17; 61%) provided some form of content validity evidence. Studies were less likely to provide evidence of internal structure, response process, and relationship to other variables.

CONCLUSIONS:

Given the limited validity evidence for scores from existing LE tools, new tools may be needed to assess medical students' and residents' perceptions of the LE. Any new tools would need robust validity evidence testing and sampling across multiple institutions with trainees at multiple levels to establish their utility.

PMID: 25054415
DOI: 10.1097/ACM.0000000000000433
[Indexed for MEDLINE]
