Teach Learn Med. 2016 Oct-Dec;28(4):385-394. Epub 2016 Jun 10.

Direct Observation of Clinical Skills Feedback Scale: Development and Validity Evidence.

Author information

1. Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada.
2. Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada.
3. The Medical Council of Canada, Ottawa, Ontario, Canada.
4. The Centre for Medical Education, University of Dundee, Dundee, Scotland, UK.

Abstract

Construct: This article describes the development and validity evidence behind a new rating scale to assess feedback quality in the clinical workplace.

BACKGROUND:

Competency-based medical education has mandated a shift to learner-centeredness, authentic observation, and frequent formative assessments with a focus on the delivery of effective feedback. Because feedback has been shown to be of variable quality and effectiveness, an assessment of feedback quality in the workplace is important to ensure we are providing trainees with optimal learning opportunities. The purposes of this project were to develop a rating scale for the quality of verbal feedback in the workplace (the Direct Observation of Clinical Skills Feedback Scale [DOCS-FBS]) and to gather validity evidence for its use.

APPROACH:

Two expert panels (one local, one national) used a nominal group technique to identify features of high-quality feedback. Through multiple iterations and review, 9 features were developed into the DOCS-FBS. Four rater types (residents n = 21, medical students n = 8, faculty n = 12, and educators n = 12) used the DOCS-FBS to rate videotaped feedback encounters of variable quality. The psychometric properties of the scale were determined using a generalizability analysis. Participants also completed a survey, rated on a 5-point Likert scale, to assess the ease of use, clarity, knowledge acquisition, and acceptability of the scale.

RESULTS:

Mean video ratings ranged from 1.38 to 2.96 out of 3 and followed the intended pattern suggesting that the tool allowed raters to distinguish between examples of higher and lower quality feedback. There were no significant differences between rater type (range = 2.36-2.49), suggesting that all groups of raters used the tool in the same way. The generalizability coefficients for the scale ranged from 0.97 to 0.99. Item-total correlations were all above 0.80, suggesting some redundancy in items. Participants found the scale easy to use (M = 4.31/5) and clear (M = 4.23/5), and most would recommend its use (M = 4.15/5). Use of DOCS-FBS was acceptable to both trainees (M = 4.34/5) and supervisors (M = 4.22/5).
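The corrected item-total correlation reported above (each item correlated with the sum of the remaining items; values above 0.80 suggest redundancy) can be illustrated with a short sketch. This is not the authors' analysis; the 9-item, 3-point ratings below are hypothetical.

```python
# Illustrative sketch: corrected item-total correlations for a
# multi-item rating scale. Each item is correlated with the total of
# the OTHER items, so the item does not correlate with itself.

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def corrected_item_total(ratings):
    """Correlate each item (column) with the sum of the remaining items."""
    n_items = len(ratings[0])
    totals = [sum(row) for row in ratings]
    result = []
    for i in range(n_items):
        item = [row[i] for row in ratings]
        rest = [t - v for t, v in zip(totals, item)]  # exclude item i
        result.append(pearson(item, rest))
    return result

# Hypothetical ratings: rows = raters, columns = 9 scale items (1-3 scale)
ratings = [
    [3, 3, 2, 3, 3, 2, 3, 3, 3],
    [2, 2, 2, 2, 3, 2, 2, 2, 2],
    [1, 1, 1, 2, 1, 1, 1, 2, 1],
    [3, 2, 3, 3, 3, 3, 2, 3, 3],
    [1, 2, 1, 1, 1, 2, 1, 1, 1],
]

print([round(c, 2) for c in corrected_item_total(ratings)])
```

When most values sit above 0.80, as in the study, the items move together closely, which is what motivates the authors' note about possible item redundancy.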

CONCLUSIONS:

The DOCS-FBS can reliably differentiate between feedback encounters of higher and lower quality. The scale has been shown to have excellent internal consistency. We foresee the DOCS-FBS being used as a means to provide objective evidence that faculty development efforts aimed at improving feedback skills can yield results through formal assessment of feedback quality.

KEYWORDS:

Feedback; assessment; scale development

PMID: 27285377
DOI: 10.1080/10401334.2016.1186552
[Indexed for MEDLINE]
