J Pers Assess. 2002 Apr;78(2):219-74.

An examination of interrater reliability for scoring the Rorschach Comprehensive System in eight data sets.

Author information

1. Department of Psychology, University of Alaska, Anchorage 99508, USA. afgjm@uaa.alaska.edu

Abstract

In this article, we describe interrater reliability for the Comprehensive System (CS; Exner, 1993) in 8 relatively large samples, including (a) students, (b) experienced researchers, (c) clinicians, (d) clinicians and then researchers, (e) a composite clinical sample (i.e., a to d), and 3 samples in which randomly generated erroneous scores were substituted for (f) 10%, (g) 20%, or (h) 30% of the original responses. Across samples, 133 to 143 statistically stable CS scores had excellent reliability, with median intraclass correlations of .85, .96, .97, .95, .93, .95, .89, and .82, respectively. We also demonstrate that the reliability findings from this study closely match the results derived from a synthesis of prior research, that CS summary scores are more reliable than scores assigned to individual responses, that small samples are more likely to generate unstable and lower reliability estimates, and that Meyer's (1997a) procedures for estimating response segment reliability were accurate. The CS can be scored reliably, but because scoring depends on coder skill, clinicians must conscientiously monitor their accuracy.
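For readers unfamiliar with the statistic reported above, the following is a minimal sketch of a two-way random-effects intraclass correlation, ICC(2,1), in plain Python. The function name and the toy data are illustrative only; this is not the authors' code, and the original study's computations are not reproduced here.

```python
def icc2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    scores: list of rows, one row per rated response,
            each row holding the scores assigned by k raters.
    """
    n = len(scores)          # number of rated responses
    k = len(scores[0])       # number of raters
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]

    # Partition total sum of squares into rows (targets), columns (raters), error.
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_err = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)              # mean square for targets
    msc = ss_cols / (k - 1)              # mean square for raters
    mse = ss_err / ((n - 1) * (k - 1))   # residual mean square

    return (msr - mse) / (msr + (k - 1) * mse + (k / n) * (msc - mse))


# Toy example: two raters in perfect agreement on three responses -> ICC = 1.0
print(icc2_1([[1, 1], [2, 2], [3, 3]]))
```

With a constant offset between raters (e.g. `[[1, 2], [3, 4], [5, 6]]`), the absolute-agreement form penalizes the rater difference and returns a value below 1; a consistency-type ICC would not.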

PMID: 12067192
DOI: 10.1207/S15327752JPA7802_03
[Indexed for MEDLINE]