Sci Justice. 2014 Sep;54(5):375-389. doi: 10.1016/j.scijus.2013.08.007. Epub 2013 Nov 12.

Experimental results of fingerprint comparison validity and reliability: A review and critical analysis.

Author information

1. Human Factors Consultants, 313 Ridge View Drive, Swall Meadows, CA 93514, USA. Electronic address: Ralph@humanfactorsconsultants.com.
2. Human Factors Consultants, 313 Ridge View Drive, Swall Meadows, CA 93514, USA. Electronic address: lhaber@humanfactorsconsultants.com.

Abstract

Our purpose in this article is to determine whether the results of the published experiments on the accuracy and reliability of fingerprint comparison can be generalized to fingerprint laboratory casework, and/or to document the error rate of the Analysis-Comparison-Evaluation (ACE) method. We review the 13 existing published experiments on fingerprint comparison accuracy and reliability. These studies comprise the entire corpus of experimental research published on the accuracy of fingerprint comparisons since criminal courts first admitted forensic fingerprint evidence about 120 years ago. We start with the two studies by Ulery, Hicklin, Buscaglia and Roberts (2011, 2012), because they are recent, large, and designed specifically to provide estimates of the accuracy and reliability of fingerprint comparisons and to respond to the criticisms cited in the National Academy of Sciences Report (2009). Following the two Ulery et al. studies, we review and evaluate the other eleven experiments, considering problems unique to each. We then evaluate the 13 experiments for problems common to all or most of them, especially with respect to the generalizability of their results to laboratory casework. Overall, we conclude that the experimental designs deviated from casework procedures in critical ways that preclude generalization of the results to casework. The experiments asked examiner-subjects to carry out their comparisons using responses different from those employed in casework; they presented the comparisons in formats that differed from casework; they enlisted highly trained examiners as experimental subjects rather than subjects drawn randomly from among all fingerprint examiners; they did not use fingerprint test items known to be comparable in type, and especially in difficulty, to those encountered in casework; and they did not require examiners to use the ACE method, nor was that method defined, controlled, or tested in these experiments. Until there is significant progress in defining and measuring the difficulty of fingerprint test materials, and until the steps to be followed in the ACE method are defined and measurable, we conclude that new experiments patterned on these existing ones cannot inform the fingerprint profession or the courts about casework accuracy and errors.

KEYWORDS:

Accuracy; Analysis–Comparison–Evaluation (ACE) method; Error rates; Experimental results; Fingerprints; Reliability
