Latent print quality in blind proficiency testing: Using quality metrics to examine laboratory performance
Calls for blind proficiency testing in forensic science disciplines intensified following the 2009 National Academy of Sciences report and were echoed in the 2016 report by the President’s Council of Advisors on Science and Technology. Both practitioners and scholars have noted that “open” proficiency tests, in which analysts know they are being tested, allow for test-taking behavior that is not representative of behavior in routine casework. This study reports the outcomes of one laboratory’s blind quality control (BQC) program. Specifically, we describe results from approximately 2.5 years of blind cases in the latent print section (N = 376 latent prints submitted as part of 144 cases). We also used widely available quality metrics software (LQMetrics) to explore relationships between objective print quality and case outcomes. Results revealed that nearly all BQC prints (92.0%) were of sufficient quality for entry into AFIS. When prints had a source present in AFIS, 41.7% of print searches resulted in a candidate list containing the true source. Examiners committed no false positive errors, but other types of errors were more common. Average print quality fell near the midpoint of the scale (53.4 on a 0-to-100 scale), and prints were evenly distributed across the Good, Bad, and Ugly categories. Quality metrics were significantly associated with sufficiency determinations, examiner conclusions, and examiner accuracy. Implications for blind testing and for the use of quality metrics in routine casework as well as proficiency testing are discussed.
The following article is published as Gardner, Brett O., Maddisen Neuman, and Sharon Kelley. "Latent print quality in blind proficiency testing: Using quality metrics to examine laboratory performance." Forensic Science International 324 (2021): 110823. Posted with permission of CSAFE.