The Friction Ridge: Managing Latent-Print Errors
Written by Alice Maceo (July 2011)

I BECAME A MANAGER of a latent-print unit in 2006. For those forensic disciplines that rely on humans as the analytical instrument, management can be very daunting. I once related the experience to our chemistry supervisor this way: “Imagine that I tweaked the sensitivity of each of your GC-MSs (gas chromatography-mass spectrometers) to a different setting… then adjusted those sensitivities randomly throughout the day on each instrument… and then asked you to run a complex sample through two instruments and come up with the same answer.” The supervisor just shook her head.

In spite of the inherent difficulties of managing a latent-print unit, there are steps that can be taken to identify, address, and reduce technical error. The first step is building a culture of accuracy and thoroughness. When analysts know that the quality-assurance process is designed to ensure the most accurate results, and is not punitive, they can operate without fear of repercussion and without becoming paralyzed, unable to render conclusions.

The second step is setting up clear verification procedures. Based on conversations with many analysts around the country, most agencies verify identifications. Interestingly, those same analysts also indicated that the most frequent technical error is a “false negative” (also called an “erroneous exclusion”). Yet many agencies do not verify “negative”, “not identified”, “exclusion”, or “inconclusive” results. It is impossible to manage technical errors if not all of the conclusions are reviewed, and impossible to learn from mistakes that are never unearthed.

The most frequently cited reason for not reviewing all conclusions is a shortage of manpower. In my experience, reviewing all conclusions in all cases takes approximately 25% more time than verifying identifications alone. The benefit of this process is that the verifier can focus attention on the latent prints (not the entirety of the case), and the case analyst receives immediate feedback if a technical error is noted. Another approach is to review all conclusions on selected cases (e.g., cases randomly selected prior to assignment, or cases selected based on crime type). Yet another approach is to perform random case audits. The downside to random case audits is the delay between making the error and discovering it; the analyst will likely not recall the circumstances that were involved.
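The "selected cases" approach above can be sketched in a few lines. This is a minimal illustration, not a published procedure: the function name, the case-ID list, and the 20% review fraction are all assumptions chosen for the example.

```python
import random

def select_for_full_review(case_ids, fraction=0.2, seed=None):
    """Randomly flag a fraction of incoming cases, prior to assignment,
    so that ALL conclusions in those cases (not just identifications)
    receive verification. Hypothetical helper for illustration only."""
    rng = random.Random(seed)
    # Always flag at least one case so the review pool is never empty.
    k = max(1, round(len(case_ids) * fraction))
    return set(rng.sample(case_ids, k))
```

Selecting cases *before* assignment matters: the analyst never knows which cases will get full review, so every case must be worked as if it will be.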

The third step to managing error is to decide what to do when a technical error is discovered. Are there allowances for the number or frequency of technical errors? Are there different responses for different kinds of technical errors? The answers to these questions are largely agency driven.

As a manager, I have found that a formal corrective action has been beneficial in analyzing the factors that led to a false identification. These factors should not simply center on the analyst! Supervision and organization issues should also come to light during the investigation. Some factors may lend themselves well to preventive measures (such as the supervisor limiting the types of cases assigned to analysts under high levels of stress) and others may not be easily prevented (such as detectives repeatedly asking the analyst to hurry).

I do not recommend removing analysts from casework if a rare false identification is discovered; they have already punished themselves enough. However, I recommend that the analyst does not perform verifications for a period of time (at least 30 days). After the requisite time has passed, the analyst should successfully complete a proficiency test prior to performing verifications. Obviously, if an analyst repeatedly makes false identifications, then the response should be escalated because the analyst’s competency may be compromised.

False negatives are not as easy to manage because you need to track them and look for trends. Much can be learned from tracking the errors, including valuable feedback to the training program. Sometimes the reason for false negatives is relatively easy to address. For example, if a particular analyst is routinely failing to identify latent palm prints due to orientation problems, then dedicated practice orienting and searching palms will likely improve their performance.
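The tracking described above amounts to tallying errors by analyst and suspected cause and watching for repeats. A minimal sketch, assuming each logged error is a dict with hypothetical `analyst` and `cause` fields (the field names and the threshold of three are assumptions for illustration):

```python
from collections import Counter

def trend_report(error_log, threshold=3):
    """Tally false negatives by (analyst, suspected cause) and return
    the recurring patterns that may warrant a training intervention,
    e.g. dedicated practice orienting and searching palm prints."""
    counts = Counter((e["analyst"], e["cause"]) for e in error_log)
    return {key: n for key, n in counts.items() if n >= threshold}
```

A pattern such as three palm-orientation misses by one analyst would surface here, while a single isolated miss would not.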

Other problems, like backlog pressure, are harder to address. How do you insulate the analysts from feeling rushed because so many cases are waiting? I have found it helpful to keep the backlog out of sight and to throttle ten cases at a time to the analysts. The analysts can finish a batch of cases at a time (with occasional interruptions for cases that must be rushed, of course) and clear their desks.
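The throttling idea above is simple batching: the analyst sees only the current batch, never the full queue. A sketch under the assumptions that the backlog is an ordered list and the batch size is ten, per the text (the function name is hypothetical):

```python
def batches(backlog, batch_size=10):
    """Yield the backlog in fixed-size batches so analysts can clear
    their desks one batch at a time without seeing the whole queue."""
    for i in range(0, len(backlog), batch_size):
        yield backlog[i:i + batch_size]
```

Rush cases can still be handled out of band; the point of the batch boundary is psychological, giving analysts a finish line rather than an endless pile.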

The forensic examination of evidence is a high-stakes endeavor. Failure to connect a criminal to a crime may allow the criminal to continue to endanger society—while connecting the wrong person to a crime could take away an innocent person’s life or liberty. As such, the analysts in the forensic laboratory strive to be as accurate as humanly possible. I want to stress the word humanly. As humans, we are all prone to error. Forensic analysts will not be perfect. Mistakes will happen. Focusing attention only on the analyst is short-sighted at best. Analysts operate in a system, and that system can set them up for failure. Instead of pointing fingers and blaming the analyst, we should be asking these questions:

  • How did the system allow the error to occur?
  • What can we learn from the error?
  • How can we improve the system to minimize the number of errors?

About the Author

Alice Maceo is the Forensic Lab Manager of the Latent Print Detail of the Las Vegas (Nevada) Metropolitan Police Department. She is an IAI Certified Latent Print Examiner and a Distinguished Member of the IAI. Maceo continues to serve on SWGFAST and the NIST/NIJ Expert Working Group on Human Factors in Latent Print Analysis.

