The Final Frontier: Automating Latent-Print Examination
Written by Dale Garrison

EFFORTS TO AUTOMATE one of the most important and time-consuming aspects of latent-fingerprint identification may have reached a significant milestone this year.

Phase II of a study unveiled by the National Institute of Standards and Technology (NIST) showed promising results with a new technology called Automatic Feature Extraction and Matching (AFEM). In simple terms, the test examined the accuracy of searching latent fingerprints when using beta AFEM systems developed by private companies. The Department of Homeland Security’s Science and Technology Directorate and the FBI’s Criminal Justice Information Services Division funded the research.

While the work involved cutting-edge technology that is not quite ready for widespread use, the preliminary findings indicate that large-scale, rapid searches can be done without impractical demands on examiners. The technology offers tremendous benefits to latent-print examiners at a time when their workloads are increasing dramatically.

“There are so many places to search and so few examiners,” explained Peter Komarinski, a principal with Komarinski & Associates of New York and a participant in a NIST workshop on the study. “This can actually increase the information that goes into the computers. Local and state organizations would be able to search the FBI database without examiners spending hours and hours doing such manual work. This will open up many records that were previously considered to be too labor-intensive to search.”

Traditionally, an examiner seeking to match latent fingerprints carefully marks distinguishing features in each print. Those details help narrow the search, something that is especially important when using a large database such as the FBI’s Integrated Automated Fingerprint Identification System (IAFIS), which contains 55 million sets of fingerprint records. The problem is that examiners often need to process scores, or even hundreds, of prints—and marking just one can take up to 20 or 30 minutes.
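To make “marking distinguishing features” concrete: the features in question are mostly minutiae, points where a friction ridge ends or splits. The sketch below shows how such markup might be represented as data. It is a simplified illustration only; the field names and values are assumptions for this example, not the actual IAFIS or ELFT record formats.

```python
from dataclasses import dataclass
from enum import Enum

class MinutiaType(Enum):
    RIDGE_ENDING = "ridge ending"   # a ridge that stops
    BIFURCATION = "bifurcation"     # a ridge that splits in two

@dataclass
class Minutia:
    x: float      # position on the print image, in pixels (illustrative)
    y: float
    angle: float  # local ridge direction, in degrees
    kind: MinutiaType

# An examiner's markup of one latent print amounts to a short list of
# such features; an AFIS search compares that list against the feature
# lists stored for each known print in the database.
latent_markup = [
    Minutia(x=112.0, y=240.5, angle=37.0, kind=MinutiaType.RIDGE_ENDING),
    Minutia(x=190.2, y=201.8, angle=122.5, kind=MinutiaType.BIFURCATION),
]
```

Marking a few dozen of these points by hand, on a smudged partial print, is what consumes those 20 to 30 minutes per print; AFEM aims to extract them automatically.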

Named the Evaluation of Latent Fingerprint Technology (ELFT) project, the study examined Software Developer’s Kits (SDKs) from eight companies using blind “black box” tests for fairness. The SDKs were used on 835 latent-fingerprint samples while searching two separate databases, one with 50,000 fingerprints and the second with 100,000. The vendors’ software produced results that were at least 80 percent accurate. The software developed by one, NEC, scored as high as 97 percent. These are promising results for a technology that is still under development.

As with many advances in forensic science, the progress relies in part on improvements in computing power. “Auto latent matching (or AFEM) technology requires huge computing resources—several tens of times compared to the current technology—and such cost impact had been thought a severe obstacle in its implementation,” explained Masanori Hara, the project director for NEC Corporation. “However, the recent trend in declining computer costs helps customers to accept a little costly AFIS with integrated AFEM technology in the very near future.”

First Steps

Michael Indovina, the computer scientist who led the project at NIST’s center in Gaithersburg, Maryland, also stressed that the work is preliminary. “NEC in particular had very accurate results,” he agreed. “Those were real. But the challenge will be putting that into practice.”

For example, he noted that even when systems are available that work without examiner markup, examiners still use markup nearly 95 percent of the time. “We would still need to determine whether you need to send it to an examiner or not, or when you do,” he said. “These are first steps in seeing how and if the technology works.”

Nevertheless, the results were positive and more definitive than those of any previous evaluation. Although other studies have looked at the same concept, the NIST effort was the most comprehensive, utilizing a relatively open-ended procedure so that vendors could essentially present what they thought would work best. “There has been some testing by the producers and some by the users such as the FBI, but nothing like what was done at NIST,” Indovina explained. “It was fully open—anyone could participate. It was very much an open-testing scenario.”

The results are also leading to additional work, not only by NIST but also by other public and private interests. “It puts data out there that people did not have before,” he explained, comparing latent-print research to areas such as rolled prints. “This was really a first step into this area. A lot of work has been done in more conventional fingerprint areas, but little has been done in the area of latent prints.”

One limit involves the vast number of unknown latent prints. For example, crime-scene prints are obviously excellent candidates on which to test the new technology—except that, by definition, most lack a proven match, which makes measuring the effectiveness of a new system difficult or impossible. “To me, the best thing to test actually comes from a crime,” Indovina agreed. “But they have to be identified by some combination of a system and an examiner. There is a little bit of a bottleneck there.”

Komarinski also cited real-world limitations such as agency budgets. “There are a lot of questions,” he said. “Even when software is available, that does not mean everyone can adopt it. Will your computer support it? Do you have money to buy it, and do you have money for training? All of this needs to happen.”

But while today’s work may represent “baby steps,” it is contributing to progress both directly and indirectly. “What is coming out of these studies is a lot of data that we are constantly reviewing,” Indovina added. “But that means the initial reports are not the end-all be-all of the process. This is just the beginning.”

Broad Application

Demand for these capabilities includes everything from Homeland Security to local police. Such an automated system could quickly scan records in a database that might otherwise take an examiner days to work through. This could help identify a suspect more quickly, or reduce the number of false matches that an examiner must sift through when looking for a match in a large batch of candidates.

“AFEM would make the match response quicker because this process does not wait for a human examiner’s manual intervention,” NEC’s Hara noted. “Quick investigation through this rapid identification will increase arrest rates.”

Indovina noted other potential advantages of the new technology. An automated system could increase not only the speed but also the “depth” of a search in ways not readily available now. Biometric techniques such as “fusion,” where different sets of information are combined, are a good example of where increased search depth is a significant advantage. Vendors participating in the NIST study applied fusion in elementary ways. Ultimately, the technology could dramatically enhance some key areas of fingerprint matching.

“Say, for example, a person picks up an envelope with a finger and thumb,” Indovina said. “A skilled latent-fingerprint examiner will probably make an educated guess that those are fingerprints from the same person. The examiner may run the best one and then, if there is no hit, run the other. We looked at the possibility of running the first, then the second and combining results in order to combine your candidate list. That could really help bubble something up.” Ideally, such fusion would be an integral part of a final, working system.
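Score-level fusion of that kind is straightforward to sketch. The snippet below is a hypothetical illustration only, not the method any ELFT vendor actually used: it assumes each search returns a mapping of candidate subjects to matcher scores, sums the scores across searches, and re-ranks.

```python
from collections import defaultdict

def fuse_candidate_lists(*candidate_lists):
    """Combine candidate lists from several latent searches by summing
    each subject's match scores and re-ranking. A subject who scores
    moderately well on both the finger and the thumb can outrank one
    who scores well on only a single print -- the "bubbling up" effect
    Indovina describes."""
    fused = defaultdict(float)
    for candidates in candidate_lists:
        for subject_id, score in candidates.items():
            fused[subject_id] += score
    return sorted(fused.items(), key=lambda item: item[1], reverse=True)

# Hypothetical scores for the envelope example:
finger_search = {"subject_A": 0.61, "subject_B": 0.65, "subject_C": 0.12}
thumb_search = {"subject_A": 0.57, "subject_C": 0.66, "subject_D": 0.20}

print(fuse_candidate_lists(finger_search, thumb_search))
# subject_A tops the fused list even though it led neither search alone.
```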

The next step in NIST’s research is Phase III of the current program. Phase I was a preliminary examination of the overall technology, while the most recent results came from Phase II. Phase III began in August.

“That phase will look at some of the same ground (image-only searches, where no markup is used), but the bulk of the study will be to examine manually generated feature searches,” Indovina explained. “There is a proposed guideline coming to enhance the standards that will be of particular interest to the FBI and others.” Those standards involve the NIST Committee to Define an Extended Fingerprint Feature Set, which is examining how such image-only or “unmarked” prints could be used in revised standards.

The effort will also continue the general trend of the study’s second phase, seeking ways to ratchet up the accuracy and usability of automated technology. “Phase III will help that a lot with candidate-list reduction,” Indovina said. “Right now, the system comes back with a list of candidates and the length of that list is usually under the control of the operator. A skilled examiner is familiar with how to read that candidate list and filter out nonsense. But there is quite a bit of effort there. What we are asking is, ‘Are there reliable ways to cut down on the candidate list and just get down to the ideal of one candidate and reduce false alarms?’”
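One simple way to picture candidate-list reduction is threshold-and-margin filtering: drop weak candidates outright, and collapse the list to a single name only when the leader’s score clearly dominates. The sketch below is purely illustrative; the rule and the threshold values are assumptions for this example, not anything NIST or the vendors have specified.

```python
def reduce_candidates(ranked, min_score=0.75, min_margin=0.15):
    """Trim a candidate list of (subject_id, score) pairs, assumed to
    be sorted by descending score. Low-scoring candidates are dropped,
    and if the leader beats the runner-up by a wide margin the list
    collapses to the ideal single candidate. A production system would
    tune these thresholds against measured hit and false-alarm rates."""
    kept = [(sid, score) for sid, score in ranked if score >= min_score]
    if len(kept) >= 2 and kept[0][1] - kept[1][1] >= min_margin:
        return kept[:1]
    return kept

print(reduce_candidates([("A", 0.95), ("B", 0.78), ("C", 0.40)]))
# [('A', 0.95)] -- C falls below the score floor, and A's margin over B
# is decisive, so the examiner sees just one candidate.
```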

The attention is not lost on vendors, either. Hara noted that NEC is moving aggressively with research to further improve visual noise-cancellation technology that was among the performance-enhancing factors in that company’s leading NIST results.

“It is important to remove such noise to process low-quality latent prints with severe background noise,” he reported. “We have been researching diverse noise patterns one by one and developing solutions to remove each noise pattern. However, we believe that this tedious process and longer processing time would lead us to the practical solution.”

Global Scope

With improved accuracy and increased speed, automated systems offer capabilities that are almost like science fiction. Indovina said that ultimately a working AFEM could allow a latent print from a roadside bomb in the Middle East to be added to a watch list for terrorists trying to enter the United States.

“The main focus is still local law enforcement sending a print to be identified,” Indovina said. “But there are more and more cases where speed and volume are critical.”

Both Komarinski and Indovina doubt that human examiners will ever be removed completely from the process. “Computerization will probably never eliminate all of this,” said Indovina. “Some prints do not show under any visible light. Many latents are on porous surfaces that require chemical processes or other steps that a human must do. But one step now is a guy sitting down and marking up each print. This technology could help eliminate that. That is a big step.”

Komarinski used another skilled profession as an analogy. “We now expect examiners to do everything. They lift prints, mask backgrounds, make verifications, and testify. That is a lot of work. We don’t ask a neurosurgeon to schedule the patient or clean up in the operating room. We really focus their expertise at what they do best. All of these tests and this computer technology are looking at ways to really allow examiners to more fully focus on what they do best—and that is to identify a match.”

The work also offers a somewhat ironic return to some of the original issues in fingerprint science. Latent prints at a crime scene were among the first uses of fingerprint technology more than 100 years ago. Although Homeland Security and other recent applications have expanded the field tremendously, the current research in many ways refocuses on that original area, which remains unsolved.

“That’s where fingerprint science really began,” Indovina concluded. “By definition, latents are bad prints but, by the same token, they can be from high-value subjects.”

About the Author

Dale Garrison is a freelance writer and a regular contributor to the magazine. He is based in Liberty, Missouri.


ORIGINALLY PUBLISHED:
"The Final Frontier: Automating Latent-Print Examination," written by Dale Garrison
September-October 2009 (Volume 7, Number 5)
Evidence Technology Magazine

 