Investigating Image Authenticity
Written by David Spreadborough   

I ALWAYS USED TO SHY AWAY from image authentication. It seemed a bit of a “dark art,” requiring some serious knowledge of various Matlab scripts to get successful results. As the only police officer within the forensic image unit of a UK police force, and the most experienced in digital images, I saw the requirement to authenticate an image pass my desk on the odd occasion. Several times, though, we were not able to conduct the examination ourselves, which left me frustrated. When I left the service, after the closure of the unit, image authentication was high on my agenda of subjects to learn.

All the puzzle pieces fell into place when I became the international trainer at Amped Software, developers of solutions for image and video forensics. Very quickly, I learned that many of the mysteries surrounding the identification of image manipulation had been solved and that it was no longer necessary to keep the requirement in the “Too-Hard-To-Do Box”!

One of the exciting parts of forensic multimedia investigation is that every case is different. That does make for some interesting challenges when it comes to developing software, as it needs to be able to do many things, for many people, in many ways.

When developing authentication software, we look at the questions that may be asked of the analyst when investigating a digital image. The application has been designed to allow a structured workflow and to locate the puzzle pieces required to help answer those questions.

Let us look then at two examples where a question has been asked, and it is up to us to find the answer.

In a case involving a residential house search, several digital items have been found and seized. Of particular interest was a camera that included a memory card. Separate to this was another memory card containing images.

During an initial interview, the homeowner admitted that the camera and first memory card were his and that he took the pictures. He denied all knowledge of the other memory card and could not explain how it came to be in its location.

The task has now arrived at my desk with several requests to assist the investigation.

1) Compare the images that were on the camera memory card against those on the separate card. Do the formats match?

2) Analyze the camera and separate card. Have the images been taken on that device?

3) Report on any other issues identified during analysis.

After conducting the usual preparations of the exhibits, and creating our forensically sound copies, it’s time to start the analysis.

A quick comparison is required between one of the images on the suspect SD card and those found within the camera. Luckily, our image authentication software has a tool for this.

Within a few seconds the chart appears, enabling the quick review and comparison of one of the suspect images against the directory of known images.

It has been relatively easy to show a matching format comparison, but this only shows that the images were taken on the same make and model of device.
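A format comparison of this kind can be sketched in a few lines. The real software compares many structural properties of the files; the metadata fields below are illustrative stand-ins of my own, not the actual tests the tool performs:

```python
# Hypothetical metadata records. A real tool parses these from the EXIF
# data and JPEG structure; the field names here are illustrative only.
REFERENCE_FIELDS = ("Make", "Model", "Software", "JPEGQuality")

def format_signature(meta):
    # Reduce a metadata record to the fields that characterize the format.
    return tuple(meta.get(f) for f in REFERENCE_FIELDS)

def matches_reference_set(suspect_meta, known_metas):
    # True if the suspect image's format signature appears among the
    # signatures of the known camera-original images.
    signatures = {format_signature(m) for m in known_metas}
    return format_signature(suspect_meta) in signatures
```

A signature match, like the chart in the software, only tells us the files share a format, not that they came from the same physical device.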

To answer the next point, regarding matching the suspect images to the seized camera, we need to turn to the uniqueness of the imaging sensor found inside the camera.

Using the suspect camera, we can create our own set of sample images. These will be used to construct a PRNU (Photo Response Non-Uniformity) Camera Reference Pattern.

You will probably have guessed by now that this is another simple process. Investigations are hard enough as they are, without software making things more difficult. It is one of the promises to our users. We will do everything we can to make the processes and software as simple as possible.

After we point the reference pattern filter at the set of newly created sample images, it compares the noise in those images against the noise in the evidence images. If they match, the result is a positive compatibility.
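The principle behind the comparison can be sketched as follows. Real PRNU pipelines use wavelet-based denoising and more careful statistics; the simple mean-filter denoiser and synthetic sizes below are my own simplifications, not Amped's implementation:

```python
import numpy as np

def noise_residual(img, k=3):
    # Crude denoiser: subtract a k-by-k local mean. Real PRNU pipelines
    # use wavelet denoising, but any high-pass residual shows the idea.
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    local_mean = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            local_mean += padded[dy:dy + h, dx:dx + w]
    local_mean /= k * k
    return img.astype(float) - local_mean

def reference_pattern(images):
    # Average the residuals of many shots from the seized camera: scene
    # content averages away, the fixed sensor pattern remains.
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation(pattern, img):
    # Normalized cross-correlation between the reference pattern and a
    # suspect image's residual; higher means more likely the same sensor.
    a = pattern - pattern.mean()
    b = noise_residual(img)
    b -= b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

An image from the same sensor correlates measurably with the reference pattern, while an image from a different unit of the same make and model sits near zero.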

At the click of a button you can analyze all the images in the evidence folder.

Even if the EXIF data has been modified, and even if certain image-processing techniques have been applied, analyzing the sensor noise can still match a camera to an image.

Even though we have a positive outcome, we still need to test this theory against other images. There are thousands of imaging devices available, and many of these would be very difficult to source.

Using the power of the Internet, you are only a few minutes away from sourcing completely random images from the same camera make and model to validate your findings.

The last point in this small investigation is quite an interesting challenge. I often see the “other issues” question in requests but it relies on the competency of the analyst to spot and investigate issues that could cause concern. Many concerns can be accounted for, but the causes for them may not be known to the analyst at the time of the investigation.

During the analysis of the card, one image may have been marked in red, indicating a warning. It may be required to then look further at this image and conduct more in-depth analysis. In this next case, we will look at just that.

This investigation example involves a directory full of miscellaneous images that have been sent in to the police department. Within the image authentication software, I have several different options when dealing with a scenario like this.

I could examine each submission individually, but this may be a little time-consuming. To sort out what needs looking at, or what carries warnings, I could examine each directory full of images in batch mode. Batch analysis allows a more automated approach, and it is even possible to scan through sub-directories. This allows us to preview the individual file analysis of many images inside many folders.

Finally, thanks to the Command Line Interface (CLI) built into the software, it is possible to sort images automatically. This is great when images are recovered through an acquisition process, placed into a directory and then require sorting to identify camera originals or those that may have been edited.

Very quickly, I get two new directories, one for all the OK Images and another for the ones with a warning. I also get a report in a spreadsheet format that details the causes of concern.
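This kind of automated triage can be sketched generically. The article does not show the software's actual CLI, so everything below is a hypothetical stand-in: the `looks_like_camera_jpeg` check is a deliberately minimal placeholder for the tool's full battery of format tests:

```python
import csv
import shutil
from pathlib import Path

def looks_like_camera_jpeg(path):
    # Stand-in check: a JPEG file must begin with the SOI marker FF D8.
    # A real analysis runs the full battery of format and metadata tests.
    return Path(path).read_bytes()[:2] == b"\xff\xd8"

def sort_images(src_dir, out_dir, check=looks_like_camera_jpeg):
    # Copy each file into an "ok" or "warning" directory and write a
    # CSV report naming the files and their status.
    out = Path(out_dir)
    ok_dir, warn_dir = out / "ok", out / "warning"
    ok_dir.mkdir(parents=True, exist_ok=True)
    warn_dir.mkdir(parents=True, exist_ok=True)
    rows = []
    for f in sorted(Path(src_dir).iterdir()):
        if not f.is_file():
            continue
        passed = check(f)
        shutil.copy2(f, (ok_dir if passed else warn_dir) / f.name)
        rows.append({"file": f.name, "status": "ok" if passed else "warning"})
    with (out / "report.csv").open("w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["file", "status"])
        writer.writeheader()
        writer.writerows(rows)
    return rows
```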

In our case though, we simply have a single directory. As such, I need to go directly into the authentication software and use the tool for “Batch File Format Analysis”.

As it only involves the extraction of metadata components, it is very fast.

Immediately I can visualize what images I am able to put some reliance on. I could then, perhaps, leave the images marked in red (which indicates a warning), until later.

During the investigation, the requirement to analyze one of these images becomes paramount. There are some red warning indicators but, because of what the witness has stated, we must authenticate the image before we can rely upon their version of events.

There could be a lot of weight given to this image and many hours of work directed towards the intelligence or leads that it portrays.

There are various options available after conducting the Batch Format Analysis, and I need to load the image as evidence.

Again, we could go through each filter individually; however, I like to multi-task, so running the batch analysis allows me to process all the filters while I get on with something else. Within about five minutes, all filters have been applied, and I can quickly scan through without waiting for individual filters to process. The larger the image, the longer it will take.

It is important at this point to be aware of context and this is different to having information that could result in some cognitive bias. Context awareness allows an analyst to concentrate on the areas that are relevant to the investigation.

In my image, the roof and structure are irrelevant, so I may not look closely at those parts of the image. The people who were in the train station form part of this investigation so that is where I will start my observations.

A visual inspection of the image did not immediately reveal anything, and this is not unusual.

Strange shadows, disjointed bricks, and irregular scale are all signs that something is not quite right. The challenge, though, is with our minds. Once we are given a little context, which we need in order to direct our investigation, we fight against the image and start to believe what we see. “Seeing is believing” is a very well-known phenomenon.

For this reason, we must place our trust in the filters. Once they pick out an area of concern, the brain switches back and you will see the problem area immediately.

Let’s move straight to the local analysis for this case. We have the EXIF data report with some Photoshop activity and we have signs of recompression, but all of these could be explainable. Although the image is not a camera original and in fact has been resaved in Photoshop, the image may still show a scene that is an accurate representation of that moment in time.

Only in-depth local analysis of the pixel and block structure will allow you to identify certain manipulation artefacts.

It doesn’t take long for the problem to appear!

Did you spot the person where the edges are brighter than the rest? Error-level analysis has helped you out! Let’s look again, this time a little more closely, using the Artefacts of the Discrete Cosine Transform (DCT) in JPEG images.

Something’s not quite right here!

Using the analysis of the JPEG compression artefacts, the area of concern is plain to see.
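The intuition behind both tests can be sketched with a toy model. Real error-level analysis resaves the image through an actual JPEG codec and inspects the difference; the version below models only JPEG's 8x8 DCT quantization step, and the block size, quality step `q`, and image dimensions are my own simplifying assumptions:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis, as used by JPEG's 8x8 block transform.
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2.0 / n)

def recompress(img, q):
    # Quantize each 8x8 block's DCT coefficients with a uniform step q,
    # then invert; image dimensions are assumed to be multiples of 8.
    D = dct_matrix()
    out = np.empty(img.shape)
    for y in range(0, img.shape[0], 8):
        for x in range(0, img.shape[1], 8):
            block = img[y:y + 8, x:x + 8].astype(float)
            coeffs = np.round(D @ block @ D.T / q) * q
            out[y:y + 8, x:x + 8] = D.T @ coeffs @ D
    return out

def error_level(img, q=16):
    # Per-pixel difference between the image and its recompressed copy.
    # Regions already quantized at step q barely change; pasted content
    # that never went through that quantization changes a lot.
    return np.abs(img - recompress(img, q))
```

A region spliced in from another source stands out in the error-level map because its blocks have never been through the original quantization, which is exactly the brightness difference the filter revealed around the person.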

This person was never there. He was never at the train station at this date and time.

If the image had never been authenticated, a lot of time, effort, and investigative resources could have been put into this evidence.

Therefore, to the question of, “Can I rely on this image as being a true and accurate representation of the scene?” my answer would have to be, “No.”

The artefacts highlighted by the software have indicated a specific area where the structure of the blocks is different from the overall scene. The weight of this decision will increase with the number of filters that show changes. Many filters flag up in a red color if there is a warning associated with that test.

However, the variability of manipulation means that sometimes these will be false alarms. On other occasions, they will indicate an issue that is not relevant to the question being asked. If the area of change is also small, this may fail to trigger an alarm.

The best tool in the toolbox is not what's in the software: it's you! It's your eyes and your experience through training and casework. Software is the manager, the processor, the sorter, and the filter. It's up to you to view the results and then draw a conclusion using your experience and knowledge.

There are many other tools and filters in image authentication software that I have not even touched on—ranging from the simple, but invaluable searching of image format types or visual similarities, to the compression history and encoding analysis.

The purpose of this article, though, was not to show you everything, as that would take days. Hopefully it has made you think. Should you be checking your images, running some tests to ensure they are what they purport to be? If the answer is Yes, then you now know where to start.

About the Author

David Spreadborough served as a UK Police Officer for 24 years, was the first analyst in Europe to be LEVA Certified, and is still one of only four certified outside North America. He was a Senior Police Officer within the UK Visual Forensic Unit and oversaw all major crime video investigations. He is still a practicing forensic video analyst and has frequently been called as an expert witness to assist legal teams and law enforcement with ongoing criminal investigations. Since working with Amped, Spreadborough has played a key role in the development of Amped software technical training, as well as spreading his passion for jurisprudence reform through the latest technological innovations.

This article appeared in the Winter 2017 issue of Evidence Technology Magazine.
