Photography: Image Stacking
Written by Sandy Weiss & Arthur Borchers   

ONE OF THE MANY CONUNDRUMS to be solved before directing the technological growth of a developing world will always be properly defining the word impossible. It doesn’t sound too difficult at first, but on second consideration there seem to be two proper definitions: what is truly impossible, and what is impossible given the technological advancement of the day.

For example, which definition would a scientist or engineer use when discussing manned flight at the end of the 19th Century? Until 1903, when the Wright Brothers flew their plane at Kitty Hawk, it would seem either definition might fit. Then, by 1927, Lindbergh flew non-stop across the Atlantic. By the 1940s, the world was familiar with P-51s, Zeros, and Messerschmitts. Apollo 8, on Christmas Eve 1968, became the first manned mission to orbit the moon. What definition would they have used for that journey in 1903, just 65 years before? Look at the changes made in only one generation of man—one out of millions.

Even today, the definition of impossible is continually shifting. One of the technologies where the definition has changed repeatedly and deeply over the last few years is photography. Digital imaging has stood the word impossible, as it relates to photography, on its ear!

One term whose definition has been etched in stone since the study of optics and photography began is depth of field. It has always been something like this: “the optical phenomenon known as depth of field (DOF) is the distance about the plane of focus (POF) where objects appear acceptably sharp in an image” (Wikipedia).
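For subjects well inside the hyperfocal distance, a standard thin-lens approximation (ordinary optics, not anything specific to this article) makes the trade-offs explicit:

DOF ≈ 2Ncu² / f²

where N is the f-number, c is the acceptable circle of confusion, u is the subject distance, and f is the focal length. Stopping down the aperture (raising N) widens the zone of acceptable sharpness, while moving in close (shrinking u) collapses it rapidly, which is exactly the limitation the techniques described below were invented to beat.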

Background

When a Frenchman named Niépce captured a more-or-less permanent image of the view outside his window in 1827, he was elated with his accomplishment but realized immediately that what he saw and what he got were two very different things. Many aspects of the scene were missing: color, depth, movement, and the majestic size of the scene itself were all left behind. But it was an image after all, and that was the start of a technological marvel still evolving today.

Word of his success made its way through the photo community of the day, and from there the process improved and expanded slowly until 1839, when the French government purchased the patents it had granted Niépce and the one newly granted to the next man, Louis Jacques Mandé Daguerre, for his more practical version, which captured images on copper plates coated with salts of silver. His camera had one lens and one receptor, not at all like the visual system of the human eye.

The French made a leap of faith and presented both patented inventions to the world as gifts, fully expecting the technology to expand exponentially from there. Which it did. Photography was one of the hottest things of its time and a commodity many craftsmen and artists wished to incorporate into their lives. “To nineteenth-century enthusiasts of this new art, the making of a photograph fixed forever a moment of time and resembled an act of magic” (Weber)—and of course, everyone loves magic.

But there were still those same things missing: the representation of color, depth, movement, and size. The first one from the list to be overcome was size. Initially, the size of a visual subject, unless it was a macro shot at 1:1 magnification, could not be properly represented. To cope with this problem, people invented ways to create enlargements of their images by projecting small captured images onto large screens or walls for group viewing.

A better understanding of visual perspective, especially as it relates to how people perceive depth with binocular vision, came next. Up until then, many people believed we had two eyes simply to see more of an object; they theorized that the possession of two eyes allowed all binocular creatures to see around and into things. There was still very little understanding of why drawings and paintings appeared flat, despite great efforts by artists to accurately reproduce depth in their subjects.

Photography of the time provided only the same visual effects as any other two-dimensional representation—essentially the image of a moment, but not the moment itself. Ways to make captured images more closely duplicate reality were studied, and it did not take long to find methods of turning those images into representations in three dimensions instead of just two. The first 3D photographs were captured in 1839 by a camera with two lenses and two image receptors, along with a brilliant way to view the captured images. It was the same year Daguerre was recognized for beginning modern photography.

The American Civil War 1861–1865

Sadly, there’s nothing like war to spur the development of technology. What was missing in standard 2D images and then provided by 3D representations of the same scenes was the perception of depth. Two-dimensional photography was simple and expedient, but 3D provided additional visual information. It was and still is a conundrum. Amazingly, nearly all of the images captured on Civil War battlefields, from hot-air balloons and behind the lines, were shot in 3D. The reason this fact is so little known today is that most books and Internet sites present only half of each stereo pair to modern viewers. What happened to the depth?

3D went through a boom period, followed closely by the bust. Why make things in 3D when most viewers are satisfied with 2D? Putting both halves of every image into books and onto screens costs more, and viewers need special 3D glasses in order to see the depth.

The next frustration to be addressed with photography was motion. This problem was not as easy to solve as depth. It took Thomas Edison and his cohorts until 1888 to invent the first practical systems of motion-picture photography. Then, very quickly, by 1891, Edison was experimenting with the idea of 3D motion film: stereoscopic film capture and projection.

After the motion issue was solved, there was still no color in photographic images; that took until well into the 20th Century to overcome. There was no overwhelming need for color, because black-and-white images, made familiar by newspapers, were easily accepted as representations of a more colorful reality. Over time, people become conditioned and are usually able to recreate reality in their minds from images only somewhat representative of that reality. In those days, human eyes were accustomed to graphic representations of the world that conveyed something less than the ones we take for granted today.

One factor allowing modern people to accept photographs of all sizes—films on screen in the theater or on the phone—as reasonable facsimiles of reality is that we see more or less what we expect to see. We look at graphic representations of everything from toothpaste tubes to subway trains, automobiles to airplanes, insects to elephants from the moment we open our eyes in the morning until the instant we go to sleep.

By definition, an impossible object is a type of optical illusion. It consists of a 2D figure which is instantly and subconsciously interpreted by the human visual system as representing a projection of a 3D object. (Wikipedia, “Impossible object”) According to Moenssens & Inbau, “People are so used to seeing 2D representations of 3D subjects that most minds automatically reconstruct the missing 3D elements in the image being examined.” Has this type of conditioning slowed the development of new ways to capture images? In a way, yes… but not really.

Depth

Focus stacking is a software-enabled digital process that creates extended depth of field by combining a set of specially exposed, similar images, each focused at a different distance.

The 1960s solution to limited depth of field in macro photography (the version of the problem that always seems the easiest or quickest to solve) was called scanning instead of stacking. A very complicated equipment setup had to be constructed, and a lot of other gear gathered, before image-taking could start. The examples below (Figure 1) give an idea of what could be done.


Figure 1. (Left) The subject, a toy camera, is shown with a quarter and a screw for scale. (Center) An un-cropped image of the toy camera utilizing macro-scanning photography. (Right) A print of the resulting image next to the actual toy camera.

The little blue camera is 0.75 in. wide and 1 in. tall. Making a photograph from the extreme top angle of the center example (it is not cropped) is optically impossible in a single conventional exposure. The only way is to expose the frame in a darkened room while slowly moving the toy camera through a thin beam of light (the target distance is multiplied by the optimum depth of field at the optimum lens aperture). Alternately, you can expose the image in a darkened room while moving the narrow beam of light, slowly and smoothly, while keeping both cameras totally still. Editor’s Note: If anyone wants to know more about this process, please do not hesitate to contact Sandy Weiss using the contact information included. He is one of the best living experts on macro-scanning photography.

Software

Focus-stacking software takes a set of images, each with the point of focus set at a different distance, selects the sharpest areas of each image, and intelligently combines them into one image. Commercially available software packages include Helicon Focus by Helicon Soft Limited, Photoshop by Adobe Systems Inc., and Zerene Stacker by Zerene Systems, LLC. All three operate on the same basic premise: take a series of photos of the same subject, extract the parts of each image that are crisply in focus, and combine all the parts into one complete image.
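The vendors do not publish their exact algorithms, but the basic premise can be sketched in a few lines of code. The following Python sketch, using the OpenCV and NumPy libraries, scores every pixel of each (assumed pre-aligned) source image by local sharpness with a Laplacian filter and keeps each pixel from whichever frame is sharpest there; the file names are hypothetical, and the commercial packages add alignment, scale compensation, and far smarter blending:

import cv2
import numpy as np

def focus_stack(paths):
    """Naive focus stack: per pixel, keep the sharpest source frame."""
    images = [cv2.imread(p) for p in paths]  # assumes pre-aligned frames
    grays = [cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) for im in images]

    # Local sharpness score: absolute Laplacian response, lightly
    # blurred so the per-pixel choice is less noisy.
    sharpness = [
        cv2.GaussianBlur(np.abs(cv2.Laplacian(g, cv2.CV_64F)), (9, 9), 0)
        for g in grays
    ]

    # Index of the sharpest frame at each pixel position.
    best = np.argmax(np.stack(sharpness), axis=0)

    stack = np.stack(images)  # shape: (n, height, width, 3)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]  # the composite image

result = focus_stack([f"frame_{i:02d}.jpg" for i in range(21)])
cv2.imwrite("stacked.jpg", result)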

In the following series, 21 images were captured using a focus rail mounted between the camera and tripod. The lens was set to manual focus and not touched between images.


Figure 2. The in-focus areas of the face and chain are highlighted between the two lines.

The focus rail was moved 2 mm between each shot, changing the focal point of each image. The camera was moved just over 4 cm to capture all of the images. A simple one-axis focus rail can be obtained for under $20 from online retailers. The final image (Figure 3) was produced using Helicon Focus software.
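As a planning rule of thumb, the rail step should be no larger than the depth of field of a single frame, or the composite will show bands of blur; the frame count is then just the total travel divided by the step, plus the starting frame. A trivial Python helper (the numbers mirror the series above):

import math

def frames_needed(total_travel_mm, step_mm):
    # One frame at the start, then one more per rail step.
    return math.ceil(total_travel_mm / step_mm) + 1

print(frames_needed(40, 2))  # -> 21, matching the series described above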


Figure 3. Helicon Focus result.

The Photoshop image below used the same source images but produced a slightly larger result, with some questionable detail at the bottom that was automatically cropped out in the Helicon version.


Figure 4. Photoshop result. This crop effect (shown inside the yellow box) only matters if there is significant evidentiary detail in the deleted area.

Case Application

This first sample scene has ten yellow evidence tents along the sidewalk at 5-ft. intervals. The photo shot at f/22 (Figure 5) captures almost the entire 50 feet in focus.


Figure 5. A 50-foot section of crime scene was captured with a Canon 6D Mark II with a shutter speed of 1/50 and aperture of f/22.

The second sample scene is a set of images shot with an f/1.4 lens wide open at f/1.4. The images were manually focused on each tent in turn and then stacked.


Figure 6. The same section of crime scene was captured with a Canon 6D Mark II with a shutter speed of 1/1600 and aperture of f/1.4.
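The exposure arithmetic behind these two captures is worth a glance: opening up from f/22 to f/1.4 admits roughly eight stops more light, since (22/1.4)² ≈ 247 ≈ 2⁸, while shortening the shutter from 1/50 s to 1/1600 s removes only five stops (1600/50 = 32 = 2⁵); ISO or changing ambient light presumably absorbed the remaining difference, which the captions do not record.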


Figure 7. A series of images captured with different focus points were combined using Helicon Focus software to produce this image.

Getting all the crime scene tents in focus over a big distance is exactly the point. What if that image is turned into a poster-size court display? You only need one exhibit instead of ten. If there is any question as to whether the image accurately depicts the scene, the original images can be displayed, and the software process can be reproduced live in a court hearing.

Both Helicon (Mac, Windows, iOS, and Android) and Zerene (Mac, Windows, and Linux) can connect to your camera with a cable and take complete mechanical control of your exposure and lens-focus steps from near to far. Additionally, DSLR Controller (about $8) is an Android tablet program that will automate taking your stack photos over either a cable or a Wi-Fi connection. Your camera manufacturer may also offer similar smartphone or tablet software.
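If none of those packages fits your platform, the automation can be improvised. Below is a rough Python sketch that shells out to the open-source gphoto2 command-line tool over a USB tether; note that the manualfocusdrive setting shown is a Canon-specific live-view control and an assumption here, since names and step values differ by manufacturer (run gphoto2 --list-config to see what your camera exposes):

import subprocess
import time

FRAMES = 21  # focus positions to capture, near to far

for i in range(FRAMES):
    # Capture a frame and download it over the tether.
    subprocess.run(
        ["gphoto2", "--capture-image-and-download",
         "--filename", f"stack_{i:02d}.jpg"],
        check=True,
    )
    # Nudge the lens one small step toward infinity. 'manualfocusdrive'
    # is a Canon live-view config; other manufacturers expose different
    # controls, and some none at all.
    subprocess.run(["gphoto2", "--set-config", "manualfocusdrive=2"],
                   check=True)
    time.sleep(1)  # let the lens settle before the next exposure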

One final stack function may be of great help for crash investigators. Serious crashes rarely happen during the day, and returning to the scene in daylight may reveal additional details; however, you are unlikely to have the same ability to close the road. This Photoshop process compares several images and keeps only the median image portions that are common to the majority of the stack. Since cars move from frame to frame, the only consistent part of the image is the roadway. The software will remove the cars and leave a clear road with whatever skids, scrapes, gouges, and evidence paint was left behind.
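Photoshop reaches this result through its stack modes, but the underlying operation is simply a per-pixel median across the series. A bare-bones equivalent in Python with OpenCV and NumPy (the file names are hypothetical) shows why transient vehicles drop out:

import cv2
import numpy as np

# Load a tripod-mounted series with identical framing.
frames = [cv2.imread(f"scene_{i:02d}.jpg") for i in range(25)]

# Per-pixel median across the stack: a value present in only a few
# frames (a passing car) is discarded in favor of the value seen in
# the majority of frames (the empty roadway).
median = np.median(np.stack(frames), axis=0).astype(np.uint8)

cv2.imwrite("clean_roadway.jpg", median)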

The photographic process is to capture a series of photos, perhaps 20 to 30 images about 10 seconds apart, over 3 to 5 minutes with a tripod-mounted camera. An inexpensive intervalometer or built-in DSLR software is all that is needed. Before and after images are shown below (Figures 8 & 9).


Figure 8. When a stretch of roadway is open to traffic, it can be difficult or impossible to capture a moment when it is free of vehicles.


Figure 9. By shooting a series of exposures over a period of time, one can eliminate nearly all traces of vehicles traveling on the roadway. Visual remnants of a couple of vehicles can be seen in the upper right. If there is important information in that area, it can be easily repaired using Photoshop’s spot-healing tool.

Forensic photography can be much more effective in telling the story of an incident if you know that highly effective tools are available at reasonable cost, and your photographs are taken with the intent of telling that story in a clear, accurate way. There is nothing preventing you from taking advantage of the technology as long as you truthfully portray the conditions of your scene.


About the Authors

Sandy (Sanford) Weiss is the author of Forensic Photography: The Importance of Accuracy, published by Pearson Prentice Hall. He retired from active fieldwork in 2012 but continues to keep up with developments. He has published several articles in Evidence Technology Magazine over the years.

Arthur Borchers is currently an adjunct instructor for the Suburban Law Enforcement Academy at the College of DuPage and a forensic consultant with Larsen Forensics & Associates, both in Glen Ellyn, Illinois. He retired from the Oak Park Police Department in 2013 and has advanced training and experience in photography, photogrammetry, firearms, shooting incidents, crime scenes, and traffic crash reconstruction.


Resources

Moenssens, A.A. & F.E. Inbau. Scientific Evidence in Criminal Cases. Cleveland, OH: The Foundation Press (1986).

Weber, E. Pioneers of Photography. New York, NY: Smithmark Publishers (1995).

Wikipedia contributors. “Depth of field,” Wikipedia, The Free Encyclopedia. Retrieved July 14, 2018 from: https://en.wikipedia.org/w/index.php?title=Depth_of_field&oldid=849789241

Wikipedia contributors. “Impossible object,” Wikipedia, The Free Encyclopedia. Retrieved July 15, 2018 from: https://en.wikipedia.org/w/index.php?title=Impossible_object&oldid=844470487


This article appeared in the Fall 2018 issue of Evidence Technology Magazine.