Depending on how long you have been a photographer, you may or may not have heard the term dynamic range. But unless you've been living, and photographing, in a cave, you have certainly heard the term HDR used as a technique. HDR is an acronym for High Dynamic Range. The dynamic range part of both terms means the same thing, but HDR has come to describe a look, rather than the light in a scene.
So, let's work on definitions first, and then we can differentiate between the two. Any scene, whether viewed through a camera or with the human eye, has a dynamic range of light: the span from the darkest tone in the scene to the brightest. That range from dark to light is the dynamic range.
The problem for photographers is the limited dynamic range a camera can capture. To make things worse, the human eye can see a much broader range of tones than any camera. The exact gap varies between people and between camera models, but it is significant. You look at a scene with a bright sky above a dark green forest and you can see all the details. A camera, however, will expose for the forest and blow out the sky, or expose for the sky and render the forest almost black. A scene with a dynamic range too broad for a camera to capture is a High Dynamic Range, or HDR, scene.
There are several ways to manage an HDR scene. The problem predates photo processing software. The way photographers handled it in the olden days (before about 10 years ago) was to use a graduated neutral density filter. This was a piece of glass mounted by some means to the front of the lens, with a graduated amount of density: at one end the glass was very dark, at the other end it was clear, and the density transitioned gradually between the two extremes. The neutral part just means it didn't add any color cast to the scene. Think of a pair of sunglasses that are darker at the top than at the bottom.
For the scene discussed above, the photographer would mount the filter so that the darkest part was at the top and the clear part at the bottom. Depending on the amount of dynamic range, a photographer would need to carry a variety of densities or be able to stack filters to add more density as needed. Many professional landscape photographers still use this method today.
With modern cameras and software, there is another method. In its simplest form, the photographer takes two exposures, one for the dark part of the scene and one for the bright part. Then, back on the computer, they use techniques built into software to merge the two. Depending on the scene, the photographer's patience and knowledge, and the desire to make every detail 'perfectly' exposed, the process can become much more complicated. Nowadays, photographers may take 5, 10, 20, or more exposures and merge all of them in software.
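For the technically curious, the merging step can be sketched in a few lines of code. This is a minimal, hypothetical illustration (not any particular program's actual algorithm): each pixel from each exposure is weighted by how close it sits to mid-gray, so well-exposed detail from the dark frame and the bright frame each "win" where they are best exposed. Pixel values here are assumed to be floats from 0.0 (black) to 1.0 (white), and the function names are my own.

```python
import math

def well_exposedness(v, sigma=0.2):
    # Gaussian weight centered on mid-gray (0.5): pixels near the
    # middle of the tonal range get weight ~1, blown-out or crushed
    # pixels get weight near 0.
    return math.exp(-((v - 0.5) ** 2) / (2 * sigma ** 2))

def merge_exposures(dark_frame, bright_frame):
    # Weighted average of the two exposures, pixel by pixel.
    merged = []
    for d, b in zip(dark_frame, bright_frame):
        wd, wb = well_exposedness(d), well_exposedness(b)
        merged.append((wd * d + wb * b) / (wd + wb))
    return merged
```

Real HDR software does much more (it aligns the frames, works across many exposures, and tone-maps the result), but the core idea is the same: favor whichever exposure captured each region best.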
And this gave rise to the HDR look. The technique, when taken to the extreme, ensures that every element in the scene has roughly the same EV, or exposure value. No scene in the actual world looks like that. There are always highlights and shadows. So, while everything may look perfectly exposed, you end up with an unnatural-looking scene. Some photographers make it worse by pushing the color saturation, causing it to look even more bizarre.
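In case EV is a new term: exposure value is a single number that combines aperture and shutter speed, defined at ISO 100 as log2(N²/t), where N is the f-number and t the shutter time in seconds. A quick sketch of the standard formula:

```python
import math

def exposure_value(f_number, shutter_seconds):
    # EV at ISO 100: log2(N^2 / t). Higher EV means a brighter scene
    # (or less exposure reaching the sensor).
    return math.log2(f_number ** 2 / shutter_seconds)
```

As a sanity check, the old "sunny 16" rule (f/16 at roughly 1/ISO seconds in full sun) works out to about EV 15, which matches the textbook value for a sunlit scene.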
Some people like the look; some hate it. The photography-as-art crowd thinks the result is all that matters, and if it is pleasing to the audience, that's fine. The get-it-right-in-camera people believe a photograph should represent exactly what was in the scene, no more and no less.
I fall somewhere in the middle. While I do believe photography is an art form where beauty is in the beholder’s eye, I certainly don’t have the patience to create these massive, multi-exposure HDR masterpieces.
What do you think?