
HDR

A technique that combines multiple exposures of the same scene (typically bracketed at 1- to 2-stop intervals) to capture detail across a wider range of brightness levels than a single exposure can record. The exposures are merged into one high-bit-depth image, which is then compressed for display through tone mapping.

What Is HDR?

High Dynamic Range photography is a capture and processing method designed to overcome a fundamental limitation of camera sensors: they cannot record the full range of brightness that exists in many real-world scenes. The human eye can perceive roughly 20 stops of dynamic range through adaptation, while even the best modern sensors top out at about 14-15 stops in a single exposure. HDR bridges this gap by combining information from multiple exposures into one image that contains detail in both the deepest shadows and the brightest highlights.
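The stop arithmetic behind those numbers is simple: one stop is one doubling of luminance. A quick illustrative helper (a hypothetical function, not from any real library) makes the ratio-to-stops conversion concrete:

```python
import math

def stops_of_range(brightest, darkest):
    """Dynamic range in stops: each stop is one doubling of luminance,
    so the range is the base-2 logarithm of the brightness ratio."""
    return math.log2(brightest / darkest)

# A sunlit scene whose highlights are ~100,000x brighter than its deepest
# shadows spans about 16.6 stops - more than a single exposure can hold.
scene_stops = stops_of_range(100_000, 1)
```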

Think of it like human hearing. A single microphone has a fixed sensitivity range — it clips on loud sounds and misses quiet ones. An audio engineer solves this by recording the same performance with multiple microphones at different gain levels, then mixing the best parts of each recording into a final track that captures both the whisper and the crescendo. HDR photography does precisely the same thing with light instead of sound.

How It Works

The Capture Phase

HDR begins with bracketed exposures — a series of photographs taken at the same composition but different exposure values. A typical HDR set consists of 3 to 7 frames spaced 1 to 2 stops apart. The darkest frame preserves highlight detail (cloud texture, specular reflections, bright sky), while the lightest frame reveals shadow information (deep forest floor, interior corners, underside of bridges).
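The exposure offsets for such a bracket follow directly from the stop arithmetic: each stop doubles the shutter time. A sketch of that calculation, using a hypothetical helper (the function name and defaults are illustrative):

```python
def bracket_shutter_speeds(base_seconds, frames=5, step_stops=2.0):
    """Shutter speeds for an HDR bracket centered on the metered exposure.
    Each stop doubles the exposure time; aperture and ISO stay fixed."""
    half = frames // 2
    return [base_seconds * 2 ** (step_stops * (i - half)) for i in range(frames)]

# A 5-frame, 2-stop bracket around a metered 1/60 s gives offsets of
# -4, -2, 0, +2, +4 stops: 1/960, 1/240, 1/60, 1/15, and ~0.27 s.
speeds = bracket_shutter_speeds(1 / 60, frames=5, step_stops=2.0)
```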

Camera support is critical for clean alignment. A tripod eliminates frame-to-frame shift, though modern HDR software can align handheld brackets with sub-pixel accuracy. Aperture should remain constant across the bracket set — changing it alters depth of field, which complicates merging. Shutter speed is the preferred variable.

The Merge

Merging combines the tonal data from all frames into a single 32-bit floating-point image. This intermediate file contains far more brightness information than any display can show; it is a mathematical representation of the scene's full luminance range. Applications like Lightroom, Photomatix, and Aurora HDR perform this merge automatically, aligning frames, removing ghosting artifacts from moving elements, and producing either a 32-bit TIFF or a high-bit-depth DNG.
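Conceptually, the merge estimates scene radiance from each frame (pixel value divided by exposure time) and blends the trustworthy estimates. A minimal sketch under simplifying assumptions: frames are already linearized, normalized to [0, 1], and aligned (real tools add alignment, deghosting, and calibrated response curves):

```python
import numpy as np

def merge_to_hdr(frames, exposure_times):
    """Merge linear frames into one floating-point radiance map.
    Each pixel's radiance estimate is value/exposure_time, averaged with
    a hat weight that downweights near-clipped and near-black values."""
    num = np.zeros_like(frames[0], dtype=np.float32)
    den = np.zeros_like(frames[0], dtype=np.float32)
    for img, t in zip(frames, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # peak weight at middle gray
        num += w * (img / t)
        den += w
    return num / np.maximum(den, 1e-6)
```

The hat weight is why a bracket beats a single frame: each pixel is reconstructed mainly from whichever exposure rendered it near middle gray, where the sensor data is most reliable.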

Tone Mapping

The 32-bit merged file must be compressed into a displayable range — typically 8-bit (256 tonal levels per channel) for screen viewing or 16-bit for archival editing. This compression is called tone mapping, and it is where HDR processing either succeeds or fails aesthetically.

Tone mapping algorithms fall into two categories. Global operators apply a single curve to the entire image, similar to a levels or curves adjustment. Local operators analyze regions of the image independently, boosting contrast in areas that would otherwise appear flat. Reinhard tone mapping, developed by Erik Reinhard in 2002, was among the first widely adopted algorithms. Modern implementations like those in Adobe Camera Raw use adaptive tone mapping that blends global and local approaches.
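The core of Reinhard's global operator is the compression curve L/(1+L), applied after scaling the image so its log-average luminance lands on a chosen "key" (middle-gray target). A minimal sketch of that global operator:

```python
import numpy as np

def reinhard_global(luminance, key=0.18):
    """Reinhard's global operator: scale so the log-average luminance
    maps to `key` (middle gray), then compress with L / (1 + L).
    Output always lies in [0, 1), ready to quantize to 8 or 16 bits."""
    log_avg = float(np.exp(np.mean(np.log(luminance + 1e-6))))
    scaled = key * luminance / log_avg
    return scaled / (1.0 + scaled)
```

Because the same curve is applied everywhere, no value can exceed 1.0 no matter how bright the input, but local contrast in flat regions is not enhanced; that is the job of the local operators described above.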

Practical Examples

Sunset landscape. The sky at sunset may measure EV 16 while the shadowed foreground sits at EV 6, a 10-stop difference. Capture 5 frames at 2-stop intervals on a tripod. Merge with Lightroom's Photo Merge > HDR, which produces a DNG with expanded tonal range. Apply gentle tone mapping: bring highlights down 60-80 points, lift shadows 40-50 points, and add moderate clarity. The result shows graduated color in the sky alongside a textured, detailed foreground without the artificial look of aggressive processing.
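The bracket choice can be sanity-checked with stop arithmetic: the total range captured is the sensor's own single-frame range plus the stops spanned by the exposure offsets. A hypothetical helper (the 12-stop sensor figure below is an assumption for illustration):

```python
def bracket_coverage(sensor_stops, frames, step_stops):
    """Total dynamic range a bracket can capture: the sensor's
    single-frame range plus the stops spanned by the offsets."""
    return sensor_stops + (frames - 1) * step_stops

# 5 frames at 2-stop spacing on a 12-stop sensor cover 12 + 8 = 20 stops,
# comfortably exceeding the 10-stop sunset scene above.
coverage = bracket_coverage(sensor_stops=12, frames=5, step_stops=2)
```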

Interior architecture. A cathedral interior with stained glass windows presents extreme contrast. The windows may be 12 stops brighter than the nave. Bracket 7 frames at 2-stop intervals. During merging, enable deghosting at medium strength to handle tourists who moved between frames. The merged result reveals stone texture in shadowed columns while preserving the jewel-like color of backlit glass.
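Deghosting works by predicting each frame from a reference frame (after compensating for the exposure difference) and flagging pixels that disagree, since disagreement usually means movement. A simplified sketch, assuming linear frames in [0, 1]; real implementations are considerably more sophisticated:

```python
import numpy as np

def ghost_mask(frame, reference, exposure_ratio, threshold=0.1):
    """Flag pixels where `frame` disagrees with the exposure-compensated
    reference by more than `threshold` - likely subject movement.
    Flagged pixels would be filled from the reference during the merge."""
    predicted = np.clip(reference * exposure_ratio, 0.0, 1.0)
    return np.abs(frame - predicted) > threshold
```

The "strength" setting in merge tools corresponds roughly to this threshold: a lower threshold flags more pixels as ghosts at the cost of discarding more bracket data.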

Forest canopy. Dappled light through trees creates a patchwork of brightness — sunlit leaves at EV 15, shaded trunks at EV 7. Three frames at 2-stop intervals capture the full range. Process with conservative tone mapping to maintain the natural contrast between light and shade that makes forest scenes compelling.

Automotive photography. A polished car body reflects sky brightness while its undercarriage sits in deep shadow. Bracket 3 frames at 1.5-stop intervals. The HDR merge preserves both the mirror-like reflections and the mechanical detail underneath. This is standard practice in commercial automotive photography.

Advanced Topics

Single-Exposure HDR

Modern sensors with 14+ stops of dynamic range enable a technique called single-exposure HDR or pseudo-HDR. By shooting a single RAW file and processing it with aggressive shadow lifting and highlight recovery, photographers can approximate the effect of a merged bracket set. Cameras like the Sony A7R V and Nikon Z9, with their modern backside-illuminated sensors, are particularly capable here: their shadow noise at ISO 100 is low enough that lifting shadows 4-5 stops produces usable results.

This approach has limits. Pushed shadows carry more noise than properly exposed brackets, and highlights that are truly clipped (photosites saturated at the sensor's maximum RAW value, e.g. 16383 for 14-bit files) contain no recoverable detail regardless of sensor quality.
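Whether highlights are truly gone can be checked in the RAW data itself. A sketch, assuming an array of raw photosite values at a known bit depth (the function is illustrative, not part of any RAW-processing library):

```python
import numpy as np

def clipped_fraction(raw_values, bit_depth=14):
    """Fraction of photosites saturated at the sensor's maximum value
    (16383 for 14-bit RAW). Saturated photosites hold no recoverable
    detail, no matter how far highlight recovery is pushed."""
    max_val = (1 << bit_depth) - 1
    return float(np.mean(raw_values >= max_val))
```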

In-Camera HDR

Most modern cameras offer automatic HDR modes that capture and merge brackets internally. The iPhone has used computational HDR since the iPhone 4 (2010), merging frames in milliseconds. Google Pixel devices use HDR+ processing that captures up to 15 frames and selects the best data from each, and Samsung phones apply similar multi-frame merging. Dedicated cameras like the Fujifilm X-T5 offer in-camera HDR with adjustable strength.

In-camera HDR produces JPEGs, not RAW files, which limits post-processing flexibility. For critical work, manual bracketing with RAW capture and desktop merging remains the standard.

The Grunge HDR Problem

Between 2006 and 2012, aggressive HDR processing became a widespread trend. Photographers pushed local tone mapping to extremes, producing images with halos around high-contrast edges, unnaturally saturated colors, and a hyper-detailed, painterly look that bore little resemblance to reality. This style, sometimes called “grunge HDR,” gave the technique a reputation for producing garish, unrealistic images.

The backlash was strong enough that many photographers abandoned HDR entirely. The technique itself was never the problem — the over-application of local tone mapping was. Modern HDR processing emphasizes subtlety: the goal is an image that looks like what the eye saw, not a special effect.

Display-Referred HDR

HDR display technology has added a new dimension. Monitors and phones with HDR10 or Dolby Vision support can display up to 1,000-4,000 nits of peak brightness, compared to 100-500 nits for standard displays. Photographers can now create images that take advantage of this expanded display range, producing specular highlights that genuinely glow and shadow gradations that remain visible. Apple’s ProRAW format and Adobe’s HDR output in Lightroom are early tools for this emerging workflow.

ShutterCoach Connection

ShutterCoach examines your images for signs of dynamic range limitations — clipped highlights in skies, crushed shadows in foregrounds, or loss of detail in high-contrast transitions. When these issues are detected, the feedback explains whether HDR capture would have preserved the missing information and suggests specific bracket settings based on the estimated scene contrast, guiding you toward natural-looking HDR results rather than over-processed effects.

See how ShutterCoach evaluates HDR in your photos

Get instant AI feedback on your photography, including detailed analysis of technical factors like dynamic range and exposure.

Download ShutterCoach