Luminance levels detectable by our eyes range from starlight
(say 0.00001 candelas per square meter) to bright sun (say 100,000,000 cd/m2).
That's a range of 10,000,000,000,000 to 1. However, at any one time, the
retinas of our eyes, film of a film camera, and sensor of a digital camera can
only operate over a small part of that huge range -- you cannot see starlight
when your eyes are adapted to sunlight. Our eyes and cameras
use an aperture (and sunglasses or filters) to attenuate incoming light, providing
access to luminance levels brighter than what their sensor can handle, but at the cost
of losing simultaneous access to dimmer parts of the scene -- as
illustrated by the photos above. Whenever the light level exceeds the dynamic range
of the camera's CCD sensor, the sensor will either bottom out (record 'black') or
saturate (record 'white').
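A minimal sketch of that clipping behavior, using hypothetical luminance values and an assumed sensor range of 250:1:

```python
def sense(luminance, floor=1.0, ceiling=250.0):
    """Simulate a sensor with limited dynamic range (here 250:1).

    Luminance below the floor bottoms out as 'black';
    luminance above the ceiling saturates as 'white'.
    """
    if luminance <= floor:
        return "black"      # bottomed out: shadow detail is lost
    if luminance >= ceiling:
        return "white"      # saturated: highlight detail is lost
    return luminance        # within range: recorded faithfully

# A scene whose range (0.5 to 1000, i.e. 2000:1) exceeds the sensor's 250:1:
readings = [sense(v) for v in (0.5, 10.0, 100.0, 1000.0)]
print(readings)   # ['black', 10.0, 100.0, 'white']
```

Both ends of the scene's range are clipped; only the middle two values survive.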
I haven't tracked down primary sources for these, but here are often-quoted typical estimates
of maximum dynamic ranges of light sources (scenes):
Sun-lit scene         100,000 : 1
Projected slide         2,500 : 1
Projected negative        250 : 1
LCD projector             200 : 1
Glossy print               60 : 1
CRT display                50 : 1
Newsprint                  10 : 1
... and here are often-quoted typical estimates of maximum dynamic ranges of light sensors:
Eye                 Perhaps >10,000 : 1
Slide film          250 : 1
Negative film       Greater than slide film, say many sources
Digital camera      Greater than film, reportedly
High-end scanner    Similar to film, reportedly
The maximum dynamic range of the light that can be captured by slide film
is 250:1; interestingly, that apparently produces a slide which, when projected, can generate
light levels with a range of 2500:1 (ie., the slide itself has greater dynamic range than the light
range it can respond to). Apparently our eyes do the reverse when dark-adapted (ie., when in a
dark room looking at slides), so projected slides look OK.
(This may explain why photographing slides with ordinary film produces high-contrast results.
Also, this means that capturing all the levels recorded in a slide requires a scanner
with a larger dynamic range than that of a camera, and the values need to be
compressed to cancel out the dynamic range expansion introduced by the slide.)
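One back-of-envelope way to quantify that expansion (my own framing, not from a source): if the slide's response is modeled as a power law, the exponent that stretches a 250:1 captured range into a 2500:1 projected range is:

```python
import math

captured_range = 250.0    # dynamic range slide film can respond to
projected_range = 2500.0  # dynamic range a projected slide can emit

# Modeling the slide's response as output = input ** gamma implies
# projected_range = captured_range ** gamma, so:
gamma = math.log(projected_range) / math.log(captured_range)
print(round(gamma, 2))   # ~1.42 -- an expansion, since gamma > 1
```

A dark-adapted eye would then need a compensating exponent below 1 for the projected slide to look right.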
If an image is displayed on a CRT display or print, the perceived image will have a maximum
dynamic range dependent on the capabilities of that medium (50:1 or 60:1, for a CRT or
photographic paper respectively). Common techniques for mapping an image with a dynamic range
larger than that of a display or print include 'gamma correction'
(non-linear mapping), 'curves' (eg., Photoshop), or (most directly) Photoshop's
Shadow & Highlight tool. Ansel Adams did it using 'zones' and darkroom dodging/burning.
All are manipulations (mappings) of ranges of the source image into the light levels that
can be produced by a display medium (eg., CRT or photographic paper).
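A minimal sketch of the gamma-correction idea, with assumed values (real tools operate on per-channel 8- or 16-bit data):

```python
def gamma_map(value, in_max, gamma=0.45, out_max=255):
    """Map a linear luminance value in [0, in_max] into a display
    range [0, out_max] with a non-linear (power-law) curve.

    gamma < 1 lifts shadows, compressing a wide input range into
    the narrow range a display can reproduce.
    """
    normalized = value / in_max           # 0.0 .. 1.0
    return round(out_max * normalized ** gamma)

# A deep shadow at 1% of scene maximum maps to ~32/255 rather than ~3/255:
print(gamma_map(0.01, in_max=1.0))   # 32
print(gamma_map(1.0, in_max=1.0))    # 255
```

Photoshop's curves tool generalizes this: any monotonic curve, not just a power law, can serve as the mapping.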
So now we can explain what happened in the Mesa Verde scene above. The scene
has a high dynamic range; deep shadows and bright sunlight. The dynamic range may be
challenging even to an eye, but perhaps not noticed at the time because the eye can automatically
adjust as we gaze at different parts of the scene. But when we take a photo, the camera's
dynamic range (smaller than an eye's) could be overwhelmed; there is no aperture setting that can
shift the camera's limited dynamic range to a point that covers the larger range of that scene,
so there is clipping to either white or black. However, we can cover the full range of the
scene by taking multiple photos, one at the bright end (but losing the shadows to black) and another
at the dark end (but losing the brights to white).
Let's take the 'good parts' of the two images above and merge them into one.
The image below is stitched together from the two images above, using Photoshop, by
dividing each image at the shadow line and then using the best half from each. This is
a time-consuming manual method of capturing the light of a scene with high dynamic range
by taking multiple photos while adjusting the camera's aperture so that each photo
captures a different portion of the scene's high dynamic range, then manually selecting
the 'good bits' from each to map them into a narrower range (that of the CRT).
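The selection step can be sketched as a per-pixel rule (hypothetical 8-bit values; the actual manual method followed the shadow line rather than testing each pixel):

```python
def merge_exposures(dark_exposure, bright_exposure, threshold=16):
    """For each pixel, prefer the darker exposure (which preserves
    highlights) unless it bottomed out near black, in which case
    take the pixel from the brighter exposure (which preserves shadows).
    """
    return [dark if dark > threshold else bright
            for dark, bright in zip(dark_exposure, bright_exposure)]

# The dark exposure keeps the sunlit rock; the bright one keeps the shadows:
dark   = [240, 200, 3, 1]     # shadows clipped to near-black
bright = [255, 255, 90, 60]   # highlights blown to white
print(merge_exposures(dark, bright))   # [240, 200, 90, 60]
```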
Cliff Palace, Mesa Verde, Colorado, USA
The result above looks on my CRT more like the scene did to my eye, in my memory.
But here's a surprise! It turns out that the camera was able to capture
the dynamic range of the scene above -- the histogram of values within the image below shows that
the scene fell within the dynamic range of the camera (there are no peaks at all-black
or all-white that would indicate clipping):
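The check that the histogram makes visible can be sketched as follows (assumed 8-bit pixel values; a real check would inspect a small band near each end, not just the exact extremes):

```python
from collections import Counter

def clipping_fractions(pixels):
    """Return the fraction of pixels at pure black (0) and pure white (255).

    Large fractions piled up at either extreme suggest the scene
    exceeded the sensor's dynamic range and detail was clipped.
    """
    counts = Counter(pixels)
    n = len(pixels)
    return counts[0] / n, counts[255] / n

# An image with no pile-up at the extremes -- the scene fit the sensor:
pixels = [5, 40, 90, 130, 180, 220, 250, 250]
black_frac, white_frac = clipping_fractions(pixels)
print(black_frac, white_frac)   # 0.0 0.0
```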
Thus the problem is mapping the dynamic range of the camera to the lesser range
of a CRT (my CRT). The image below
is created by dividing the photo at the shadow line and then applying a mapping
(using Photoshop's curve tool) to shift the levels of the dark half higher. The histogram
of the resulting image reflects the shift (the two left-most peaks have shifted right, into
the zone best displayed by the CRT).
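A sketch of that shadow lift (a simple piecewise-linear curve of my own devising, not Photoshop's actual algorithm): values below a threshold are shifted upward, values above it pass through unchanged.

```python
def lift_shadows(value, shadow_max=64, lift=48):
    """Piecewise-linear 'curve': values below shadow_max are shifted
    upward (the left-most histogram peaks move right), while values
    at or above shadow_max pass through unchanged.
    Continuous at the joint: shadow_max maps to itself.
    """
    if value < shadow_max:
        return round(lift + value * (shadow_max - lift) / shadow_max)
    return value

print(lift_shadows(0))     # 48  -- deep shadow lifted well above black
print(lift_shadows(32))    # 56
print(lift_shadows(200))   # 200 -- highlights untouched
```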
The dynamic range of the camera's CCD was able to capture the
range in the scene. However, the image as displayed on my CRT does not appear as the scene did
to my eye when I was there -- the shadowed objects are too dark. By using Photoshop to
make adjustments, I supply information about the scene not captured or computed in
the sequence of camera to CRT, with an outcome that looks on a CRT like the scene
did to an eye.
With that, I now have a story to explain why images on my CRT
display or on photographic prints
don't always look as they did when I was gazing upon the original scene -- basically, my eye
has a higher dynamic range than the combination of my digital camera and my display media.
I look forward to cameras and especially display media with higher dynamic range.
Photoshop CS (version 8) introduced a function Adjustments->Shadow/Highlight
that automates the above process. Below is the same image as above after having been
processed with Shadow/Highlight. The result (image and histogram) is similar
to the manually-produced image, but without the considerable amount of tedious work!
Photoshop CS3 (version 10) introduced Merge to HDR (merge to High Dynamic Range) that
can merge several images which differ only in exposure level to create a single image with high
dynamic range. Of course to be viewed, this HDR image then needs to be mapped to the
narrower range of typical display media; CS3 offers several ways of doing that all-important mapping.
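A toy sketch of the two-stage idea (not Photoshop's actual algorithm): estimate scene radiance from several exposures, then compress the result into display range with a simple global tone-mapping operator (the x / (1 + x) curve is a common textbook choice).

```python
def merge_to_hdr(exposures, exposure_times):
    """Average per-pixel radiance estimates (pixel / exposure_time)
    from several aligned exposures into one high-dynamic-range value.
    A real merger would weight out clipped pixels; this toy does not.
    """
    merged = []
    for pixels in zip(*exposures):
        radiances = [p / t for p, t in zip(pixels, exposure_times)]
        merged.append(sum(radiances) / len(radiances))
    return merged

def tone_map(radiance, out_max=255):
    """Compress unbounded radiance into [0, out_max] with x / (1 + x)."""
    return round(out_max * radiance / (1.0 + radiance))

short_exp = [200, 20, 2]    # 1/100 s -- holds the highlights
long_exp  = [255, 200, 20]  # 1/10 s  -- holds the shadows
hdr  = merge_to_hdr([short_exp, long_exp], [0.01, 0.1])
peak = max(hdr)
# Scale by an arbitrary brightness factor (8, chosen by eye) before compressing:
display = [tone_map(8 * r / peak) for r in hdr]
print(display)   # [227, 150, 32] -- highlights, midtones, and shadows all in range
```

Highlights, midtones, and shadows all land within the display's range, at the cost of compressing the overall contrast.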
In 2009, the job of mapping a scene with high dynamic range to a narrower range was taken on by
a camera. Sony's α550 DSLR camera offers 'Auto HDR', where two photos are taken in rapid succession and then merged using an HDR algorithm.
(Corrections and improvements appreciated.)