
FIGURE 1-1 Jagged Mountain reflected in an unnamed lake, Weminuche Wilderness, Colorado

1 Landscape Photography Looks So Easy

Making a great landscape photograph seems simple. After all, a camera is just like your eye, right? Your eye has a lens that forms an image on your retina, which is how you see. A camera has a lens that forms an image on film or a digital sensor, and that becomes the photograph. What can be so hard? And yet I suspect that nearly all photographers have had this experience: you go hiking. You see something that moves you profoundly. You point the camera in exactly the same direction you were looking when you had that emotional experience and press the shutter release. Later you download the image to your computer so you can see it full-screen in all its glory, confident that you just made the best image of your career, and your reaction is, “Oh. I didn’t think it would look like that.” Why did the photo fail?

An anecdote here will give us a clue. A cataract is a defect of the eye that renders the lens opaque, so that no clear image reaches the retina. At the beginning of the 20th century, eye surgeons developed a way to correct congenital cataracts. People who had been blind from birth suddenly, as adults, could “see.” Or could they? They had a clear, sharp image falling on their retinas, but they were still functionally blind. They mistook shadows for solid objects; they couldn’t recognize common objects when seen from unusual angles. They had trouble recognizing faces. Learning to read was often intensely difficult. Many gave up and resumed living as blind people soon after the surgery. In one case, the patient could almost immediately recognize by sight some objects he had learned by touch while he was blind, but recognizing other objects was much more difficult. By itself, an image of an object formed on the retina is inherently ambiguous, since it could be a large object some distance away or a similarly shaped object that is much closer. An image can move across the retina because the object is in motion, the viewer is in motion, or both. The brain does a tremendous amount of processing on that flat, ambiguous image to create the perception of a stable, three-dimensional, emotionally meaningful world.

Clearly, the formation of an image on the retina is just the very beginning of seeing. In a similar way, making an evocative photograph involves a lot more than pointing the camera toward the subject and snapping the shutter—it must be an insightful, deliberate act. The photograph must provide all the visual clues necessary for the image to have impact.

So let’s go back to the initial problem: you go for a hike, see something beautiful, snap the shutter, and the picture is a disappointment. Somehow the experience of viewing the image on your monitor or in a print is not the same as viewing the real thing.


FIGURE 1-2 Claret cup cactus and dwarf evening primrose, Salt Creek, Needles District, Canyonlands National Park, Utah

Viewing a Print vs. Viewing Reality

As I analyze it, there are seven ways in which viewing a print differs from viewing reality. Let’s take each in turn.

1. Depth perception: We use many clues to figure out where things are in relation to other objects, both in a print and in reality. We use relative size: things look bigger when close than when far away. We use overlapping: if an object overlaps and partially obscures another object, the first object must be closer. We use the convergence of parallel lines: this is clearly seen, for example, in the way railroad tracks appear to converge in the distance. We use the pattern of light and shade: for example, we can only distinguish a sphere from a flat circle by the way sidelight reveals the sphere’s three-dimensional form. We use atmospheric perspective: distant objects appear bluer, hazier, and a bit less sharp than closer ones. All of these depth clues operate both in the real world and in a photograph, but two crucial clues do not: binocular vision and motion parallax. Binocular vision simply means that we have two eyes, which see nearby objects from slightly different angles. The image formed by an object on our right retina is therefore slightly different from the image formed on the left. Our visual system fuses those two images and gives us the perception of depth. Motion parallax refers to the way the relative position of two objects changes when we move our heads. For example, two objects that overlapped may no longer overlap when we move and see the scene from a different angle.

Both binocular vision and the lack of motion parallax tell us instantly that a photographic print is flat. If we want to create the illusion of depth, which is usually desirable in a landscape photograph, we have to work very hard to maximize the remaining depth clues. Once you’ve had that initial emotional reaction to a scene and you decide to take a photograph, you have to slow down and consciously construct an image that will appear to have depth. You can’t assume that the viewer will see depth in your print just because you saw depth while taking the picture. This may seem obvious—“Of course prints are flat!”—but it’s easy to forget this fact in the excitement of shooting the photo. I’ll talk more about creating a sense of depth in chapter 5.

2. Limited dynamic range: Our eyes can see a range of brightness, from brightest highlights to darkest shadows, that corresponds to about 13 to 14 f-stops. Transparency film is sensitive to a range of about five stops. Less expensive DSLRs can register maybe six stops. Both my Canon 1Ds Mark III and 5D Mark III can register nine stops at the extreme limits. However, you can only get a range of about 5½ stops from any kind of print, whether inkjet or traditional wet darkroom. As I mentioned in the introduction, one of the fundamental problems in landscape photography is learning how to compress the very broad range of tones we observe in the real world into the much narrower range of tones we can reproduce in a print. You can see rich, colorful detail in both the shadowed flowers at your feet and the glowing clouds at sunset, but your sensor probably can’t. If you don’t take the limited dynamic range of your sensor into account, you may find that your highlights have washed out and your shadows have gone black. We’ll tackle this problem from a variety of directions in chapters 6 and 7.
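To put these stop counts in perspective, each additional stop doubles the amount of light, so a range of n stops corresponds to a contrast ratio of 2^n to 1. A quick back-of-the-envelope sketch, using the approximate figures quoted above:

```python
# Each stop doubles the light, so an n-stop range spans a 2**n : 1 ratio
# between the brightest highlight and the darkest shadow.
def contrast_ratio(stops: float) -> float:
    """Return the brightest:darkest luminance ratio for a given stop range."""
    return 2 ** stops

# Approximate dynamic ranges discussed in the text.
for medium, stops in [("human eye", 13), ("transparency film", 5),
                      ("high-end DSLR sensor", 9), ("print", 5.5)]:
    print(f"{medium}: {stops} stops ~ {contrast_ratio(stops):,.0f}:1")
```

Running this makes the gap obvious: a 13-stop scene spans a contrast ratio of roughly 8,000:1, while a 5½-stop print manages only about 45:1. That is the compression problem in a nutshell.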


FIGURE 1-3 Clearing storm over Mt. Alice, Rocky Mountain National Park, Colorado

3. Limited sensory input: You can’t smell the flowers in a photograph. When you’re standing there in the field, all your senses are working, not just your vision. You can hear the birds singing and feel the warmth of the sun and the cool freshness of the wind on your face. You can hardly ignore the ache in your legs after hiking for hours to that scenic overlook. The viewer of your print can only use their vision to take in the scene, which means the visual content must be very strong all by itself to convey the emotion you felt while taking the picture. When you are standing there composing the photograph, you need to consciously block out all the nonvisual sensations and ask yourself if the image you see through your viewfinder can create the effect you desire all by itself.


FIGURE 1-4 Columbine in Ruby Basin at sunset, with Ruby Lake and the Twilight Peaks in the distance, Weminuche Wilderness, Colorado

4. Brightness constancy: Your eyes have a property called brightness constancy. Brightness constancy is the ability of your brain to see objects as having the same brightness regardless of the level of ambient illumination, so long as the ratio of brightness values in the scene is constant. For example, your eyes see snow as a bright white, or something pretty close to white, regardless of the brightness of the ambient light. Obviously cameras have exposure meters to vary the exposure depending on the level of ambient illumination. Most of the time those meters work pretty well. However, in the case of snow, they can fail without some conscious input from the photographer. Meters are designed to measure the brightness of scenes that reflect an average of 18 percent of the light falling on them, and then to expose those scenes “properly,” which, to the camera, means a medium brightness midtone, or, in black-and-white terms, a middle gray. New-fallen snow, however, can reflect 90 percent of the light falling on it. Trusting your in-camera meter when photographing snow will result in gray, midtone snow. Since snow is normally about two stops brighter than rock, everything else in the scene, such as rocks, trees, and your friend’s face, will be underexposed by two stops.


FIGURE 1-5 Looking south from the summit of 14,018-foot Pyramid Peak at sunrise, Maroon Bells-Snowmass Wilderness, Colorado


FIGURE 1-6 Chair Mountain from Huntsman Ridge, proposed Hayes Creek wilderness area, White River National Forest, Colorado

Your eye easily compensates for the varying brightness of the light falling on snow when viewing the real thing, but it will not make the same correction when viewing a print in which the snow is grossly underexposed, because your eye calibrates itself to the average illumination in the room where you’re viewing the print. For your eye to perceive the snow in the print as white rather than gray, about four times as much light has to be reflected from the snow as from the midtone wood paneling of the wall where the print is hung. Put another way, your visual system will perform corrections on the real scene that it will not perform when viewing a print. You never see medium-gray snow in the real world, but it’s quite easy to see medium-gray snow in a photograph. That gives us a clue about how to expose snow: take the lightest tone and make it white. In other words, meter the brightest snow and open up about two stops. This is another topic we’ll revisit in much greater depth in chapters 6 and 7.
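The two-stop rule is simple doubling arithmetic: each stop of added exposure doubles the light reaching the sensor. A minimal sketch, holding aperture and ISO fixed and adjusting only shutter speed (the 1/500-second metered reading is a made-up example):

```python
# "Open up two stops" applied to shutter speed: each stop of positive
# compensation doubles the exposure time (aperture and ISO held fixed).
def compensate_shutter(metered_seconds: float, stops: float) -> float:
    """Return the shutter speed after applying +stops of exposure compensation."""
    return metered_seconds * 2 ** stops

# The meter wants to render bright snow as a midtone, say at 1/500 s.
metered = 1 / 500
corrected = compensate_shutter(metered, 2)  # open up two stops
print(f"metered 1/500 s -> corrected 1/{round(1 / corrected)} s")  # 1/125 s
```

Opening up two stops quadruples the exposure, so a metered 1/500 second becomes 1/125 second, and the snow renders as white rather than middle gray.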

5. Color constancy: Our eyes do not see color the same way that a camera’s sensor does. Both sensors and our eyes are sensitive to light with a wavelength between 400 and 700 nanometers. Sensors see color in a perfectly straightforward way: light of 700 nanometers is recorded as red, while light of 400 nanometers is recorded as violet (not purple). Light of other wavelengths is recorded as the colors in between. Our visual system does not see color this way. If it did, the colors of objects would appear to shift every time the color of the illumination shifted, from yellowish tungsten light to greenish fluorescent light to red sunrise light to white noon daylight. Our visual system actually constructs color in two ways: first, by comparing the wavelengths of light coming from different parts of the scene; and second, by comparing what we are seeing to our internal database of what we know the colors should be. We tend to accept the overall color of the scene’s illumination as white, regardless of the actual color of light, so we see a white sheet of paper as white regardless of the color of the illumination. This is called color constancy.

Color constancy is something to guard against when you are photographing a subject that is in the shade on a clear day. The light illuminating those shadows comes from the blue sky and will be rendered as intensely blue in your image. Photograph vivid yellow flowers or bright yellow aspen leaves in the shade on a clear day, and you’ll get sickly greenish-yellow flowers and leaves. Your eyes will color-correct an object in the real world if everything in your field of view is lit by blue light, but they won’t color-correct a photograph of that object because they are already calibrated to the ambient illumination of the room in which you are viewing the print. If you’re shooting closeups, you can, of course, change the white balance setting on your camera to shade or cloudy. If you’re shooting grand landscapes, however, where the background of your subject is lit by warm sunrise or sunset light and the foreground is in shade, changing the white balance may give the sunlit portion of the scene an odd color cast.


FIGURE 1-7 King’s crown and skunk cabbage, Cumberland Basin, La Plata Mountains, Colorado

Color constancy can also catch you off guard if you shoot a closeup of purple and blue flowers lit by direct sunrise light. Since everything in your field of view is lit by warm light, your visual system tends to ignore the light’s true color and see the flowers as if they were lit by relatively white light. Purplish-blue columbine can be rendered with such a strong magenta cast that you may wonder if you’ve just discovered a new species.


FIGURE 1-8 Careful camera placement and composition let me emphasize the flowers and mountains while minimizing the gray talus in the mid-ground in this image of a clearing storm over the Needle Mountains, Weminuche Wilderness, Colorado

6. Clutter: We think that we see the world by taking it all in with one big gulp. Indeed, our peripheral vision has an enormous field of view: about 180 degrees left to right and about 130 degrees top to bottom. But that’s not actually how we examine the world. We see sharply only within an extremely limited angle of view, because the region of the retina where the receptors are small enough and packed densely enough to see sharply is very small. This region of the retina is called the fovea. Foveal vision has an angle of view of only one or two degrees. That’s roughly equivalent to a 1000mm to 2000mm telephoto lens. When viewing the world, our eyes fixate on a point of interest for about 300 milliseconds, then jump to the next point of interest. These jumps, called saccades, are very fast—perhaps 25 to 45 milliseconds—and no real perception occurs during the movement. Our eyes jump around constantly, pausing briefly at regions of interest and skipping everything else.
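The fovea-to-telephoto comparison can be checked with the standard angle-of-view formula, aov = 2 · arctan(w / 2f), where w is the sensor width and f the focal length. A rough sketch, assuming a full-frame sensor 36mm wide:

```python
import math

# Horizontal angle of view of a lens of focal length f on a sensor of width w:
#   aov = 2 * atan(w / (2 * f))
def horizontal_aov_degrees(focal_length_mm: float,
                           sensor_width_mm: float = 36.0) -> float:
    """Return the horizontal angle of view, in degrees, on a full-frame sensor."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

for f in (1000, 2000):
    print(f"{f}mm lens: {horizontal_aov_degrees(f):.1f} degrees")
```

A 1000mm lens covers about 2.1 degrees horizontally and a 2000mm lens about 1.0 degree, which matches the one-to-two-degree span of foveal vision.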

Cameras have no such ability. For example, let’s say you’re standing at the edge of a field of wildflowers and there’s a magnificent mountain, bathed in sunset light, rising above your head. Unless you consciously train yourself to do otherwise, your eyes will jump from flower to flower, skipping over all the greenery in between, and will then jump all the way up to the mountaintop. You may not realize that there are really only a few flowers at your feet and that boring gray talus fills the middle third of your picture. When you look at a print of the same scene, your visual system does not perform a similar decluttering. We still examine the print using saccadic eye movements, but the effect is different. One reason for this may be physiological: when viewing the real world, our eyes have to swing through a large arc to go from the flowers at our feet to the mountain high above. When viewing a typical print from a typical viewing distance, our eyes travel through a much smaller arc, allowing us to observe every detail. Another reason for this may be cultural. When we view a print, it is typically framed and hung on a wall. It is being presented to us as something worthy of close inspection, so we tend to look at it more carefully. Regardless of the reason, it is certainly true that our visual system will skip over clutter in the real world that it will not skip over in a photograph.

7. Focus: Our eyes focus and refocus so rapidly as we scan a scene that we are rarely conscious of the process. As a result, everything we see normally looks sharp. (Okay, middle-aged photographers like me who wear glasses with progressive lenses might beg to disagree, but in general it’s true.) A carelessly snapped photograph, however, may or may not have sufficient depth of field to create a convincing illusion of reality. Our eyes obviously cannot correct blurry areas of a photograph and make them sharp.


FIGURE 1-9 Mount of the Holy Cross, Indian paintbrush and lupine, Shrine Ridge, near Vail Pass, White River National Forest, Colorado

Clearly, creating an evocative landscape photograph is not as easy as it first appears. As we’ve all experienced, capturing what you see is easy—just put the camera to your eye and press the shutter release. Capturing what you feel, however, is harder. Hardest of all is capturing what you feel in such a clear and compelling manner that your image causes the viewer to experience the same emotion you felt when you took the photograph. That’s when photography can become an art.
