Understanding Photography: Master Your Digital Camera and Capture That Perfect Photo
This comprehensive, heavily illustrated volume introduces the concepts and techniques of digital image capture, including exposure, composition, histograms, depth of field, advanced lighting, lens filters, shutter speed, and autofocus.

Understanding Photography will teach you the core concepts that underlie the magic of digital photography with highly visual, clear, and comprehensive explanations. Topics covered include the fundamentals of exposure, how lens choice affects creative control, digital image characteristics, and how to make the most of natural light.

You'll learn:
- Basic concepts in photography like camera metering, depth of field, and the rule of thirds
- Features specific to digital cameras like bit depth, digital sensors, and image noise
- How to control perspective and style using lenses
- How to use equipment and accessories to enhance capture
- Advanced exposure techniques for photographing in fog, mist, or haze
- Techniques for improving hand-held shots and mastering autofocus

If you yearn to understand the digital photography hobby at a deeper level, or you simply want to take better photos, Understanding Photography is a must-have resource.



Master Your Digital Camera and Capture That Perfect Photo


San Francisco

Understanding Photography: Master Your Digital Camera and Capture That Perfect Photo. Copyright © 2019 by Sean T. McHugh.

All rights reserved. No part of this work may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage or retrieval system, without the prior written permission of the copyright owner and the publisher.

ISBN-10: 1-59327-894-2

ISBN-13: 978-1-59327-894-6

Publisher: William Pollock

Production Editor: Laurel Chun

Cover and Interior Design: Mimi Heft

Cover Photography: Sean T. McHugh

Developmental Editor: Annie Choi

Technical Reviewers: Jeff Carlson and Richard Lynch

Copyeditor: Barton D. Reed

Proofreader: Paula L. Fleming

Compositor: Danielle Foster

For information on distribution, translations, or bulk sales, please contact No Starch Press, Inc. directly:

No Starch Press, Inc.

245 8th Street, San Francisco, CA 94103

phone: 415.863.9900

Library of Congress Cataloging-in-Publication Data:

Names: McHugh, Sean (Sean T.), author.

Title: Understanding photography : master your digital camera and capture

that perfect photo / Sean T. McHugh.

Description: San Francisco : No Starch Press, Inc., [2019].

Identifiers: LCCN 2018036966 (print) | LCCN 2018044393 (ebook) | ISBN 9781593278953 (epub) | ISBN 1593278950 (epub) | ISBN 9781593278946 (print) | ISBN 1593278942 (print) | ISBN 9781593278953 (ebook) | ISBN 1593278950 (ebook)

No Starch Press and the No Starch Press logo are registered trademarks of No Starch Press, Inc. Other product and company names mentioned herein may be the trademarks of their respective owners. Rather than use a trademark symbol with every occurrence of a trademarked name, we are using the names only in an editorial fashion and to the benefit of the trademark owner, with no intention of infringement of the trademark.

The information in this book is distributed on an “As Is” basis, without warranty. While every precaution has been taken in the preparation of this work, neither the author nor No Starch Press, Inc. shall have any liability to any person or entity with respect to any loss or damage caused or alleged to be caused directly or indirectly by the information contained in it.

To everyone who has helped a friend or partner follow their passions, even though that journey didn’t always have a clear destination



Sean T. McHugh is the founder and owner of Cambridge in Colour, an online learning community for photographers, and he was formerly head of product at a leading digital camera company. A scientist by training, he is fascinated by the interaction between technological developments and the range of creative options available to photographers. He has conducted several student workshops on general camera and DSLR technique.


This book would not have been possible without all the feedback and careful reading from photographers, specialists, and many others who have visited the site over more than a decade. Thank you.

Thanks to Bill Pollock, Annie Choi, Laurel Chun, Mimi Heft, Bart Reed, Paula Fleming, and Danielle Foster for editing and producing this book. Thanks also to Jeff Carlson and Richard Lynch for their technical review.


VISION IS PERHAPS THE SENSE we most associate with reality. We are therefore more conscious of what we see than how we see, but that all has to change when you learn photography. You have to augment your own eyesight with a deeper understanding of what is both technically possible and creatively desirable. Such an understanding is built not by following instructions for a specific camera model but by becoming fluent in the language of photography.


Before you dig in, read through this introduction to start building a foundational understanding of light and its interaction with the key components of a photographic system. Each of these topics is discussed in more depth in subsequent chapters.


Let’s start with the concept of light. After all, photography is essentially the process of capturing and recording light to produce an image. What we commonly refer to as light is just one type of electromagnetic radiation, a spectrum that spans a wide range of phenomena, from X-rays to microwaves to radio waves. The human eye sees only a narrow range called the visible spectrum, which fits between ultraviolet and infrared radiation and represents all the colors in a rainbow (see FIGURE 1).

FIGURE 1 Qualitative depiction of the visible spectrum among other invisible wavelengths in the electromagnetic spectrum (not to scale)

If you were creating a painting, the visible spectrum would represent all the colors of paint you could have in your palette. Every color and shade that you have ever seen, from the most intense sunsets to the most subtle nightscapes, is some combination of light from this spectrum. Our eyes perceive this color using a combination of three different color-sensing cells, each of which has peak sensitivity for a different region in the visible spectrum. Contrary to common belief, color is actually a sensation just like taste and smell. Our sensitivity to color is uniquely human, and it’s a potent photographic tool for evoking emotions in the viewer.


The first time light interacts with your photographic system is usually when it hits the front of your camera’s lens. Your choice of lens is one of the first creative choices that irreversibly sculpts the recorded image (FIGURE 2, left). If the colors of light are your paint, then the lens is your paint brush.

In reality, lenses are far more powerful than that. For example, they can influence the viewer’s sense of scale and depth, control the angle of view, or isolate a subject against an otherwise busy background. Lenses often receive less attention than camera bodies, but a good lens often outlives a good camera and thus deserves at least as much attention. Your lens choice can also have more of an impact on image quality than anything else in your photographic system.

The camera’s lens effectively takes an angle of view plus a subject of focus and projects that onto the camera’s sensor. Although often only the outer portion of glass is visible, modern lenses are actually composed of a series of carefully engineered, specialty glass elements that precisely control incoming light rays and focus them with maximal fidelity (FIGURE 2, right). For this reason, photographers often refer to having a good lens as having “good glass.”

FIGURE 2 Digital SLR camera with a variety of lenses (left). Cross section of a lens showing internal glass elements (right).


After light passes through your lens, it hits the digital sensor (FIGURE 3, left), which is what receives light and converts it into an electrical signal (with film cameras, the sensor is instead an exposed rectangular frame on the film strip but is otherwise very similar physically). If light is your paint and the lens is your paint brush, then the sensor can be thought of as your canvas. However, just as with the lens, the sensor determines far more than just the window onto your scene; it controls how much detail you’ll be able to extract, what lenses you’ll be able to use and the effect they’ll have, and whether dramatic lighting can be fully recorded—from the deepest shadows to the brightest highlights. In today’s highly competitive camera market that features cameras with well-tailored user interfaces and ergonomics, a good sensor is often what determines a good camera.

FIGURE 3 Camera sensor exposed underneath a digital SLR lens mount (left). Qualitative illustration of red, green, and blue photosites within a tiny portion of this sensor if magnified 1000× (right).

Most camera sensors try to approximate the result of the three color-sensing cells in our eyes with their own variety of color-sensing elements, or photosites—usually of the red, green, and blue variety. The most common is the Bayer array, which includes alternating rows of red/green and green/blue photosites (FIGURE 3, right). Data from each of these photosites is later combined to create full-color images by both your camera and photo-editing software. The lens actually projects a circular image onto the sensor, but due to typical display and print formats, the sensor only records a central rectangular window from this larger imaging circle.
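The alternating-row layout described above can be made concrete with a short sketch. This is my illustration, not the book's; the RGGB tiling shown is the most common Bayer variant:

```python
# A sketch (not from the book) of the repeating 2x2 tile of a Bayer
# array -- alternating rows of red/green and green/blue photosites.
def bayer_color(row, col):
    """Return the color filter at a given photosite in an RGGB Bayer array."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Print a small 4x4 patch of the mosaic:
# R G R G / G B G B rows repeat across the whole sensor.
for r in range(4):
    print(" ".join(bayer_color(r, c) for c in range(4)))
```

Note that each 2×2 tile contains two green photosites; green sits near the middle of the visible spectrum, where our eyes are most sensitive to fine detail.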


The camera body is what mechanically ties everything together, but just as importantly, it also electrically and optically coordinates all the components in your photographic system (FIGURE 4). Unlike the other components, though, the camera body has no close analogue in a painter’s tool kit, except perhaps an extremely mobile canvas and easel. Portability is therefore where modern photography truly shines; you can often take your camera with you everywhere, allowing you to think about your everyday surroundings in a whole new way.

FIGURE 4 The outside of a digital SLR camera (top) along with an illustration of the key internal components (bottom)

The camera body primarily controls or assists with the following: how much light is received, where and how to focus, how image information is stylized and recorded, and how an image is previewed. Camera bodies also come in many varieties, depending on your preference for image quality, mobility, price, and lens compatibility. Common types include smartphone or compact cameras, DSLR (digital single-lens reflex) cameras, and mirrorless cameras to name a few (FIGURE 5). This book focuses on DSLR and mirrorless cameras, but you can apply most of the concepts discussed here universally.

Other important but optional components of a photographic system include tripods for sharp, long exposures; lens filters for specialty shots; and lighting equipment for portraits and other carefully controlled shots. These optional components are discussed in Chapters 4 through 6 and Chapter 8.

FIGURE 5 Examples of a smartphone, compact camera, mirrorless camera, and digital SLR camera (clockwise from upper left)


The components of photography covered here comprise what you need to physically take a photograph, but we’ve yet to discuss the most important component: the artist. As you practice and better understand your equipment, you’ll learn how to make creative decisions about the images you create—from choosing a lens that accentuates your stylistic goals to selecting camera settings for exposing under challenging lighting. Eventually, you’ll be thinking more artistically than technically because technique will become second nature.

To make creative decisions with confidence, though, you need to understand not only which camera settings are often used depending on subject matter but also, more importantly, why those settings are so frequently recommended. This book is therefore for photographic enthusiasts who want to take the time to build that educational foundation by deepening their overall understanding of light and cameras. The book focuses on everything prior to image processing—from the various ways you could record a subject all the way to maximizing image quality during an exposure.


Photography is often considered simple to grasp but complex to master, so you can use this book either to absorb everything from start to finish or to explore specific sections and fill in the gaps in your current knowledge. You can start with the basics and go straight to shooting or, if you feel ready, skip to the more advanced topics right away. Regardless of your experience level, photography offers hobbyists virtually limitless potential to hone their craft.

Here’s an overview of what you’ll find in each chapter:

Chapter 1: Basic Concepts in Photography introduces exposure, camera metering, and depth of field.

Chapter 2: Digital Image Characteristics introduces bit depth, digital sensors, histograms, and image noise.

Chapter 3: Understanding Camera Lenses explains the key lens specifications plus how to use wide-angle and telephoto lenses in particular.

Chapter 4: Camera Types and Tripods covers compact, mirrorless, and DSLR camera types, plus how to choose the right tripod.

Chapter 5: Lens Filters shows when to use ultraviolet (UV), color, polarizing, neutral density (ND), and graduated neutral density (GND) filter types.

Chapter 6: Using Flash to Enhance Subject Illumination introduces common flash equipment and exposure settings.

Chapter 7: Working with Natural Light and Weather shows you how time of day and atmospheric conditions affect the qualities of light.

Chapter 8: Introduction to Portrait Lighting shows you how the size, position, and relative intensity of single and dual light sources affect portraits.

Chapter 9: Other Shooting Techniques discusses camera shake, autofocus, creative shutter speeds, and the rule of thirds.

Appendix: Cleaning Camera Sensors shares techniques for cleaning your camera sensor.

Concepts such as light, exposure, and lens choice should all become part of your visual intuition; only then will you be able to push your equipment and creativity to their full potential.

Let’s get started.


Basic Concepts in Photography

PHOTOGRAPHY BECOMES MOST ENJOYABLE when you get comfortable with your camera. Achieving that level of comfort is less about remembering a list of settings and more about building the right understanding of the core concepts in photography.

Just like people who know how to ride a bike can focus more on where they’re going than on turning the pedals or switching gears, when you become comfortable with your camera, you can focus more on capturing evocative imagery than on what settings are necessary to achieve those photos. In this chapter, we start that process by covering the key concepts and terminology in photography, which include the following topics:

Exposure Aperture, ISO speed, and shutter speed are the three core controls that manipulate exposure. We discuss their technical impact on light and imagery as well as their limitations and trade-offs.

Camera metering The engine that assesses light and exposure. We look at typical settings as well as some scenarios where camera metering comes in handy.

Depth of field An important characteristic that influences our perception of space. We discuss how we quantify depth of field and which settings affect it.

Each high-level topic applies equally to both digital and traditional film photography. Let’s begin by discussing exposure.


Exposure determines how light or dark an image will appear when it has been captured by your camera. Learning to control exposure is an essential part of developing your own intuition for photography.

Exposing a photograph is like collecting rain in a bucket. Although the rate of rainfall is uncontrollable, three factors remain under your control: the bucket’s width, how long you leave the bucket in the rain, and the quantity of rain you need to collect. You don’t want to collect too little (known as underexposure), but you also don’t want to collect too much (known as overexposure). The key is that there are many different combinations of width, time, and quantity that can collect the amount of rainfall you want. For example, to get the same quantity of water, you can get away with less time in the rain if you pick a bucket that’s really wide. Alternatively, for the same amount of time in the rain, a narrower bucket works fine if you can get by with less water.


Just as collecting rain in a bucket is controlled by the bucket’s width, the duration of its exposure to the rain, and the quantity of rain desired, collecting light for exposure is determined by three camera settings: shutter speed, aperture, and ISO speed. Together, these settings are known as the exposure triangle. Let’s take a closer look at how these control exposure:

Shutter speed Controls the duration of the exposure

Aperture Controls the area through which light can enter your camera

ISO speed Controls the sensitivity of your camera’s sensor to a given amount of light

FIGURE 1-1 The exposure triangle

You can use many combinations of these three settings to achieve the same exposure. The key, however, is deciding which trade-offs to make, since each setting also influences other image properties. For example, aperture affects depth of field (which we’ll discuss in “Understanding Depth of Field” on page 14), shutter speed affects motion blur, and ISO speed affects image noise. FIGURE 1-1 illustrates the settings that make up the exposure triangle and the different image properties affected by each setting.
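To see how different combinations can yield the same exposure, it helps to put a number on it. The exposure value (EV) formula below is standard photographic convention rather than anything defined in this chapter, so treat this as an illustrative sketch:

```python
import math

# Exposure value (EV) at the ISO 100 convention: settings with equal EV
# admit the same total exposure. Standard formula, not the book's notation.
def exposure_value(f_stop, shutter_s, iso=100):
    """EV = log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_stop**2 / shutter_s) - math.log2(iso / 100)

# Three of the "many combinations" that produce the same exposure:
print(exposure_value(2.0, 1/8))           # wide aperture, shorter exposure -> 5.0
print(exposure_value(4.0, 1/2))           # two stops narrower, two stops longer -> 5.0
print(exposure_value(4.0, 1/8, iso=400))  # same shutter, two stops more ISO -> 5.0
```

All three settings collect the same exposure; the trade-offs show up elsewhere, as depth of field, motion blur, or image noise.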

The next sections explain how these settings are quantified, how each of the three exposure controls affects the image, and how you can control these settings with exposure modes.


Let’s begin by exploring how shutter speed affects your image. A camera’s shutter determines when the sensor is open or closed to incoming light from the camera lens. The shutter speed, or exposure time, refers to how long this light is permitted to enter the camera; the two terms are interchangeable. A faster shutter speed means a shorter exposure time.


Shutter speed’s influence on exposure is the simplest of the three camera settings: it correlates 1:1 with the amount of light entering the camera. For example, when the exposure time doubles, the amount of light entering the camera doubles. It’s also the setting with the widest range of possibilities. TABLE 1-1 illustrates the range of shutter speed settings and provides examples of what each can achieve.

TABLE 1-1 Range of Shutter Speeds



1 to 30+ seconds: To take specialty night and low-light photos on a tripod
1/2 to 2 seconds: To add a silky look to flowing-water landscape photos, taken on a tripod for enhanced depth of field
1/30 to 1/2 second: To add motion blur to the background of carefully taken, stabilized handheld photos of moving subjects
1/250 to 1/50 second: To take typical handheld photos without substantial zoom
1/500 to 1/250 second: To freeze everyday sports/action in handheld photos of moving subjects with substantial zoom (telephoto lens)
1/8000 to 1/1000 second: To freeze extremely fast, up-close subject motion

Note that this range of shutter speeds spans a ratio of well over 100,000× between the shortest and longest exposures, enabling cameras with this capability to record a wide variety of subject motion.


Slow shutter speed is useful for blurring motion, as when capturing waterfalls or when experimenting with creative shots. FIGURE 1-2 uses a slow (1-second) shutter speed to blur the motion of the waterfall.

Most often, though, photographers use shutter speed to avoid motion blur. For example, a faster shutter speed can create sharper photos by reducing subject movement. FIGURE 1-3 is a picture taken with a faster (1/60-second) shutter speed. A fast shutter speed also helps minimize camera shake when taking handheld shots.

FIGURE 1-2 Slow shutter speed (blurs motion)

FIGURE 1-3 Fast shutter speed (freezes motion)

How do you know which shutter speed will give you a sharp handheld shot? With digital cameras, the best way to find out is to experiment and look at the results on your camera’s LCD (liquid crystal display) screen at full zoom. If a properly focused photo comes out blurred, you usually need to increase the shutter speed, keep your hands steadier, or use a camera tripod.


A camera’s aperture setting controls the width of the opening that lets light into your camera lens. We measure a camera’s aperture using an f-stop value, which can be counterintuitive because the area of the opening increases as the f-stop decreases. For example, when photographers say they’re “stopping down” or “opening up” their lens, they’re referring to increasing and decreasing the f-stop value, respectively. FIGURE 1-4 helps you visualize the area of the lens opening that corresponds to each f-stop value.

FIGURE 1-4 F-stop values and the corresponding aperture area


Every time the f-stop value halves, the light-collecting area quadruples. There’s a formula for this, but most photographers just memorize the f-stop numbers that correspond to each doubling or halving of light. TABLE 1-2 lists some aperture and shutter speed combinations that result in the same exposure.
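For the curious, the formula works out as follows: the aperture opening is approximately a circle of diameter equal to the focal length divided by the f-stop value N, so its area, and hence the light collected, scales as 1/N². A quick sketch (mine, not the book's):

```python
# The formula behind the rule: light-collecting area scales as 1 / N^2,
# so the light admitted by two f-stops compares as the squared ratio.
def light_ratio(f_stop_a, f_stop_b):
    """How many times more light f_stop_a admits than f_stop_b."""
    return (f_stop_b / f_stop_a) ** 2

print(light_ratio(2.0, 4.0))  # halving the f-stop quadruples the light: 4.0
print(light_ratio(2.8, 4.0))  # one standard stop: ~2x
```

This is also why the standard f-stop sequence multiplies by roughly √2 (about 1.4) at each step: each √2 increase in N halves the light.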

TABLE 1-2 Example of Aperture Settings and Shutter Speed Combinations

Aperture setting    Relative light    Example shutter speed
f/22                1×                1 second
f/16                2×                1/2 second
f/11                4×                1/4 second
f/8.0               8×                1/8 second
f/5.6               16×               1/15 second
f/4.0               32×               1/30 second
f/2.8               64×               1/60 second
f/2.0               128×              1/125 second
f/1.4               256×              1/250 second

NOTE: These sample shutter speeds are approximations of the Relative Light column based on typically available camera settings.

You can see that as the f-stop value decreases (allowing more light in), the shutter speed has to be faster to compensate for the amount of light passing through the lens. Shutter speed values don’t always come in increments of exactly double or half a shutter speed, but they’re usually close enough that the difference is negligible.

FIGURE 1-5 Using a wide-aperture, low f-stop value (f/2.0) for a shallow depth of field

FIGURE 1-6 Using a narrow-aperture, high f-stop value (f/16) for an expansive depth of field

The f-stop numbers in TABLE 1-2 are all standard options in any camera, although most cameras also allow finer adjustments in 1/3- and 1/2-stop increments, such as f/3.2 and f/6.3. The range of values may also vary from camera to camera or lens to lens. For example, a compact camera might have an available range of f/2.8 to f/8.0, whereas a digital SLR (single-lens reflex) camera might have a range of f/1.4 to f/32 with a portrait lens. A narrow aperture range usually isn’t a big problem, but a greater range gives you more creative flexibility.


A camera’s aperture setting affects the range of distances over which objects appear acceptably sharp, both in front of and behind the point where the camera is focused. This range of sharpness is commonly referred to as the depth of field, and it’s an important creative tool in portraiture for isolating a subject from its surroundings by making the subject look sharper than the backdrop. It can also maximize detail throughout, as with an expansive landscape vista.

Lower f-stop values create a shallower depth of field, whereas higher f-stop values create a more expansive depth of field. For example, with many cameras, f/2.8 and lower are common settings when a shallow depth of field is desired, whereas f/8.0 and higher are used when sharpness throughout is key.

FIGURE 1-5 shows an example of a picture taken with wide aperture settings to achieve a shallow depth of field. You’ll notice that the flower in the foreground, which the camera focuses on, is sharper than the rest of the objects. FIGURE 1-6 shows an example of the opposite effect.

In this case, a narrow aperture creates a wider depth of field to bring all objects into relatively sharp focus. You can also control depth of field using settings other than aperture. We’ll take a deeper look at other considerations affecting depth of field as well as how depth of field is defined and quantified in “Understanding Depth of Field” on page 14.


The ISO speed determines how sensitive the camera is to incoming light. Similar to shutter speed, it also correlates 1:1 with how much the exposure increases or decreases. However, unlike aperture and shutter speed, a lower ISO speed is almost always desirable since higher ISO speeds dramatically increase image noise, or fine-scale variations of color or brightness in the image that are not present in the actual scene. Image noise is also called film grain in traditional film photography. FIGURE 1-7 shows the relationship between image noise and ISO speed and what image noise looks like.

FIGURE 1-7 ISO speed and image noise

You can see that a low ISO speed results in less image noise, whereas a high ISO speed results in more image noise. You’d usually increase ISO from its base or default value only if you can’t otherwise obtain the desired aperture and shutter speed. For example, you might want to increase the ISO speed to achieve both a fast shutter speed with moving subjects and a more expansive depth of field, or to be able to take a sharp handheld shot in low light when your lens is already at its widest aperture setting.

Common ISO speeds include 100, 200, 400, and 800, although many cameras also permit lower or higher values. With compact cameras, an ISO speed in the range of 50–400 generally produces acceptably low image noise, whereas with digital SLR cameras, a range of 100–3200 (or even higher) is often acceptable.
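The trade-off described above is easy to work out in stops. The sketch below uses hypothetical numbers; each doubling of ISO buys one halving of the exposure time:

```python
import math

# A sketch (hypothetical numbers) of the ISO trade-off: how much you must
# raise ISO to reach a target shutter speed at a fixed aperture.
def iso_for_shutter(metered_shutter_s, target_shutter_s, base_iso=100):
    """ISO needed to shorten the exposure from metered_shutter_s to
    target_shutter_s; each doubling of ISO buys one stop of shutter speed."""
    stops = math.log2(metered_shutter_s / target_shutter_s)
    return base_iso * 2 ** stops

# Low light meters 1/30 s at ISO 100, but a moving subject needs 1/250 s:
print(iso_for_shutter(1/30, 1/250))  # ~833 -> nearest common setting: ISO 800
```

The price, as FIGURE 1-7 shows, is roughly three stops' worth of additional image noise.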

FIGURE 1-8 Typical camera exposure modes (including pre-set modes)

FIGURE 1-9 Using S or Tv mode to increase shutter speed


Most digital cameras offer the following standardized exposure modes: Auto, Program (P), Aperture Priority (A or Av), Shutter Priority (S or Tv), Manual (M), and Bulb (B). Av, Tv, and P modes are often called creative modes or auto-exposure (AE) modes. FIGURE 1-8 shows some exposure modes you would see on a typical camera.

Each mode influences how aperture, ISO speed, and shutter speed values are chosen for a given exposure. Some modes attempt to pick all three values for you, whereas others let you specify one setting and the camera picks the other two when possible. TABLE 1-3 describes how each mode determines exposure.

Auto mode doesn’t allow for much creative control because it doesn’t let you prioritize which camera settings are most important for achieving your artistic intent. For example, for an action shot of a kayaker, like FIGURE 1-9, you might want to use S or Tv mode because achieving a faster shutter speed is likely more important than the scene’s depth of field. Similarly, for a static landscape shot, you might want to use A or Av mode because achieving an expansive depth of field is likely more important than the exposure duration.

In addition, the camera may also have several pre-set modes. The most common pre-set modes include landscape, portrait, sports, and night modes. The symbols used for each mode vary slightly from camera to camera but will likely appear similar to those shown in TABLE 1-4, which gives descriptions of the most common pre-set modes you’ll find on a camera.

TABLE 1-3 Descriptions of Common Exposure Modes



Auto

The camera automatically selects all exposure settings.

Program (P)

The camera automatically selects aperture and shutter speed. You choose a corresponding ISO speed and exposure compensation. With some cameras, P can also act as a hybrid of the Av and Tv modes.

Aperture Priority (Av or A)

You specify the aperture and ISO speed. The camera’s metering (discussed in the next section) determines the corresponding shutter speed.

Shutter Priority (Tv or S)

You specify the shutter speed and ISO speed. The camera’s metering determines the corresponding aperture.

Manual (M)

You specify the aperture, ISO speed, and shutter speed—regardless of whether these values lead to a correct exposure.

Bulb (B)

You specify the aperture and ISO speed. The shutter speed is determined by a remote release switch or, depending on the camera, by double-pressing or holding the shutter button. Useful for exposures longer than 30 seconds.
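As a rough illustration of how the priority modes work, the toy model below plays the role of Aperture Priority: you fix the aperture and ISO, metering supplies a scene brightness (expressed here as an EV at the ISO 100 convention, an assumption of this sketch, not the book's notation), and the camera solves for the one remaining unknown:

```python
# A toy model of Aperture Priority: given a chosen aperture and ISO plus a
# metered scene EV (ISO 100 convention), solve for the shutter speed that
# satisfies EV = log2(N^2 / t) - log2(ISO / 100).
def av_mode_shutter(f_stop, iso, metered_ev100):
    """Shutter time in seconds for a correct exposure."""
    return f_stop**2 / (2 ** metered_ev100 * (iso / 100))

# Bright overcast scene metered at EV 13, shooting f/8 at ISO 100:
t = av_mode_shutter(8.0, 100, 13)
print(f"1/{round(1/t)} s")  # -> 1/128 s, i.e. roughly 1/125
```

Shutter Priority is the same equation solved for aperture instead, and Manual simply skips the solving step, whether or not the result is a correct exposure.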

TABLE 1-4 Pre-set Exposure Modes

Portrait

The camera tries to pick the lowest f-stop value possible for a given exposure. This ensures the shallowest possible depth of field.

Landscape

The camera tries to pick a high f-stop to ensure a deep depth of field. Compact cameras also often set their focus distance to distant objects or infinity.

Sports

The camera tries to achieve as fast a shutter speed as possible for a given exposure (ideally 1/250 second or faster). In addition to using a low f-stop, the camera may also achieve a faster shutter speed by increasing the ISO speed to more than would be acceptable in portrait mode.

Night

The camera permits shutter speeds longer than ordinarily allowed for handheld shots and increases the ISO speed to near its maximum available value. However, for some cameras, this setting means that a flash is used for the foreground and that a long shutter speed and high ISO speed are used to expose the background. Check your camera’s instruction manual for any unique characteristics.

Some of the modes can also control camera settings unrelated to exposure, although this varies from camera to camera. These additional settings include the autofocus points, metering mode, and autofocus modes, among others.

Keep in mind that most of the modes rely on the camera’s metering system to achieve proper exposure. The metering system is not foolproof, however. It’s a good idea to be aware of what might go awry and what you can do to compensate for such exposure errors. In the next section, we discuss camera metering in more detail.


Knowing how your camera meters light is critical for achieving consistent and accurate exposures. The term metering refers to what was traditionally performed by a separate light meter, a device used to determine the proper shutter speed and aperture by measuring the amount of light available. In-camera metering options include spot, evaluative or matrix, and center-weighted metering. Each has its advantages and disadvantages, depending on the type of subject and distribution of lighting. Next, we discuss the difference between incident and reflected light to see how in-camera light meters determine proper exposure.


Incident light is the amount of light hitting the subject, and it’s also a measurement that correlates directly with exposure. Reflected light refers to the amount of incident light that reflects back to the camera after hitting the subject and therefore only indirectly measures incident light. Whereas handheld light meters measure incident light, in-camera light meters can only measure reflected light. FIGURE 1-10 illustrates the difference between incident light and reflected light.

FIGURE 1-10 Incident vs. reflected light

If all objects reflected light in the same way, getting the right exposure would be simple. However, real-world subjects vary greatly in their reflectance. For this reason, in-camera metering is standardized based on the intensity of light reflected from an object that appears middle gray in tone. In fact, in-camera meters are designed for such middle-toned subjects. For example, if you fill the camera’s image frame with an object lighter or darker than middle gray, the camera’s metering will often incorrectly under- or overexpose the image, respectively. On the other hand, a handheld light meter results in the same exposure for any object given the same incident lighting because it measures incident light.

What constitutes middle gray? In the printing industry, it’s defined as the ink density that reflects 18 percent of incident light. FIGURE 1-11 shows approximations of 18 percent luminance using different colors.

FIGURE 1-11 Approximations of 18 percent luminance

Cameras adhere to a different standard than the printing industry. Although each camera defines middle gray slightly differently, it's generally a tone between 10 and 18 percent reflectance. Metering off a subject that reflects more or less light than this may cause your camera's metering algorithm to go awry, resulting in either under- or overexposure.


Gray cards are more often used for white balance than for exposure. For studio work, many photographers use a light meter because it’s far more accurate. A gray card has 10 to 18 percent reflectance (as opposed to 50 percent reflectance) to account for how our human visual system works. Because we perceive light intensity logarithmically as opposed to linearly, our visual system requires much less than half the reflectance for an object to be perceived as half as bright.
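To make the logarithmic claim concrete, the CIE lightness formula (a standard model of perceived brightness, used here purely as an illustration) maps 18 percent luminance to roughly the middle of the perceptual scale:

```python
def cie_lightness(y):
    """Approximate perceived lightness (0-100) from relative luminance y (0-1),
    using the CIE L* cube-root model of human brightness response."""
    return 116 * y ** (1 / 3) - 16 if y > 0.008856 else 903.3 * y

# An 18 percent reflectance card is perceived as roughly middle (L* near 50),
# even though it reflects far less than half the incident light.
print(round(cie_lightness(0.18), 1))  # 49.5
print(round(cie_lightness(0.50), 1))  # 76.1
```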


To accurately expose a greater range of subject lighting and reflectance combinations, most cameras have several built-in metering options. Each option works by assigning a relative weighting to different light regions within the image. Those regions with a higher weighting are considered more reliable and thus contribute more to the final exposure calculation. FIGURE 1-12 shows what different metering options might look like.

FIGURE 1-12 Partial and spot areas shown for 13.5 percent and 3.8 percent of the picture area, respectively, based on a Canon 1-series DSLR camera.

As you can see in FIGURE 1-12, the whitest regions have a higher weighting and contribute most to the exposure calculation, whereas black areas don’t. This means that subject matter placed within the whitest regions will appear closer to the intended exposure than subject matter placed within the darker regions. Each of these metering diagrams can also be off-center, depending on the metering options and autofocus point used.

More sophisticated metering algorithms like evaluative, zone, and matrix go beyond just using a regional map. These are usually the default when you set the camera to auto-exposure. These settings work by dividing the image into numerous subsections. Each subsection is then analyzed in terms of its relative location, light intensity, or color. The location of the autofocus point and orientation of the camera (portrait versus landscape) may also contribute to metering calculations.
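The regional-weighting idea can be sketched as a weighted average of luminance readings; this is a toy illustration, not any manufacturer's actual algorithm:

```python
def metered_luminance(regions):
    """Combine (luminance, weight) pairs into one exposure-driving value.
    Regions with higher weight contribute more to the result, as in
    partial or center-weighted metering."""
    total_weight = sum(w for _, w in regions)
    return sum(lum * w for lum, w in regions) / total_weight

# Hypothetical scene: bright sky, midtone subject, dark foreground.
# With a center-weighted map, the middle region dominates the reading.
regions = [(0.9, 0.2), (0.4, 0.6), (0.1, 0.2)]
print(round(metered_luminance(regions), 2))  # 0.44
```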


Partial and spot metering give the photographer far more control over exposure than the other settings, but these are also more difficult to use, at least initially, because the photographer has to be more careful about which portions of the scene are used for metering. Partial and spot metering work very similarly, but partial metering is based on a larger fraction of the frame than spot metering (as depicted in FIGURE 1-12), although the exact percentages vary based on camera brand and model. Partial and spot metering are often useful when there’s a relatively small object in your scene that you need perfectly exposed or know will provide the closest match to middle gray, as shown in FIGURE 1-13.

FIGURE 1-13 A situation where you might want to use off-center partial or spot metering

One of the most common applications of partial metering is in portraiture when the subject is backlit. Partially metering off the face can help avoid an exposure that makes the subject appear as an underexposed silhouette against the bright background. On the other hand, the shade of your subject’s skin may lead to unintended exposure if it’s too far from middle gray reflectance—although this is less likely with backlighting.

Spot metering is used less often because its metering area is very small and thus quite specific. This can be an advantage when you're unsure of your subject's reflectance and have a specially designed gray card or other standardized object to meter from.

Partial and spot metering are also useful for creative exposures or when the ambient lighting is unusual, like in FIGURE 1-14, which is an example of a photo taken with partial metering. In this case, the photographer meters off the directly lit stone below the sky opening to achieve a good balance between brightness in the sky and darkness on the rocks.

FIGURE 1-14 A photo taken using partial metering


With center-weighted metering, all regions of the frame contribute to the exposure calculation, but subject matter closer to the center of the frame contributes more than subject matter farther from it. This was once a very common default setting in cameras because it coped well with a bright sky above a darker landscape. Nowadays, evaluative and matrix metering allow more flexibility, and partial and spot metering provide more specificity.

On the other hand, center-weighted metering produces very predictable results, whereas matrix and evaluative metering modes have complicated algorithms whose results are harder to predict. For this reason, some still prefer center-weighted as the default metering mode.

Note that there is no one correct way to achieve an exposure, since that is both a technical and a creative decision. Metering is just a technical tool that gives the photographer more control and predictability over their creative intent with the image.


With any of the previously discussed metering modes, you can use a feature called exposure compensation (EC). When you activate EC, the metering calculation works normally, but the final exposure target gets compensated by the EC value. This allows for manual corrections if a metering mode is consistently under- or overexposing. Most cameras allow up to two stops of EC, where each stop provides either a doubling or halving of light compared to what the metering mode would have done without compensation. The default setting of zero means no compensation will be applied.

FIGURE 1-15 Example of a high reflectance scene requiring high positive exposure compensation

EC is ideal for correcting in-camera metering errors caused by the subject’s reflectivity. For example, subjects in the snow with high reflectivity always require around +1 exposure compensation, whereas dark gray (unreflective) subjects may require negative compensation. FIGURE 1-15 is an example of a highly reflective scene requiring high positive exposure compensation to avoid appearing too dark.

As you can see, exposure compensation is handy for countering the way an in-camera light meter underexposes a subject like a white owl in the snow, due to its high reflectivity.
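Because each stop doubles or halves the light, the effect of an EC setting on the metered exposure target reduces to a power of two. A minimal sketch:

```python
def apply_exposure_compensation(metered_exposure, ec_stops):
    """Scale the metered exposure target by 2**EC: +1 stop doubles the
    light, -1 stop halves it, and 0 leaves the metered value unchanged."""
    return metered_exposure * 2 ** ec_stops

print(apply_exposure_compensation(1.0, +1))  # 2.0 (twice the light)
print(apply_exposure_compensation(1.0, -1))  # 0.5 (half the light)
print(apply_exposure_compensation(1.0, 0))   # 1.0 (no compensation)
```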


This section covers the technical aspects of depth of field to give you a deeper understanding of how it’s measured and defined. Feel free to skim through this section if you’d rather focus on the concepts and techniques to control depth of field, which we’ll cover in “Controlling Depth of Field” on page 16.

Depth of field refers to the range of distance that appears acceptably sharp within an image. This distance varies depending on camera type, aperture, and focusing distance. Print size and viewing distance can also influence our perception of depth of field.

The depth of field doesn’t abruptly change from sharp to blurry but instead gradually transitions. In fact, when you focus your camera on a subject, everything immediately in front of or in back of the focusing distance begins to lose sharpness—even if this transition is not perceived by your eyes or by the resolution of the camera.


Because there’s no single critical point of transition, a more rigorous term called the circle of confusion is used to define the depth of field. The circle of confusion is defined by how much a point needs to be blurred so it’s no longer perceived as sharp. When the circle of confusion becomes blurry enough to be perceptible to our eyes, this region is said to be outside the depth of field and thus no longer acceptably sharp. FIGURE 1-16 illustrates a circle of confusion and its relationship to the depth of field.

FIGURE 1-16 Diagram illustrating the circle of confusion

In this diagram, the left side of the camera lens represents light from your subject, whereas the right side of the lens represents the image created by that light after it has passed through the lens and entered your camera. As you can see, the blue lines represent the light from subject matter that coincides with the focal plane, which is the distance at which the subject is in sharpest focus. The purple and green dots on either side of the focal plane represent the closest and farthest distances of acceptable sharpness.

As you can see in FIGURE 1-17, a point light source can have various degrees of blur when recorded by your camera, and the threshold by which such blur is no longer deemed acceptably sharp is what defines the circle of confusion. Note that the circle of confusion has been exaggerated for the sake of demonstration; in reality, this would occupy only a tiny fraction of the camera sensor’s area.

FIGURE 1-17 Circles of confusion

When does the circle of confusion become perceptible to our eyes? An acceptably sharp circle of confusion is commonly defined as one that would go unnoticed when enlarged to a standard 8 × 10 inch print and observed from a standard viewing distance of about 1 foot.

FIGURE 1-18 Lens depth of field markers

At this viewing distance and print size, camera manufacturers assume a circle of confusion is negligible if it’s no larger than 0.01 inches. In other words, anything blurred by more than 0.01 inches would appear blurry. Camera manufacturers use the 0.01 inch standard when providing lens depth-of-field markers. FIGURE 1-18 shows an example of depth-of-field markers on a 50 mm lens.

In reality, a person with 20/20 vision or better can distinguish features much smaller than 0.01 inches. In other words, the sharpness standard used by camera manufacturers is roughly three times more lenient than what someone with 20/20 vision can resolve! This means the circle of confusion has to be even smaller to achieve acceptable sharpness throughout the image.

A different maximum circle of confusion also applies for each print size and viewing distance combination. In the earlier example of blurred dots, the circle of confusion is actually smaller than the resolution of your screen (or printer) for the two dots on either side of the focal point, and so these are considered within the depth of field. Alternatively, the depth of field can be based on when the circle of confusion becomes larger than the size of your digital camera’s pixels.
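Translating the print-level threshold back to the sensor is a matter of dividing by the enlargement factor. The sketch below assumes a full-frame sensor 24 mm tall; actual manufacturer values are rounded conventions:

```python
def sensor_coc_mm(print_coc_in=0.01, print_height_in=8.0, sensor_height_mm=24.0):
    """Maximum circle of confusion on the sensor, given the acceptable blur
    at print size and the enlargement needed to reach that print."""
    sensor_height_in = sensor_height_mm / 25.4
    enlargement = print_height_in / sensor_height_in
    return print_coc_in * 25.4 / enlargement

# A full-frame sensor (24 mm tall) enlarged to an 8 x 10 inch print
# yields the familiar 0.03 mm full-frame circle-of-confusion standard.
print(round(sensor_coc_mm(), 3))  # 0.03
```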

Note that depth of field only determines when a subject is deemed acceptably sharp; it doesn’t describe what happens to regions once they become out of focus. These regions are also called bokeh (pronounced boh-keh) from the Japanese word meaning “blur.” Two images with identical depths of field may have significantly different bokeh, depending on the shape of the lens diaphragm. In fact, the circle of confusion isn’t actually a circle but only approximated as such when it’s very small. When it becomes large, most lenses render it as a polygon with five to eight sides.


Although print size and viewing distance influence how large the circle of confusion appears to our eyes, aperture and focusing distance are the two main factors that determine how big the circle of confusion will be on your camera’s sensor. Larger apertures (smaller f-stop number) and closer focusing distances produce a shallower depth of field. In FIGURE 1-19, all three photos have the same focusing distance but vary in aperture setting.

FIGURE 1-19 Images taken with a 200 mm lens at the same focus distance with varying apertures

As you can see from this example, f/2.8 creates a photo with the least depth of field since the background is the most blurred relative to the foreground, whereas f/5.6 and f/8.0 create a progressively sharper background. In all three photos, the focus was placed on the foreground statue.
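The aperture's effect can be checked numerically with the standard thin-lens depth-of-field approximation (illustrative values; the 0.03 mm circle of confusion is the common full-frame convention):

```python
def depth_of_field_mm(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Near and far limits of acceptable sharpness (thin-lens approximation).
    coc_mm is the maximum acceptable circle of confusion on the sensor."""
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm  # hyperfocal distance
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = subject_mm * (h - focal_mm) / (h - subject_mm)
    return near, far

# A 200 mm lens focused at 5 m: stopping down widens the sharp zone.
for n in (2.8, 5.6, 8.0):
    near, far = depth_of_field_mm(200, n, 5000)
    print(f"f/{n}: {(far - near) / 1000:.2f} m of depth of field")
```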


Your choice of lens focal length doesn’t influence depth of field, contrary to common belief. The only factors that have a substantial influence on depth of field are aperture, subject magnification, and the size of your camera’s image sensor. All of these topics will be discussed in more depth in subsequent chapters.

If sharpness throughout the image is the goal, you might be wondering why you can’t just always use the smallest aperture to achieve the best possible depth of field. Other than potentially requiring prohibitively long shutter speeds without a camera tripod, too small an aperture softens the image, even where you’re focusing, due to an effect called diffraction. Diffraction becomes a more limiting factor than depth of field as the aperture gets smaller. This is why pinhole cameras have limited resolution despite their extreme depth of field.
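The diffraction penalty can be estimated with the Airy disk formula, an approximation for green light shown here for illustration:

```python
def airy_disk_mm(f_number, wavelength_mm=550e-6):
    """Approximate diameter of the diffraction blur spot (Airy disk) for
    green light; it grows linearly as the aperture is stopped down."""
    return 2.44 * wavelength_mm * f_number

# At very small apertures the diffraction spot rivals the 0.03 mm
# circle-of-confusion standard, softening even the in-focus plane.
for n in (4, 11, 22):
    print(f"f/{n}: {airy_disk_mm(n):.4f} mm")
```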


In this chapter, you learned how aperture, ISO speed, and shutter speed affect exposure. You also learned about the standard camera exposure modes you can use to control exposure in your image. You learned about different metering options that give you even more control over exposure. Finally, you explored the various factors that affect depth of field in your image.

In the next chapter, you’ll learn about the unique characteristics of a digital image so you can better interpret these as a photographer.


Digital Image Characteristics

IN THIS CHAPTER, YOU’LL FAMILIARIZE yourself with the unique characteristics of digital imagery so you can make the most of your images. First, you’ll learn how the digital color palette is quantified and when it has a visual impact, which you’ll need to know in order to understand bit depth. Then you’ll explore how digital camera sensors convert light and color into discrete pixels.

The last two sections of this chapter cover how to interpret image histograms for more predictable exposures, as well as the different types of image noise and ways to minimize it.


Bit depth quantifies the number of unique colors in an image’s color palette in terms of the zeros and ones, or bits, we use to specify each color. This doesn’t mean that an image necessarily uses all of these colors, but it does mean that the palette can specify colors with a high level of precision.

In a grayscale image, for example, the bit depth quantifies how many unique shades of gray are available. In other words, a higher bit depth means that more colors or shades can be encoded because more combinations of zeros and ones are available to represent the intensity of each color. We use a grayscale example here because the way we perceive intensity in color images is much more complex.


Every color pixel in a digital image is created through some combination of the three primary colors: red, green, and blue. Each primary color is often referred to as a color channel and can have any range of intensity values specified by its bit depth. The bit depth for each primary color is called the bits per channel. The bits per pixel (bpp) refers to the sum of the bits in all three color channels and represents the total colors available at each pixel.

Confusion arises frequently regarding color images because it may be unclear whether a posted number refers to the bits per pixel or bits per channel. Therefore, using bpp to specify the unit of measurement helps distinguish these two terms.

For example, most color images you take with digital cameras have 8 bits per channel, which means each channel is specified using eight 0s and 1s. This allows for 2^8 (or 256) different combinations, which translate to 256 different intensity values for each primary color. When all three primary colors are combined at each pixel, this allows for as many as 2^(8×3) (or 16,777,216) different colors, or true color. Combining red, green, and blue at each pixel in this way is referred to as 24 bits per pixel because each pixel is composed of three 8-bit color channels. We can generalize the number of colors available for any x-bit image with the expression 2^x, where x refers to the bits per pixel, or 2^(3x), where x refers to the bits per channel.
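The bit-depth arithmetic is easy to verify directly:

```python
def colors_for_bits_per_channel(bits):
    """Total colors available from three channels of the given bit depth:
    2**(3 * bits). For 8 bits per channel this is 24 bpp 'true color'."""
    return 2 ** (3 * bits)

print(2 ** 8)                           # 256 intensity levels per 8-bit channel
print(colors_for_bits_per_channel(8))   # 16777216 colors at 24 bpp
```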

TABLE 2-1 illustrates different image types in terms of bit depth, total colors available, and common names. Many of the lower bit depths were only important with early computers; nowadays, most images are 24 bpp or higher.

TABLE 2-1 Comparing the Bit Depth of Different Image Types

Bits per pixel | Colors available | Common name(s)
1 | 2 | Monochrome
2 | 4 | CGA
4 | 16 | EGA
8 | 256 | VGA
16 | 65,536 | XGA, high color
24 | 16,777,216 | SVGA, true color
32 | 16,777,216 + transparency |
48 | 281 trillion |


Note how FIGURE 2-1 changes when the bit depth is reduced. The difference between 24 bpp and 16 bpp may look subtle, but it will be clearly visible on a monitor set to true color or higher (24 or 32 bpp).


Although the concept of bit depth may at first seem needlessly technical, understanding when to use high- versus low-bit depth images has important practical applications. Key tips include:

The human eye can discern only about 10 million different colors, so saving an image at more than 24 bpp is excessive if the intended purpose is viewing only. On the other hand, images with more than 24 bpp are still quite useful because they hold up better under post-processing.

You can get undesirable color gradations in images with fewer than 8 bits per color channel, as shown in FIGURE 2-2. This effect is commonly referred to as posterization.

The available bit depth settings depend on the file type. Standard JPEG and TIFF files can use only 8 and 16 bits per channel, respectively.
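Posterization is simple to reproduce: requantize each channel to fewer levels and map the result back to the 0-255 range. A sketch:

```python
def quantize_channel(value, bits):
    """Reduce an 8-bit channel value (0-255) to the given bit depth and map
    it back to 0-255; fewer levels means visible banding (posterization)."""
    levels = 2 ** bits
    step = 255 / (levels - 1)
    return round(round(value / step) * step)

# A smooth 8-bit ramp collapses to just 8 bands at 3 bits per channel.
print(sorted({quantize_channel(v, 3) for v in range(256)}))
```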

FIGURE 2-1 Visual depiction of 8 bpp, 16 bpp, and 24 bpp using rainbow color gradients

FIGURE 2-2 A limited palette of 256 colors results in a banded appearance called posterization.

FIGURE 2-3 A digital sensor with millions of imperceptible color filters


A digital camera uses a sensor array of millions of tiny pixels (see FIGURE 2-3) to produce the final image. When you press your camera’s shutter button and the exposure begins, each of these pixels has a cavity called a photosite that is uncovered to collect and store photons.

After the exposure finishes, the camera closes each of these photosites and then tries to assess how many photons fell into each. The relative quantity of photons in each cavity is then sorted into various intensity levels, whose precision is determined by bit depth (levels 0 through 255 for an 8-bit image). FIGURE 2-4 illustrates how these cavities collect photons.

FIGURE 2-4 Using cavities to collect photons

The grid on the left represents the array of light-gathering photosites on your sensor, whereas the reservoirs shown on the right depict a zoomed-in cross section of those same photosites. In FIGURE 2-4, each cavity is unable to distinguish how much of each color has fallen in, so the grid diagram illustrated here would only be able to create grayscale images.


To capture color images, each cavity has to have a filter placed over it that allows penetration of only a particular color of light. Virtually all current digital cameras can capture only one of the three primary colors in each cavity, so they discard roughly two-thirds of the incoming light. As a result, the camera has to approximate the other two primary colors to have information about all three colors at every pixel. The most common type of color filter array, called a Bayer array, is shown in FIGURE 2-5.

FIGURE 2-5 A Bayer array

As you can see, a Bayer array consists of alternating rows of red-green and green-blue filters (as shown in FIGURES 2-5 and 2-6). Notice that the Bayer array contains twice as many green as red or blue sensors. Each primary color doesn't receive an equal fraction of the total area because the human eye is more sensitive to green light than to red or blue light. Creating redundancy with green photosites in this way produces an image that appears less noisy and has finer detail than if each color were treated equally. This also explains why noise in the green channel is much less than for the other two primary colors, as you'll learn later in the chapter in the discussion of image noise.

FIGURE 2-6 Full color versus Bayer array representations of an image

Not all digital cameras use a Bayer array. For example, the Foveon sensor is one example of a sensor type that captures all three colors at each pixel location. Other cameras may capture four colors in a similar array: red, green, blue, and emerald green. But a Bayer array remains by far the most common setup in digital camera sensors.


Bayer demosaicing is the process of translating a Bayer array of primary colors into a final image that contains full-color information at each pixel. How is this possible when the camera is unable to directly measure full color? One way of understanding this is to instead think of each 2×2 array of red, green, and blue as a single full-color cavity, as shown in FIGURE 2-7.

Although this 2×2 approach is sufficient for simple demosaicing, most cameras take additional steps to extract even more image detail. If the camera treated all the colors in each 2×2 array as having landed in the same place, then it would only be able to achieve half the resolution in both the horizontal and vertical directions.

On the other hand, if a camera computes the color using several overlapping 2×2 arrays, then it can achieve a higher resolution than would be possible with a single set of 2×2 arrays. FIGURE 2-8 shows how the camera combines overlapping 2×2 arrays to extract more image information.

FIGURE 2-7 Bayer demosaicing using 2×2 arrays

FIGURE 2-8 Combining overlapping 2×2 arrays to get more image information

Note that we do not calculate image information at the very edges of the array because we assume the image continues in each direction. If these were actually the edges of the cavity array, then demosaicing calculations here would be less accurate, because there are no longer pixels on all sides. This effect is typically negligible, because we can easily crop out information at the very edges of an image.

Other demosaicing algorithms exist that can extract slightly more resolution, produce images that are less noisy, or adapt to best approximate the image at each location.
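A minimal sketch of the simple 2×2 approach described above (real cameras use overlapping windows and adaptive algorithms for higher resolution):

```python
def demosaic_2x2(mosaic):
    """Naive Bayer demosaicing: treat each RGGB 2x2 block as one full-color
    pixel (half resolution in each direction). mosaic is a 2D list of raw
    values laid out as alternating R,G / G,B rows."""
    out = []
    for y in range(0, len(mosaic) - 1, 2):
        row = []
        for x in range(0, len(mosaic[0]) - 1, 2):
            r = mosaic[y][x]
            g = (mosaic[y][x + 1] + mosaic[y + 1][x]) / 2  # average the two greens
            b = mosaic[y + 1][x + 1]
            row.append((r, g, b))
        out.append(row)
    return out

# A uniform gray patch: every reconstructed pixel agrees on all channels.
raw = [[100, 100, 100, 100],
       [100, 100, 100, 100]]
print(demosaic_2x2(raw))  # [[(100, 100.0, 100), (100, 100.0, 100)]]
```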


Images with pixel-scale detail can sometimes trick the demosaicing algorithm, producing an unrealistic-looking result. We refer to this as a digital artifact, which is any undesired or unintended alteration in data introduced in a digital process. The most common artifact in digital photography is moiré (pronounced “more-ay”), which appears as repeating patterns, color artifacts, or pixels arranged in an unrealistic maze-like pattern, as shown in FIGURES 2-9 and 2-10.

FIGURE 2-9 Image with pixel-scale details captured at 100 percent

FIGURE 2-10 Captured at 65 percent of the size of FIGURE 2-9, resulting in more moiré

You can see moiré in all four squares in FIGURE 2-10 and also in the third square of FIGURE 2-9, where it is more subtle. Both maze-like and color artifacts can be seen in the third square of the downsized version. These artifacts depend on both the type of texture you’re trying to capture and the software you’re using to develop the digital camera’s files.

However, even if you use a theoretically perfect sensor that could capture and distinguish all colors at each photosite, moiré and other artifacts could still appear. This is an unavoidable consequence of any system that samples an otherwise continuous signal at discrete intervals or locations. For this reason, virtually every photographic digital sensor incorporates something called an optical low-pass filter (OLPF) or an anti-aliasing (AA) filter. This is typically a thin layer directly in front of the sensor, and it works by effectively blurring any potentially problematic details that are finer than the resolution of the sensor. However, an effective OLPF also marginally softens details coarser than the resolution of the sensor, thus slightly reducing the camera’s maximum resolving power. For this reason, cameras that are designed for astronomical or landscape photography may exclude the OLPF because for these applications, the slightly higher resolution is often deemed more important than a reduction of aliasing.


You might wonder why FIGURES 2-4 and 2-5 do not show the cavities placed directly next to each other. Real-world camera sensors have photosites that cover only part of the sensor's surface, leaving room for other electronics. To compensate, digital cameras place microlenses above each photosite to enhance its light-gathering ability. These lenses are analogous to funnels that direct photons into the photosite, as shown in FIGURE 2-11.

Without microlenses, the photons would go unused, as shown in FIGURE 2-12.

FIGURE 2-11 Microlenses direct photons into the photosites.

FIGURE 2-12 Without microlenses, some photons go unused.

Well-designed microlenses can improve the photon signal at each photosite and subsequently create images that have less noise for the same exposure time. Camera manufacturers have been able to use improvements in microlens design to reduce or maintain noise in the latest high-resolution cameras, despite the fact that these cameras have smaller photosites that squeeze more megapixels into the same sensor area.


The image histogram is probably the single most important concept you'll need to understand when working with pictures from a digital camera. A histogram can tell you whether your image has been properly exposed, whether the lighting is harsh or flat, and what adjustments will work best. It will improve your skills not only on the computer during post-processing but also as a photographer.

Recall that each pixel in an image has a color produced by some combination of the primary colors red, green, and blue (RGB). Each of these colors can have a brightness value ranging from 0 to 255 for a digital image with a bit depth of 8 bits. An RGB histogram results when the computer scans through each of these RGB brightness values and counts how many are at each level, from 0 through 255. Although other types of histograms exist, all have the same basic layout as the example shown in FIGURE 2-13.

In this histogram, the horizontal axis represents an increasing tonal level from 0 to 255, whereas the vertical axis represents the relative count of pixels at each of those tonal levels. Shadows, midtones, and highlights represent tones in the darkest, middle, and brightest regions of the image, respectively.

FIGURE 2-13 An example of a histogram
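The tally a histogram displays can be sketched as a simple counting loop over every channel value:

```python
def rgb_histogram(pixels):
    """Count how many channel values fall at each level 0-255 across all
    pixels; this tally is what an RGB histogram displays."""
    counts = [0] * 256
    for r, g, b in pixels:
        counts[r] += 1
        counts[g] += 1
        counts[b] += 1
    return counts

hist = rgb_histogram([(0, 128, 255), (128, 128, 128)])
print(hist[0], hist[128], hist[255])  # 1 4 1
```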


The tonal range is the region where most of the brightness values are present. Tonal range can vary drastically from image to image, so developing an intuition for how numbers map to actual brightness values is often critical—both before and after the photo has been taken. Note that there is not a single ideal histogram that all images should mimic. Histograms merely represent the tonal range in the scene and what the photographer wishes to convey.

For example, the staircase image in FIGURE 2-14 contains a broad tonal range with markers to illustrate which regions in the image map to brightness levels on the histogram.

Highlights are within the window in the upper center, midtones are on the steps being hit by light, and shadows are toward the end of the staircase and where steps are not directly illuminated. Due to the relatively high fraction of shadows in the image, the histogram is higher toward the left than the right.

FIGURE 2-14 An image with broad tonal range

FIGURE 2-15 Example of a standard histogram composed primarily of midtones

But lighting is often not as varied as with FIGURE 2-14. Conditions of ordinary and even lighting, when combined with a properly exposed subject, usually produce a histogram that peaks in the center, gradually tapering off into the shadows and highlights, as in FIGURE 2-15.

With the exception of the direct sunlight reflecting off the top of the building and some windows, the boat scene is quite evenly lit. Most cameras will have no trouble automatically reproducing an image that has a histogram similar to the one shown here.


Although most cameras produce midtone-centric histograms when in automatic exposure mode, the distribution of brightness levels within a histogram also depends on the tonal range of the subject matter. Images where most of the tones occur in the shadows are called low key, whereas images where most of the tones are in the highlights are called high key. FIGURES 2-16 and 2-17 show examples of high-key and low-key images, respectively.

Before you take a photo, it’s useful to assess whether your subject matter qualifies as high or low key. Recall that because cameras measure reflected light, not incident light, they can only estimate subject illumination. These estimates frequently result in an image with average brightness whose histogram primarily features midtones.

Although this is usually acceptable, it isn’t always ideal. In fact, high- and low-key scenes frequently require the photographer to manually adjust the exposure relative to what the camera would do automatically. A good rule of thumb is to manually adjust the exposure whenever you want the average brightness in your image to appear brighter or darker than the midtones.

FIGURE 2-16 High-key histogram of an image with mostly highlights

FIGURE 2-17 Low-key histogram of an image with mostly shadow tones

FIGURE 2-18 Underexposed despite a central histogram

FIGURE 2-19 Overexposed despite a central histogram

In general, a camera will have trouble with auto-exposure whenever you want the average brightness in an image to appear brighter or darker than a central histogram. The dog and gate images shown in FIGURES 2-18 and 2-19, respectively, are common sources of auto-exposure error. Note that in both cases of mistaken exposure, the histogram peak is pulled toward the midtones.

As you can see here, the camera gets tricked into creating a central histogram, which renders the average brightness of an image in the midtones, even though the content of the image is primarily composed of brighter highlight tones. This creates an image that is muted and gray instead of bright and white, as it would appear in person.

Most digital cameras are better at reproducing low-key scenes accurately because they try to prevent any region from becoming so bright that it turns into solid white, regardless of how dark the rest of the image might become as a result. As long as your low-key image has a few bright highlights, the camera is less likely to be tricked into overexposing the image, as you can see in FIGURE 2-19. High-key scenes, on the other hand, often produce images that are significantly underexposed because the camera is still trying to avoid clipped highlights but has no reference for what should appear black.

Fortunately, underexposure is usually more forgiving than overexposure. For example, you can’t recover detail from a region that is so overexposed it becomes solid white. When this occurs, the overly exposed highlights are said to be clipped or blown. FIGURE 2-20 shows an example contrasting clipped highlights with unclipped highlights.

FIGURE 2-20 Clipped (left) versus unclipped detail (right)

As you can see, the clipped highlights on the floor in the left image lose detail from overexposure, whereas the unclipped highlights in the right image preserve more detail.

You can use the histogram to figure out whether clipping has occurred. For example, you’ll know that clipping has occurred if the highlights are pushed to the edge of the chart, as shown in FIGURE 2-21.

FIGURE 2-21 Substantially clipped highlights showing overexposure

Some clipping is usually acceptable in regions such as specular reflections on water or metal, when the sun is included in the frame, or when other bright sources of light are present. This is because our irises don't adjust to such concentrated regions of brightness; we don't expect to see much detail there in real life, so we don't miss it in the image. Clipping is less acceptable in broader bright regions, where our eyes can adjust to the level of brightness and perceive more detail.

Ultimately, the amount of clipping present is up to the photographer and what they wish to convey in the image.


A histogram can also describe the amount of contrast, which is the difference in brightness between light and dark areas in a scene. Both subject matter and lighting conditions can affect the level of contrast in your image. For example, photos taken in fog will have low contrast, whereas those taken under strong daylight will have higher contrast. Broad histograms reflect a scene with significant contrast (see FIGURE 2-22), whereas narrow histograms reflect less contrast, and such images may appear flat or dull (see FIGURE 2-23). Contrast can have a significant visual impact on an image by emphasizing texture.

FIGURE 2-22 Wider histogram (higher contrast)

FIGURE 2-23 Narrower histogram (lower contrast)

The higher-contrast image of the water has deeper shadows and more pronounced highlights, thus creating a texture that pops out at the viewer. FIGURE 2-24 shows another high-contrast image.

FIGURE 2-24 Example of a scene with very high contrast

Contrast can also vary for different regions within the same image depending on both subject matter and lighting. For example, we can partition the earlier image of a boat into three separate regions, each with its own distinct histogram, as shown in FIGURE 2-25.

FIGURE 2-25 Histograms showing varying contrast for each region of the image

The upper region contains the most contrast of all three because the image is created from light that hasn’t been reflected off the surface of water. This produces deeper shadows underneath the boat and its ledges and stronger highlights in the upward-facing and directly exposed areas. The result is a very wide histogram.

The middle and bottom regions are produced entirely from diffuse, reflected light and thus have lower contrast, similar to what you would get when taking photographs in the fog. The bottom region has more contrast than the middle despite the smooth and monotone blue sky because it contains a combination of shade and more intense sunlight. Conditions in the bottom region create more pronounced highlights but still lack the deep shadows of the top region. The sum of the histograms in all three regions creates the overall histogram shown previously in FIGURE 2-15.


Image noise is the digital equivalent of film grain that occurs with analog cameras. You can think of it as the subtle background hiss you may hear from your audio system at full volume. In digital images, noise is most apparent as random speckles on an otherwise smooth surface, and it can significantly degrade image quality.

However, you can use noise to impart an old-fashioned, grainy look that is reminiscent of early films, and you can also use it to improve perceived sharpness. Noise level changes depending on the sensitivity setting in the camera, the length of the exposure, the temperature, and even the camera model.


Some degree of noise is always present in any electronic device that transmits or receives a signal. With traditional televisions, this signal is broadcast and received at the antenna; with digital cameras, the signal is the light that hits the camera sensor.

Although noise is unavoidable, it can appear so small relative to the signal that it becomes effectively nonexistent. The signal-to-noise ratio (SNR) is therefore a useful and universal way of comparing the relative amounts of signal and noise for any electronic system. High and low SNR examples are illustrated in FIGURES 2-26 and 2-27, respectively.

FIGURE 2-26 High SNR example, where the camera produces a picture of the word SIGNAL against an otherwise smooth background

Even though FIGURE 2-26 is still quite noisy, the SNR is high enough to clearly distinguish the word SIGNAL from the background noise. FIGURE 2-27, on the other hand, has barely discernible letters because of its lower SNR.

FIGURE 2-27 Low SNR example, where the camera barely has enough SNR to distinguish SIGNAL against the background noise


The ISO speed is perhaps the most important camera setting influencing the SNR of your image. Recall that a camera’s ISO speed is a standard we use to describe its absolute sensitivity to light. ISO settings are usually listed as successive doublings, such as ISO 50, ISO 100, and ISO 200, where higher numbers represent greater sensitivity. You learned in the previous chapter that higher ISO speed increases image noise.

The ratio of two ISO numbers represents their relative sensitivity, meaning a photo at ISO 200 will take half as long to reach the same level of exposure as one taken at ISO 100 (all other factors being equal). ISO speed is the same concept and has the same units as ASA speed in film photography, where some film stocks are formulated with higher light sensitivity than others. You can amplify the image signal in the camera by using higher ISO speeds, resulting in progressively more noise.
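To get a feel for the arithmetic, here's a minimal Python sketch of the ISO relationship just described (the function name is purely illustrative):

```python
def exposure_time(base_time_s, base_iso, new_iso):
    """Estimate the exposure time needed at new_iso to match an
    exposure of base_time_s seconds at base_iso, with aperture and
    scene brightness held constant."""
    return base_time_s * (base_iso / new_iso)

# Doubling the ISO halves the required exposure time:
# a 1/50 s exposure at ISO 100 needs only 1/100 s at ISO 200.
print(exposure_time(1 / 50, 100, 200))  # 0.01
```

Only the ratio of the two ISO numbers matters here, which is why ISO settings are listed as successive doublings.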


Digital cameras produce three common types of noise: random noise, fixed-pattern noise, and banding noise. FIGURE 2-28 shows a pronounced, isolated example of each type against an ordinarily smooth gray background.

FIGURE 2-28 Comparison of the three main types of image noise in isolation against an otherwise smooth gray background

Random noise results primarily from photon arrival statistics and thermal noise. There will always be some random noise, and this is most influenced by ISO speed. The pattern of random noise changes even if the exposure settings are identical. FIGURE 2-29 shows an image that has substantial random noise in the darkest regions because it was captured at a high ISO speed.

Fixed-pattern noise includes what are called "hot," "stuck," or "dim" pixels. It is exacerbated by long exposures and high temperatures, and it is unique in that its distribution stays almost the same across different images taken under the same conditions (temperature, length of exposure, and ISO speed).

Banding noise is highly dependent on the camera and is introduced by camera electronics when reading data from the digital sensor. Banding noise is most visible at high ISO speeds and in the shadows, or when an image has been excessively brightened.

Although fixed-pattern noise appears more objectionable in FIGURE 2-28, it is usually easier to remove because of its pattern. For example, if a camera’s internal electronics know the pattern, this can be used to identify and subtract the noise to reveal the true image. Fixed-pattern noise is therefore much less prevalent than random noise in the latest generation of digital cameras; however, if even the slightest amount remains, it is still more visually distracting than random noise.

The less objectionable random noise is usually much more difficult to remove without degrading the image. Noise-reduction software has a difficult time discerning random noise from fine texture patterns, so when you remove the random noise, you often end up adversely affecting these textures as well.

FIGURE 2-29 Sample image with visible noise and a wide range of tonal levels


The noise level in your images not only changes depending on the exposure setting and camera model but can also vary within an individual image, similar to the way contrast can vary for different regions within the same image. With digital cameras, darker regions contain more noise than brighter regions, but the opposite is true with film.

FIGURE 2-30 shows how noise becomes less pronounced as the tones become brighter (the original image used to create the patches is shown in FIGURE 2-31).

FIGURE 2-30 Noise is less visible in brighter tones

FIGURE 2-31 The original image used to create the four tonal patches in Figure 2-30

Brighter regions have a stronger signal because they receive more light, resulting in a higher overall SNR. This means that images that are underexposed will have more visible noise, even if you brighten them afterward. Similarly, overexposed images have less noise and can actually be advantageous, assuming that you can darken them later and that no highlight texture has become clipped to solid white.
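This relationship between brightness and noise can be sketched with a little arithmetic. Assuming the dominant noise source is photon shot noise, which follows Poisson statistics (an assumption, since real sensors also add read noise and other sources), the SNR grows with the square root of the photon count:

```python
import math

def shot_noise_snr(photon_count):
    """For Poisson-distributed photon arrivals, the noise (standard
    deviation) equals sqrt(N), so SNR = N / sqrt(N) = sqrt(N)."""
    return photon_count / math.sqrt(photon_count)

# A bright region collecting 10,000 photons vs. a shadow collecting 100:
print(shot_noise_snr(10_000))  # 100.0
print(shot_noise_snr(100))     # 10.0 -> the shadow is visibly noisier
```

This is why brightening an underexposed shadow afterward also brightens its already-larger relative noise.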


Noise fluctuations can be separated into two components: color and luminance. Color noise, also called chroma noise, usually appears more unnatural and can render images unusable if not kept under control. Luminance noise, or luma noise, is usually the more tolerable component of noise. FIGURE 2-32 shows what chroma and luma noise look like on what was originally a neutral gray patch.

FIGURE 2-32 Chroma and luma noise

The relative amounts of chroma and luma noise can vary significantly depending on the camera model. You can use noise-reduction software to selectively reduce either type of noise, but complete elimination of luminance noise can cause unnatural or plastic-looking images.

Noise is typically quantified by the intensity of its fluctuations, where lower intensity means less noise, but its spatial frequency is also important. The term fine-grained noise was used frequently with film to describe noise with fluctuations occurring over short distances, resulting in a high spatial frequency. These two properties of noise often go hand in hand; an image with more intense noise fluctuations will often also have more noise at lower frequencies (which appears in larger patches).

Let’s take a look at FIGURE 2-33 to see why it’s important to keep spatial frequency in mind when assessing noise level.

FIGURE 2-33 Similar intensities, but one seems more noisy than the other

The patches in this example have different spatial frequencies, but the noise fluctuates with a very similar intensity. If the “low versus high frequency” noise patches were compared based solely on the intensity of their fluctuations (as you’ll see in most camera reviews), then the patches would be measured as having similar noise. However, this could be misleading because the patch on the right actually appears to be much less noisy.

The intensity of noise fluctuations still remains important, though. The example in FIGURE 2-34 shows two patches that have different intensities but the same spatial frequency.

FIGURE 2-34 Different intensities but same spatial frequency

Note that the patch on the left appears much smoother than the patch on the right because low-magnitude noise results in a smoother texture. On the other hand, high-magnitude noise can overpower fine textures, such as fabric and foliage, and can be more difficult to remove without destroying detail.


Now let’s experiment with actual cameras so you can get a feel for how much noise is produced at a given ISO setting. The examples in FIGURE 2-35 show the noise characteristics for three different cameras against an otherwise smooth gray patch.

FIGURE 2-35 Noise levels shown using best JPEG quality, daylight white balance, and default sharpening

You can see how increasing the ISO speed always produces higher noise for a given camera, but that the amount of noise varies across cameras. The greater the area of a pixel in the camera sensor, the more light-gathering ability it has, thus producing a stronger signal. As a result, cameras with physically larger pixels generally appear less noisy because the signal is larger relative to the noise. This is why cameras with more megapixels packed into the same-sized camera sensor don’t necessarily produce a better-looking image.

On the other hand, larger pixels alone don’t necessarily lead to lower noise. For example, even though the older entry-level camera has much larger pixels than the newer entry-level camera, it has visibly more noise, especially at ISO 400. This is because the older entry-level camera has higher internal or “readout” noise levels caused by less-sophisticated electronics.

Also note that noise is not unique to digital photography, and it doesn't always look the same. Older devices, such as CRT televisions, often suffered from noise caused by a poor antenna signal, as shown in FIGURE 2-36.

FIGURE 2-36 Example of how noise could appear in a CRT television image


In this chapter, you learned about several unique characteristics of digital images: bit depth, sensors, image histograms, and image noise. As you’ve seen, understanding how the camera processes light into a digital image lets you evaluate the quality of the image. It also lets you know what to adjust depending on what kind of noise is present. You also learned how to take advantage of certain types of image noise to achieve a particular effect.

In the next chapter, you’ll build on your knowledge of exposure from Chapter 1 and learn how to use lenses to control the appearance of your images.


Understanding Camera Lenses

NOW THAT YOU UNDERSTAND HOW exposure and digital data work, the next most important thing is choosing the appropriate lens to control how the image appears. We’ll discuss camera lenses first because they’re the camera equipment you need for each and every shot, regardless of style. They also have a widespread influence on both the technical and creative aspects of photography.

In this chapter, you’ll learn how light gets translated into an image. You’ll start by learning the different components of a typical camera lens to understand how focal length, aperture, and lens type affect imagery. You’ll also learn the trade-offs with zoom lenses versus prime or fixed focal length lenses. Then you’ll learn how to use wide-angle lenses, which are an important tool for capturing expansive vistas and exaggerating relative subject size. Finally, you’ll learn about telephoto lenses, which you can use to magnify distant subjects and layer a composition.


Understanding camera lenses gives you more creative control in your digital photography. As you'll soon learn, choosing the right lens for the task is a complex trade-off between cost, size, weight, lens speed, and image quality. Let's begin with an overview of the concepts you'll need: how camera lenses affect image quality, focal length, perspective, prime versus zoom lenses, and f-numbers.


Unless you’re dealing with a very simple camera, your camera lenses are actually composed of several lens elements. Each of these elements directs the path of light rays to re-create the image as accurately as possible on the digital sensor. The goal is to minimize aberrations while still utilizing the fewest and least expensive elements. FIGURE 3-1 shows how the elements that make up a typical camera lens focus light onto the digital sensor.

FIGURE 3-1 Lens elements

As you can see here, the lens elements successfully focus light onto a single point. But when points in the scene don’t translate back onto single points in the image after passing through the lens, optical aberrations occur, resulting in image blurring, reduced contrast, or misalignment of colors (or chromatic aberration). Lenses may also suffer from distortion or vignetting (when image brightness decreases radially and unevenly). Each of the image pairings in FIGURE 3-2 illustrates effects on image quality in extreme cases.

All of these aberrations are present to some degree in any lens. In the rest of this chapter, when we say that one lens has lower optical quality than another, we mean that it suffers from some combination of the artifacts shown in FIGURE 3-2. Some of these lens artifacts may be less objectionable than others, depending on the subject matter.

FIGURE 3-2 Examples of aberrations


Because the focal length of a lens determines its angle of view, or the angle between the edges of your entire field of view, it also determines how much the subject will be magnified for a given photographic position. For example, wide-angle lenses have short focal lengths, whereas telephoto lenses have longer corresponding focal lengths. FIGURE 3-3 shows how focal length affects how wide or narrow the angle of view is.

FIGURE 3-3 Short and long focal lengths
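If you're curious about the underlying geometry, the angle of view of a rectilinear lens can be approximated from the focal length and the sensor width. The following Python sketch uses the standard thin-lens formula and assumes a full-frame sensor 36 mm wide:

```python
import math

def angle_of_view(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal angle of view in degrees for a rectilinear lens:
    2 * arctan(sensor width / (2 * focal length))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

print(round(angle_of_view(24), 1))   # 73.7 degrees (wide angle)
print(round(angle_of_view(200), 1))  # 10.3 degrees (telephoto)
```

Note how doubling the focal length roughly halves the angle of view, and therefore roughly doubles the magnification of the subject.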

You’ll hear people say that focal length also determines the perspective of an image, which is how your subjects appear in relation to each other when viewed from a particular vantage point. But strictly speaking, perspective only changes with your location relative to the subject. For example, if you try to fill the frame with the same subjects using both a wide-angle lens and a telephoto lens, perspective does indeed change, but only because you are forced to move closer to or farther from the subject to achieve the same framing. FIGURE 3-4 demonstrates how this is true.

You can see that these two shots frame the same subjects but are taken using different lenses. To achieve the same framing with both shots, you have to step back farther when using the longer focal length than when using the shorter one. In these scenarios, the wide-angle lens exaggerates or stretches perspective, whereas the telephoto lens compresses or flattens perspective, making objects appear closer together than they actually are.

Perspective can be a powerful compositional tool in photography, and when you can photograph from any position, you can control perspective by choosing the appropriate focal length. Although perspective is technically always the same regardless of the focal length of your lens, it can change when you physically move to a different vantage point. As you can see in FIGURE 3-4, the subjects within the frame remain nearly identical, but the relative sizes of objects change such that the distant doorway becomes smaller relative to the nearby lamps.

FIGURE 3-4 The same framing captured at two focal lengths from different camera positions


TABLE 3-1 provides an overview of what focal lengths are required for a lens to be considered a wide-angle or telephoto lens, in addition to their typical uses.

TABLE 3-1 Typical Focal Lengths and Their Uses




Focal length      Lens type            Typical uses
Less than 21 mm   Extreme wide angle
21–35 mm          Wide angle
35–70 mm          Normal               Street and documentary
70–135 mm         Medium telephoto
135–300+ mm       Telephoto            Sports, birds and wildlife

Note that the focal lengths listed here are just rough ranges; actual uses may vary considerably. Many photographers, for example, use telephoto lenses to compress perspective in distant landscapes.

Lens focal length can also influence other factors. For example, telephoto lenses are more susceptible to camera shake because even the smallest hand movements become amplified when your angle of view is narrow; this is similar to the shakiness you experience while trying to look through binoculars.

On the other hand, wide-angle lenses are generally designed to be more resistant to flare, an artifact caused by non-image-forming light. This is in part because designers assume that the sun is more likely to be within the frame. Finally, medium-angle and telephoto lenses generally yield better optical quality for similar price ranges.


The focal lengths discussed here are for 35 mm or "full frame" cameras. If you have a compact, mirrorless, or digital SLR camera, you likely have a different sensor size. To compare the preceding numbers with your camera, look up your camera's crop factor and multiply it by the focal length of your lens to get the 35 mm equivalent.


Although the focal length of a lens alone doesn’t control the sharpness of an image, it can make it easier to achieve a sharp, handheld photograph—everything else being equal. This is because longer focal lengths require shorter exposure times to minimize blurring caused by shaky hands.

To demonstrate how this works, imagine trying to hold a laser pointer steady: its bright spot appears relatively steady on nearby objects but jumps around noticeably more on objects farther away. FIGURE 3-5 illustrates this example.

FIGURE 3-5 Types of vibrations in a shaky laser pointer

This is primarily because slight rotational vibrations are magnified greatly with distance. On the other hand, if only up-and-down or side-to-side vibrations are present, the laser’s bright spot does not change with distance. In practice, this typically means that longer focal-length lenses are more susceptible to shaky hands because these lenses magnify distant objects more than shorter focal-length lenses, similar to how the laser pointer jumps around more with distant objects due to rotational vibrations.

A common rule of thumb for estimating how fast the exposure needs to be for a given focal length is the one-over-focal-length rule, which states that for a 35 mm camera, the handheld exposure time needs to be at least as fast as 1 over the focal length in seconds. In other words, when you’re using a 200 mm focal length on a 35 mm camera, the exposure time needs to be no more than 1/200th of a second; otherwise, you might get blurring.

Keep in mind that this rule is just rough guidance. Some photographers can handhold a shot for much longer or shorter times, and lenses with image stabilization are more tolerant of unsteady hands. Users of digital cameras with cropped sensors, or sensors smaller than 35 mm (such as Micro Four Thirds, APS-C, and compact cameras), need to convert to a 35 mm equivalent focal length first.
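As a sketch of how you might apply the rule, including the crop-factor conversion (treat this as rough guidance, just like the rule itself; the function name is illustrative):

```python
def max_handheld_time(focal_length_mm, crop_factor=1.0):
    """One-over-focal-length rule of thumb: the slowest 'safe'
    handheld exposure time in seconds. For cropped sensors, the
    focal length is first converted to its 35 mm equivalent by
    multiplying by the crop factor."""
    return 1.0 / (focal_length_mm * crop_factor)

# 200 mm lens on a full-frame body: no slower than 1/200 s.
print(max_handheld_time(200))                # 0.005
# 50 mm lens on an APS-C body (crop factor about 1.5): about 1/75 s.
print(round(max_handheld_time(50, 1.5), 4))  # 0.0133
```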


A zoom lens allows us to vary the focal length within a predefined range. The primary advantage of a zoom lens is that it’s easier to achieve a variety of compositions or perspectives without having to change lenses. This advantage is often critical for capturing dynamic subject matter, such as in photojournalism and children’s photography.

Keep in mind that using a zoom lens doesn’t necessarily mean that you no longer have to change your position; it just gives you more flexibility. FIGURE 3-6 compares an image taken from the photographer’s original position with two alternatives: zooming in (which changes the composition) and moving in closer while zooming out (which maintains composition but changes perspective). For the purposes of this discussion, having a different composition means that the subject matter framing has changed, whereas having a different perspective means that the relative sizes of near and far objects have changed.

FIGURE 3-6 Different ways to use zoom lenses

Using a zoom lens, you can get a tighter composition without having to crop the image or change positions by simply zooming in on the subject. If you had used a prime lens instead, a change of composition would not have been possible without cropping the image.

You can also change the perspective by zooming out and getting closer to the subject. Alternatively, to achieve the opposite perspective effect, you could zoom in and move farther from the subject.


Unlike zoom lenses, prime lenses (also known as fixed focal length lenses) have a single, fixed focal length. If zoom lenses give you more flexibility, you may be wondering why you would intentionally restrict your options with a prime lens. Prime lenses existed long before zoom lenses were available, and they still offer many advantages over their more modern counterparts. When zoom lenses first arrived on the market, photographers often had to sacrifice a significant amount of optical quality to use them. More recent high-end zoom lenses, however, generally do not produce noticeably lower image quality unless the image is scrutinized by a trained eye or printed very large.

The primary advantages of prime lenses are cost, weight, and speed. An inexpensive prime lens can generally provide image quality as good as, if not better than, a high-end zoom lens. Additionally, if you need only a small fraction of a zoom lens's focal length range, a prime lens with a similar focal length will provide the same functionality while being significantly smaller and lighter. Finally, the best prime lenses almost always offer better light-gathering ability, or a larger maximum aperture, than the fastest zoom lenses. This light-gathering ability is often critical for low-light sports or theater photography and other scenarios where a shallow depth of field is necessary.

For lenses in compact digital cameras, a 3×, 4×, or higher zoom designation refers to the ratio between the longest and shortest focal lengths. Therefore, a larger zoom designation doesn’t necessarily mean that the image can be magnified any more. It might just mean that the zoom has a wider angle of view when fully zoomed out. Additionally, digital zoom is not the same as optical zoom, because the former artificially enlarges the image through a digital process called interpolation, which actually degrades detail and resolution. Read the fine print to ensure that you’re not misled by your lens’s zoom designation.


The aperture range of a lens refers to how much the lens can open up or close to let in more or less light, respectively. Apertures are listed in terms of f-numbers, which quantitatively describe the relative light-gathering area, as shown in FIGURE 3-7.

FIGURE 3-7 The f-numbers (from left to right) are f/2.0, f/2.8, f/4.0, and f/5.6.

As you learned in Chapter 1, larger aperture openings have lower f-numbers, which is often confusing to camera users. Because aperture and f-number are often mistakenly interchanged, I’ll refer to lenses in terms of their aperture size for the rest of this book. Photographers also describe lenses with larger apertures as being faster, because for a given ISO speed, the shutter speed can be made faster for the same exposure. Additionally, a smaller aperture means that objects can be in focus over a wider range of distance—a concept you also explored in Chapter 1 when we discussed depth of field.

TABLE 3-2 summarizes the effect f-numbers have on shutter speed and depth of field.

TABLE 3-2 How F-Numbers Affect Other Properties

Corresponding impact on other properties:

                  Light-gathering area   Required shutter speed   Depth of field
Higher f-number   Smaller                Slower                   Wider
Lower f-number    Larger                 Faster                   Narrower
As you can see, the f-number changes several key image properties simultaneously; as a photographer, you will want to make sure that all such changes are desirable for your particular shot.


When you’re considering purchasing a lens, you should know that specifications ordinarily list the maximum apertures (and maybe the minimum). Lenses with a greater range of aperture settings provide greater artistic flexibility, in terms of both exposure options and depth of field. The most important lens aperture specification is perhaps the maximum aperture, which is often listed on the box along with focal length(s), as shown in FIGURE 3-8.

FIGURE 3-8 Example lens specifications from retail packaging

An f-number of 1.4 may be displayed as 1:1.4, instead of f/1.4, as shown in FIGURE 3-9 for the 50 mm f/1.4 lens.

FIGURE 3-9 Example maximum aperture label on the front of a lens

Portrait photography and indoor sports or theater photography often require lenses with very large maximum apertures in order to achieve a narrower depth of field or a faster shutter speed, respectively. The narrow depth of field in a portrait helps isolate the subject from the background. On digital SLR cameras, lenses with larger maximum apertures also provide significantly brighter viewfinder images, which may be critical for night and low-light photography, and they often enable faster and more accurate autofocusing in low light. Manual focusing is easier too, because the narrower depth of field in the viewfinder makes it more apparent when objects come into or out of focus.

TABLE 3-3 summarizes some typical maximum apertures you'll find on a digital camera. You can see how even small changes in f-number lead to substantial changes in light-gathering area, because each halving of the f-number quadruples the light-gathering area.
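You can verify that quadrupling yourself: the aperture diameter is the focal length divided by the f-number, and light-gathering area scales with the square of the diameter. A minimal sketch (the function name is illustrative):

```python
def relative_area(f_number, reference):
    """How many times more light-gathering area f/f_number has
    compared to f/reference at the same focal length. Diameter is
    focal length / f-number, and area scales as diameter squared,
    so the ratio is (reference / f_number) ** 2."""
    return (reference / f_number) ** 2

print(relative_area(1.4, 2.8))            # 4.0 -> halving the f-number quadruples the area
print(round(relative_area(2.0, 2.8), 2))  # 1.96 -> roughly one f-stop (twice the light)
```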

Minimum apertures are generally far less important than maximum apertures. This is primarily because minimum apertures are rarely used: they soften the photo through lens diffraction, and they may require prohibitively long exposure times. When you want extreme depth of field, though, you might consider a lens with a smaller minimum aperture (that is, a larger maximum f-number).

TABLE 3-3 Typical Maximum Apertures






Fastest available prime lenses (for consumer use)

Fast prime lenses

Fast prime lenses

Fastest zoom lenses (for constant aperture)

Lightweight zoom lenses or extreme telephoto primes

Lightweight zoom lenses or extreme telephoto primes


Finally, some zoom lenses on digital SLR and compact digital cameras list a range of maximum aperture, which depends on how far you have zoomed in or out. These aperture ranges therefore refer only to the range of maximum aperture, not overall range. For example, a range of f/2.0–3.0 means that the maximum available aperture gradually changes from f/2.0 (fully zoomed out) to f/3.0 (fully zoomed in) as focal lengths change. The primary benefit of having a zoom lens with a constant maximum aperture instead of a range of maximum aperture is that exposure settings are more predictable, regardless of focal length. FIGURE 3-10 shows an example of a lens that specifies the range of maximum aperture.

FIGURE 3-10 Range of maximum aperture

Also note that even if you rarely use a lens's maximum aperture, that doesn't mean such a wide-aperture lens is unnecessary. Lenses typically exhibit fewer aberrations when stopped down one or two f-stops from their maximum aperture, such as f/4.0 on a lens with a maximum aperture of f/2.0. This means that if you want the best-quality f/2.8 photograph, an f/2.0 or f/1.4 lens may yield higher quality than a lens whose maximum aperture is f/2.8.

Other considerations when buying a lens are cost, size, and weight. Lenses with larger maximum apertures are typically much heavier, bigger, and more expensive. Minimizing size and weight may be critical for wildlife, hiking, and travel photography, which often involve long lenses or require carrying equipment for extended periods of time.


A wide-angle lens can be a powerful tool for exaggerating depth and the relative size of subjects in a photo. FIGURE 3-11 is an example of the kind of exaggeration you can achieve.

As you can see, the ultra-wide lens creates an exaggerated sky by manipulating the relative size of the near clouds versus the far clouds. For example, the clouds at the top of the frame appear as they would if you were looking directly up, whereas the ones in the distance appear as they would if you were looking at them from the side. This creates the effect of the clouds towering over you, resulting in a more evocative image.

However, wide-angle lenses are also one of the most difficult types of lenses to use. In this section, I dispel some common misconceptions and discuss techniques for taking full advantage of the unique characteristics of a wide-angle lens.

FIGURE 3-11 Example of exaggerated depth achieved with a 16 mm ultra-wide-angle lens


A lens is generally considered a wide-angle lens when its focal length is less than around 35 mm on a full-frame camera. This translates into an angle of view that is greater than about 55 degrees across your photo’s widest dimension, which begins to create a more unnatural perspective compared to what you would see with your own eyes.

The definition of ultra-wide is a little fuzzier, but most agree that this realm begins when focal lengths are around 20–24 mm or less. On a compact camera, wide angle is often what you get when you’re fully zoomed out, but ultra-wide is typically unavailable without a special lens adapter. Regardless of what’s considered wide angle, the key concept you should take away is that the shorter the focal length, the easier it is to notice the unique effects of a wide-angle lens.
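The relationship between focal length and angle of view follows directly from the geometry of a rectilinear lens. This brief Python sketch (the function name is my own) applies the standard formula on a full-frame sensor, whose widest dimension is 36 mm, and reproduces the roughly 55-degree threshold mentioned above:

```python
import math

def angle_of_view(focal_length_mm, sensor_width_mm=36.0):
    """Angle of view (in degrees) across a sensor dimension for a
    rectilinear lens: 2 * atan(sensor_width / (2 * focal_length)).
    The default 36 mm is the width of a full-frame sensor."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

print(round(angle_of_view(35), 1))  # ~54.4 degrees: the wide-angle threshold
print(round(angle_of_view(16), 1))  # ~96.7 degrees: well into ultra-wide
```

Halving the focal length roughly doubles how much of the scene fits in the frame, which is why the effects discussed in this section become so much more pronounced at 16–20 mm than at 35 mm.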


A common misconception is that wide-angle lenses are primarily used when you cannot step far enough away from your subject but still want to capture the entire subject in a single camera frame. But this is not the only use of a wide-angle lens, and if you were to only use it this way, you’d really be missing out. In fact, wide-angle lenses are often used to achieve just the opposite: when you want to get closer to a subject!

Here are the characteristics that make a wide-angle lens unique:

Its image encompasses a wide angle of view.

It generally has a close minimum focusing distance.

Although these might seem pretty basic, they result in a surprising range of possibilities. In the rest of this section, you’ll learn how to take advantage of these traits for maximum impact in wide-angle photography.


Obviously, a wide-angle lens is special because it has a wide angle of view—but what does this actually mean? A wide angle of view means that both the relative size and distance are exaggerated when comparing near and far objects. This causes nearby objects to appear gigantic and faraway objects to appear unusually tiny and distant. The reason for this is the angle of view, as illustrated in FIGURE 3-12.

FIGURE 3-12 Comparing two angles of view

FIGURE 3-13 Exaggerated 3-inch flowers using a 16 mm ultra-wide-angle lens

FIGURE 3-14 Disproportionate body parts caused by a wide-angle lens

Even though the two cylinders in the figure are the same distance apart, their relative sizes become very different when you fill the frame with the closest cylinder. With a wider angle of view, objects that are farther away occupy a much smaller fraction of the total angle of view.

A misconception is that a wide-angle lens affects perspective, but strictly speaking, this isn’t true. Recall from FIGURE 3-4 that perspective is only influenced by where you are when you take a photograph. However, in practice, wide-angle lenses often cause you to move much closer to your subject, which does affect perspective.

You can use this exaggeration of relative size to add emphasis and detail to foreground objects, while still capturing expansive backgrounds. To use this effect to full impact, you’ll want to get as close as possible to the nearest subject in the scene. FIGURE 3-13 shows an example of this technique in action.

In this extreme wide-angle example, the nearest flowers are almost touching the front of the lens, which greatly exaggerates their size. In real life, these flowers are only a few inches wide!

However, you need to take extra caution when photographing people using wide-angle lenses. Their noses, heads, and other features can appear out of proportion if you get too close to them when taking the photo, as shown in FIGURE 3-14.

In this example, the boy’s head has become abnormally large relative to his body. This can be a useful tool for adding drama or extra character to a candid portrait.