8

Color Grading

In Chapter 7, we looked at the different methods for ordering footage according to the cut produced by the editor. In this chapter, we’ll look at one of the most important processes that can be applied to the material: color grading. This term encompasses both the act of processing colors to make them match and the act of purposefully distorting colors from the original source for aesthetic purposes. Before we look at the various color-grading options available to the digital intermediate pipeline, we will first look at the issue of color management.

8.1 Color Management

Color science is a deep and complex subject. Most decisions concerning color are made from an intuitive point of view rather than a scientific one. For example, we decide which color clothing to wear by selecting an item of a particular color and trying it on, rather than by sitting at a desk with a calculator and performing a series of complex calculations. Color grading, and even color matching, is more of an intuitive process than an exact science. The most notable skill of a competent colorist is the ability to recognize when the colors look “right,” rather than the ability to merely operate the color-grading equipment.

8.1.1 Color Perception

Our ability to perceive color is based upon physiological as well as psychological features of the human body. Color is really just an imagined concept. It doesn’t exist in any real sense, but is a product of our brains interpreting the input from our visual system.

When each ray of visible light (light being a small part of the electromagnetic spectrum) hits a surface, part of it is reflected and part of it is absorbed (some of it may also be transmitted, if the surface happens to be transparent or translucent). The reflected light continues until it strikes another surface, which causes more reflections and absorptions, until it’s completely dissipated (used up). Just as different sounds are a combination of vibrations at different wavelengths, light can be subdivided into separate wavelengths. Specific wavelengths represent specific colors, and the gradual transition from one wavelength (measured in nanometers, billionths of a meter) to another creates the entire spectrum, or rainbow of colors. As well as absorbing some or all of the overall energy content of light, some materials are better at reflecting or absorbing specific wavelengths of light. The combination of wavelengths reflected by different materials is what gives objects their color.

If observers are in the room, some of the reflected light will enter their eyes and hit their retinas where it is absorbed, producing a photochemical reaction in the retinal “photoreceptor” cells (a process similar to how photography works). This reaction forms the basis of our ability to see.

The human eye works in the same way as a camera (whether film, video, or digital). This is no coincidence—camera manufacturers borrow their basic designs from nature. Practically every component in a camera, from the focusing and aperture mechanisms to the light-detection mechanisms, has an equivalent in the human eye. The human eye has four different types of photoreceptor cells—a “rod” cell, which detects differences in luminance, and three different “cone” cells, used for resolving color, each responding to light of specific wavelengths. Each of these cells works in an analog fashion, meaning that an almost unlimited range of luminance levels is perceivable, unlike digital images, in which the number of levels is restricted (as determined by the format’s bit depth). Of all the retinal cells, the rod cells are the most numerous. In fact, more rod cells exist than all the cone cells put together. The net result is that humans are better at perceiving differences in luminance than differences in color. Moreover, we perceive luminance as being far more important than the specific color content, which is why we can interpret black-and-white images without any problem but can’t correctly interpret images that have no variance in luminance.

You might assume, for instance, that the amount of light entering our eye determines how bright we perceive a scene to be.1 In actual fact, brightness perception is completely relative, affected by the scene’s contrast ratio. In other words, how bright we perceive parts of a scene to be depends upon the darkness of other parts.

The same is true of color. People can perceive colors wrongly in certain situations. For example, studies have shown that people don’t notice when a television’s color control is gradually adjusted over the course of a film. The screen may be displaying blue skin tones by the end of the film, but the audience’s perception continually compensates, and the color change goes unnoticed. This process is similar to our eyes adjusting when the lighting in a room is slowly changed. Context is important, too. A red square on a black background looks different than a red square on a white background, even though the color in each is measurably identical.

Don’t believe me? Take a look at the images that follow. The image on the right appears brighter, although it isn’t. And even though it’s possible to train your eyes so that both images in fact appear the same, remember that the majority of people (i.e., the audience) who see them won’t have trained their eyes in this way.


Figure 8–1   In this example, the two images are identical but are perceived differently. Had color backgrounds been used, the colors in the image would also appear different in each image

All of these factors help explain why color grading has to be done intuitively rather than by manipulating numbers.

8.1.2 Colorimetry

Colorimetry is the science of color measurement. A color can be broken down into two measurable components: “luminance” and “chromaticity,” and several other components can be derived from them. Luminance is a measurement of how much energy is inherent in the color, usually measured in candelas per square meter (or in foot-lamberts).2 It’s similar to a measurement of brightness, except that brightness is a more relative measure, based upon a variety of factors, such as surrounding colors. “Lightness” is a logarithmic measurement of luminance that takes into account the eye’s nonlinear response to light, which makes it a more intuitive measurement. In practice, these terms, while subtly different, are used interchangeably. When talking about color in the digital intermediate environment, we refer more to changes in luminance and brightness (as in: “that’s too dark!”, “increase the luminance!”, and “make it lighter!”), rather than discussing absolute values of light energy. Thus, the specific nomenclature amounts to the same thing. In broad terms, a lighter color is closer to white, while a less light (darker) color is closer to black.
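To make the luminance/lightness relationship concrete, here is a minimal sketch using the CIE 1976 L* formula, one standard way of expressing lightness (it compresses luminance with a cube-root law to approximate the eye’s nonlinear response; choosing this particular formula is illustrative, as grading systems each have their own transfer curves):

```python
# Sketch: CIE 1976 lightness (L*) from relative luminance Y (0.0-1.0).
# Illustrative only; grading systems each use their own transfer curves.

def lightness(y: float) -> float:
    """Map relative luminance Y (0 = black, 1 = reference white) to L* (0-100)."""
    if y > (6 / 29) ** 3:                # above ~0.0089, use the cube-root segment
        return 116.0 * y ** (1 / 3) - 16.0
    return (29 / 3) ** 3 * y             # near black, the curve is linear

# An 18% gray card, despite reflecting less than a fifth of the light,
# sits near the middle of the perceptual lightness scale:
print(lightness(0.18))  # ~49.5
```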


Figure 8–2   An image at different brightness levels (see also the Color Insert)

“Saturation” is the measurement of a color’s purity, the color’s strength in relation to its brightness. A color can be composed of light of several different wavelengths, and the greater the variety of wavelengths in a particular color, the less saturated the color. The less saturated a color, the closer to gray in appearance it becomes. The concept is similar to adding more water to a juice drink. Saturated colors are perceived to be more vibrant and more artificial.

“Hue” is the measurement of a color’s actual color component. It can be thought of as a specific, dominant wavelength of visible light in a color sample. It’s typically measured on a color wheel, where all the colors in the visible spectrum are positioned around a circle. The hue of the color in question is found by measuring its angle from the top of the wheel (which is usually red).


Figure 8–3   The same image shown with different saturation levels (see also the Color Insert)

Another method for describing color is to use an “additive” color model that mimics the mixing properties of light. White light can be formed by mixing red, green, and blue light. Because of this, any color can be formed by mixing these three components in different proportions. Black is the absence of any light at all. The different proportions define the color (i.e., hue and saturation) component, while the overall amount of light defines the luminance level.

A “subtractive” color model mimics the absorption properties of different types of material. Cyan, yellow, and magenta can be mixed in different proportions to make any color. The absence of any color leaves white, while a high level of each color tends toward black. This method is most transferable to the creation of printed colors, mixing inks for each of the colors (onto white paper) to generate any other color.

In practice though, black is very difficult to create using colored inks, and so a black component is often mixed in with the other three to improve the ability to form colors.
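As a sketch of how the additive and subtractive models relate, the simplest textbook conversion treats cyan, magenta, and yellow as the complements of red, green, and blue and then pulls out a common black (K) component. Real print conversions rely on measured device profiles, so this shows only the principle:

```python
# Sketch: naive RGB -> CMYK conversion (all values 0.0-1.0).
# Real ink conversions use profiled LUTs; this shows only the principle.

def rgb_to_cmyk(r: float, g: float, b: float) -> tuple:
    c, m, y = 1.0 - r, 1.0 - g, 1.0 - b   # subtractive complement of each channel
    k = min(c, m, y)                      # the common "gray" becomes black ink
    if k == 1.0:                          # pure black: avoid dividing by zero
        return 0.0, 0.0, 0.0, 1.0
    # Remove the black component from the colored inks and rescale the rest
    return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k

print(rgb_to_cmyk(1.0, 0.0, 0.0))  # red -> (0.0, 1.0, 1.0, 0.0)
```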

“Spot-process” colors, such as those created by Pantone (www.pantone.com), are strictly defined individual colors such as “8201” that represent the same color regardless of the format. Such colors are typically accompanied by a swatch book, a visual reference of the printed colors.

The CIE (Commission Internationale de l’Éclairage) model of color (also the variants “Lab,” “XYZ,” or “Luv”) is the most widely accepted scientific method of describing a color. One luminosity component is combined with a chromaticity coordinate to derive the desired color. This method is independent of the medium containing the color (be it ink on paper, projected light, or an LCD display), and it can describe colors that are outside of the human gamut (i.e., colors that the human eye can’t perceive).

Any of these methods might be used to represent a digital image, depending upon the file formats and software. Many of them can be applied to each other. For example, with an RGB-encoded image, it’s possible to adjust the picture’s hue content.

The Color Wheel

Many methods are available for creating or choosing colors within a computer, such as using solid-color swatches or entering values (say, for the red, green, and blue components). One of the most intuitive ways of creating or choosing colors is through the use of a color wheel. A typical color wheel draws all the different hues in a circle; each color varies in some attribute, usually saturation, toward the center of the wheel. Using this method, a colorist is able to quickly find the particular color desired. In fact, this method can be used to pick combinations of any two attributes of a color. However, a color is normally derived from (at least) three different parameters, and so a third value must be attached to the wheel. Most color-picking systems provide this additional parameter in the form of a “slider,” so that an additional lightness slider might be included with a hue/saturation wheel.
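A hue/saturation wheel with a value slider is essentially the HSV color model, and Python’s standard colorsys module can turn such a pick into RGB. A small sketch (the particular pick is an arbitrary example):

```python
# Sketch: converting a color-wheel pick (hue angle, saturation radius)
# plus a slider value into RGB, using Python's standard colorsys module.
import colorsys

hue_degrees = 130.0   # angle around the wheel (0 degrees = red, by convention)
saturation = 0.8      # distance out from the wheel's gray center
value = 0.9           # the third parameter, supplied by a slider

r, g, b = colorsys.hsv_to_rgb(hue_degrees / 360.0, saturation, value)
print(round(r, 3), round(g, 3), round(b, 3))  # (0.18, 0.9, 0.3): a vivid green
```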


Figure 8–4   A typical color wheel (see also the Color Insert)

Alternatively, a 3D color cylinder plots the third parameter in 3D space. This method is less frequently used, because it’s difficult to represent a 3D shape on a 2D screen, without presenting it from multiple angles (which would defeat the purpose of having such a compact interface in the first place).

8.1.3 Color Reproduction

Using the correct equipment, it’s possible to obtain an accurate measurement of the color of an object in a scene. Photographers often use light meters to measure the light incident on a scene or reflected from a particular point. Density readings can be made for film samples, and a “vectorscope” can be used to measure different qualities of a video signal.

With digital images, it’s very easy to extract color information for individual pixels in a digital image, using suitable image analysis software. In spite of this, it’s extremely difficult to ensure that any given color is reproduced correctly on different media, or under different conditions. A large part of this is certainly a perceptual issue. As mentioned already, a color even looks different depending upon the surroundings. But it’s also difficult to reproduce colors that are objectively the same—that is, ones that produce the same measurements.

One reason is differences in color-producing devices. Many colors can be made using inks on paper, for example, that can’t be accurately re-created using LCD crystals. A whole range of colors can be projected from film that can’t be re-created using the phosphors that make up an image on a television screen. Each medium has a specific, limited range of colors that it can produce, known as the color “gamut” of the medium. For the most part, these gamuts overlap somewhat, meaning colors within the areas of overlap can be directly converted between different formats without any problem. But colors in areas that don’t overlap are said to be “out of gamut,” and some other solution must be found to create them.

What usually happens to out-of-gamut colors is that the closest match is found. This is a simple method, and it works well in certain situations, such as where the gamuts of the two mediums are reasonably similar to begin with. However, this method can produce “clipping” artifacts, where a range of colors in one gamut may be converted to a single closest-match color in the new format, resulting in a single band of solid color.
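A minimal sketch of the closest-match approach, assuming the target gamut is simply the displayable 0.0–1.0 RGB cube: every out-of-gamut value is clamped to the nearest representable one, which is exactly what collapses distinct colors into a single band.

```python
# Sketch: naive out-of-gamut handling by clipping to the nearest value the
# target gamut can represent (here, simply the 0.0-1.0 RGB cube).
import numpy as np

def clip_to_gamut(pixels: np.ndarray) -> np.ndarray:
    """pixels: float array of shape (..., 3); returns the closest in-gamut match."""
    return np.clip(pixels, 0.0, 1.0)

# Three distinct "hotter than white" colors all collapse to the same value,
# producing the banding artifact described above:
hot = np.array([[1.2, 1.05, 1.3], [1.6, 1.1, 1.4], [2.0, 1.2, 1.9]])
print(clip_to_gamut(hot))  # every row becomes [1.0, 1.0, 1.0]
```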

It should be noted that the issue of out-of-gamut colors can only really be solved optimally by using a digital system. The system needs to have some degree of intelligence in order to ascertain which colors it can’t re-create accurately. For example, when creating a print from photographic film, no options are available for controlling how out-of-gamut colors are handled. The only solution, aside from using a digital intermediate, is to perform some rudimentary color grading using printer lights, which affects the entire image rather than just the out-of-gamut colors.

But an even bigger problem arises when accurately reproducing colors: there’s a high degree of variance in viewing conditions and display properties, even when viewing from the same source. For example, the same VHS video tape can be played in a number of different locations, and the color content will appear to be different, even though the tape’s contents haven’t changed. The ambient lighting in the room may be slightly brighter, causing the display to look “washed out” (or low contrast). Different types of monitors may display the same colors differently. Even two televisions of the same make and model, playing the same source video side by side, might produce visibly different results. The reason for this is lack of calibration.

8.1.4 Calibration

Over the course of a few years, the colors on most display devices deviate from their original state. Phosphors wear down, LCD elements and light bulbs gradually burn out, and other components weaken. The net result is that lots of minor faults caused by hardware deterioration add up to a large drift in the displayed image.

Fortunately, most display devices have methods to adjust the image for different environment conditions, as well as for general wear and tear. Most televisions have controls to adjust the image’s brightness and contrast, and its hue and saturation. Computer monitors also have methods for controlling the image, either through hardware (usually a control on the monitor itself) allowing the monitor display to be adjusted to better match the incoming picture signal, or through software, correcting the signal sent to the monitor instead.

The purpose of calibration is to ensure that two different devices display the same image the same way. It’s also more than that though. Not only do two different monitors need to match, but they need to match an explicit standard. A properly calibrated monitor needs to match a specified set of parameters, so that every calibrated monitor matches every other calibrated monitor, regardless of location or make of device. The simplest and most effective way to do so is through the use of a test pattern such as a PLUGE (Picture Line-Up Generation Equipment) test pattern, or color bars.

Almost all broadcast-quality video tapes begin with bars: a standard signal containing a calibrated picture and audio reference that is recorded before the program starts. The idea is that the color bars recorded onto the tape are generated using equipment calibrated to SMPTE specifications, meaning they’re standardized throughout the motion picture industry. It can therefore be assumed that the bars look correct on any calibrated monitor and produce expected results on any calibrated vectorscope. Therefore, anytime the bars don’t display as expected on a calibrated system, it can be assumed that the signal on the tape must be adjusted.3 The signal must be corrected, using controls on the video player, so that the bars on the tape display correctly. Once the bars display properly in a calibrated system, the accompanying footage also displays as intended.

The problem with this system is that it relies on everyone using properly calibrated equipment. If, for example, the digital intermediate facility creating the video tape didn’t use a calibrated system, then it’s entirely possible that the bars won’t display accurately. Worse, it may in turn mean that the accompanying picture looks correct onscreen only until the signal is adjusted to make the bars display correctly. In practice, however, most facilities, especially those working on large-scale projects, do work within a properly calibrated environment, and calibration issues rarely occur.

With film, color balance is mostly determined by the film stock and the type of lighting (filters and colored gels can also play a part, however). Photographers and cinematographers usually record only a grayscale “wedge” to ensure correct luminance levels (or exposure settings). Color balancing is almost always done as a post-development process, and unlike most video transmissions, it’s done subjectively, by analyzing the image (either using an automatic process or a trained laboratory grader). For situations requiring some degree of objective reference, a standard Macbeth ColorChecker chart (www.gretagmacbeth.com) can be used.


Figure 8–5   An example PLUGE (left) and a SMPTE color bar (right) (see also the Color Insert)

8.1.5 Gamma

The digital world has no equivalent calibration system. Digital images aren’t accompanied by a set of bars to ensure proper coordination between two different displays. But this isn’t really necessary. Digital images are specified mathematically, not by a signal that’s subject to noise, deterioration, or interference. Every computer system reads the values from a digital image accurately, regardless of how far the image has to be transmitted or how many times it has been copied. If a digital image looks wrong, it’s normally because the display system has a problem, rather than because of an issue with the file.

That’s not to say that calibration doesn’t play a part, however. To calibrate a computer monitor, you typically adjust the “gamma” of the monitor until a test pattern looks (or is measured to be) correct. The gamma parameter is based upon the way monitors display images. The brightness of a single pixel displayed on a monitor is controlled by the amount of energy projected at it. This relationship between the pixel brightness and energy level is nonlinear—it takes more energy to make a dim pixel appear brighter than it does to make a bright pixel brighter. The exact nature of this nonlinear relationship is resolved by using “gamma-correction,” which effectively controls how bright a particular luminance level is displayed. This is normally specified by a single value, where 1.0 represents a completely linear display. Typical values are between 1.8 and 2.2 for most computer monitors.
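A minimal sketch of pure power-law gamma, assuming a display gamma of 2.2 (real standards such as sRGB add a small linear segment near black, which is omitted here):

```python
# Sketch: power-law gamma encoding/decoding for an assumed display gamma of 2.2.
import numpy as np

GAMMA = 2.2

def encode(linear: np.ndarray) -> np.ndarray:
    """Linear light (0-1) -> gamma-corrected code values sent to the display."""
    return np.power(linear, 1.0 / GAMMA)

def decode(encoded: np.ndarray) -> np.ndarray:
    """Gamma-corrected code values -> the linear light the display emits."""
    return np.power(encoded, GAMMA)

mid = np.array([0.5])
print(encode(mid))          # ~0.73: half the linear light encodes well above mid-code
print(decode(encode(mid)))  # round-trips back to 0.5
```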

When monitors need calibrating, the gamma-correction is adjusted so that images display correctly. The simplest way to do this is to look at a test pattern, usually comprising several shades of gray, and adjust the gamma-correction value until the image appears correct. Measurements can also be taken directly from the surface of the monitor using specialized calibration kits, which is by far the most accurate method.

File Gamma

Certain digital images (depending on the file format used) can have an associated gamma value saved within the image. This is usually the gamma-correction of the system that created the image. The idea is that when displayed on a different system, the file’s value is compared to the display system’s value, and the image’s luminance values are automatically adjusted to compensate for differences between the two. In reality though, the value stored within an image is useless. A gamma-correction setting stored within a file is not necessarily an indication of whether the host system was properly calibrated. It can’t possibly account for the multitude of differences in the perception of an image that aren’t covered by the gamma-correction. The file can’t record what the ambient lighting was like, nor what the manual controls on the display device were set to. In short, it doesn’t prove that the system was calibrated properly. If both the original system and the current system are calibrated properly, the image displays the same on both systems, making the gamma value redundant. Worse, a system may try to compensate for a different gamma value stored within a file, which means that you may not see the image as it really is, even on a properly calibrated system. A far better option is to disable any file format gamma values and include a reference image acquired from each different source.

8.1.6 Correlated Color Temperature

White light is rarely pure white. The actual quality of the white light is termed its “correlated color temperature,” measured in Kelvins.4 Predominantly red light is said to have a lower temperature, while blue light has a higher temperature. Sunlight varies in color temperature throughout the course of a day, because of its varying angle to the earth. Sunrise and sunset light is lower in temperature, toward the red end of the spectrum (which is why sunrises and sunsets are often accompanied by red skies), while midday sunlight is much bluer. Artificial lights also have an inherent color temperature, varying from very low temperatures to very high ones, though these temperatures don’t normally change over a period of time as sunlight does. The Appendix includes a list of the correlated color temperatures for different light sources.

Color temperature is important for monitor calibration. Monitors have different color temperatures for white depending upon their specification. For instance, the SMPTE specification requires that monitors in North America have a correlated color temperature of 6500K (although commercial televisions tend to have higher temperatures). A monitor’s correlated color temperature affects the characteristics of the monitor’s color gamut. If red, green, and blue light combine to create white at a particular temperature on one monitor, while another monitor has a different color temperature, then it’s safe to assume that the color reproduction of the two monitors will be very different.

Color temperature also plays a role in shooting the scene. Video recorded during a shoot must be correctly “white-balanced,” and a film camera must use a film stock appropriate to the lighting conditions and type of lights that are used. Otherwise, all the white light in the scene may be recorded with a color cast (or tint). For the purposes of color grading the digital intermediate, we tend not to worry about the color temperature mismatches of the original shoot and instead focus on which adjustments (if any) have to be made to the result. So if a scene is shot on daylight-balanced film under fluorescent lights, the result, while not ideal, can be corrected by simply removing the green color cast.

8.1.7 Lookup Tables

One of the most efficient ways to convert images between different color spaces is to use a lookup table, or LUT. Very simply, a LUT is a table for converting values. Input values for different colors are “mapped” onto a set of output values, which is used to alter the image’s color. For example, the color red might be mapped onto the color blue in the LUT. Then, for every occurrence of red in an image the LUT is applied to, it is replaced with blue. The practical use of LUTs is a little more subtle than that, however. LUTs are normally used to correct out-of-gamut issues. For instance, for an image saved in RGB color space to be printed onto paper, the file must first be converted to a CMYK color space. To do so, you can use a LUT to convert each RGB color into an equivalent CMYK color, or its closest match where colors are out of gamut. (The LUT can also use a scaling method to change all the colors by some degree, so that the image is perceptually the same as the original.)
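In its simplest form, a per-channel (1D) LUT is just an array: each possible input code value has a precomputed output, and applying the LUT is a single lookup per pixel. A hedged sketch (the curve here, a linear contrast stretch around mid-gray, is an arbitrary example):

```python
# Sketch: building and applying a per-channel (1D) LUT to 8-bit pixel values.
# The curve (a linear contrast stretch around mid-gray) is an arbitrary example.
import numpy as np

codes = np.arange(256) / 255.0                # every possible 8-bit input value
lut = (np.clip((codes - 0.5) * 1.2 + 0.5, 0.0, 1.0) * 255).astype(np.uint8)

def apply_lut(image: np.ndarray, table: np.ndarray) -> np.ndarray:
    """image: uint8 array; every pixel value is replaced by its table entry."""
    return table[image]

frame = np.array([[0, 64, 128, 192, 255]], dtype=np.uint8)
print(apply_lut(frame, lut))  # [[0 51 128 204 255]]: tones pushed apart from mid-gray
```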

The use of LUTs is not just restricted to converting image data, however. In the computer visual effects industry, artists often work with scanned film that has been encoded in a logarithmic format (which better emulates the response of film). To display the film as it would be seen when printed and projected, artists typically use a “display LUT” to display the image in an RGB color space, so that it can be displayed more accurately on their monitors. In this instance, the artists don’t use the LUT to alter the image’s contents, but to alter only the way it’s displayed onscreen. This prevents unnecessary image degradation through the process of being converted through multiple color spaces (and lessens the impact of an inaccurate LUT).

LUTs are used for similar purposes in the digital intermediate pipeline. It’s very rare (unless working exclusively with video footage) for a production to use only a single color space. Film-based projects use film footage (sometimes on a variety of different film stocks, each of which may have a different color space) that is scanned (each scanner has its own color space, according to the type of lamp or lasers and the CCD elements) to a particular format (which also has a specific color space), is displayed (on a monitor or projector, for example, each with its own color space), and color graded (although this process is usually a purely mathematical one, and the internal color space of the grading system is normally exactly the same as the file format’s), before being output to a variety of different formats, such as DVD and recorded film (each with specific color spaces). Most digital film-grading systems work with images stored in a film color space, so that if you were to record scanned film directly back onto film (without any color grading), minimal difference between the recorded version and the original would be perceptible. This means that a typical facility has to worry only about converting the film color space to video color space for display and video output purposes.

When no interactive color grading is to be performed on the images (or when some form of automated grading is performed), this is something of a trivial matter, because all operations are done mathematically and don’t depend upon perception. However, in pipelines requiring interactive color grading, it becomes a major problem, because the accuracy of the LUTs is paramount. During interactive color grading, the colorist makes adjustments to the color based on what he or she sees on the display. If, for instance, something looks “too red,” the colorist reduces the amount of red. But if the actual color stored in the file, without any intervention, would have actually printed out without being too red, then the grader has made it worse, based upon inaccurate information from the display. Similarly, something that looks correct on the monitor may actually be in need of correction, once seen on film.

This is currently the most significant problem plaguing digital color grading, but it’s also the area where the most advancement is being made. Part of the problem is that it’s just not practical to continually verify all grading decisions on film, and in addition, there’s such a large difference between the color spaces of film and video. The other concern is that a simple LUT is based upon numerical approximations, primarily to correct specific tones, and doesn’t take into account the complex dynamic of luminance and saturation within different color spaces.

A possible solution may be to use a 3D LUT. Rather than a single table of values, 3D LUTs (also referred to as “color cubes”) contain entire mathematical models of the two color spaces in question. To overcome the color clipping issues associated with regions of out-of-gamut colors, some form of gamut scaling is used. A “linear compression” method takes the entire original gamut and stretches and shrinks it so that all the colors fit within the new gamut. In many situations, images produced using this method are perceptibly indistinguishable from the originals, although many of the colors that were not out-of-gamut have also been altered. An alternative is to use a “soft clip,” where only the colors close to the out-of-gamut regions are manipulated, so that the majority of colors “in-gamut” retain their values, although this can occasionally produce bizarre results.
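A hedged sketch of the soft-clip idea: values below a chosen shoulder pass through untouched, and only the region approaching (and beyond) the gamut ceiling is compressed. The shoulder position and roll-off curve here are arbitrary illustrative choices:

```python
# Sketch: a "soft clip" that leaves most in-gamut values untouched and rolls
# off only values near or above the gamut ceiling (1.0). The 0.9 shoulder and
# the exponential roll-off are arbitrary illustrative choices.
import numpy as np

def soft_clip(values: np.ndarray, shoulder: float = 0.9) -> np.ndarray:
    out = np.asarray(values, dtype=float).copy()
    hot = out > shoulder
    excess = out[hot] - shoulder
    # Map (shoulder, infinity) smoothly into (shoulder, 1.0):
    out[hot] = shoulder + (1.0 - shoulder) * (1.0 - np.exp(-excess / (1.0 - shoulder)))
    return out

vals = np.array([0.5, 0.85, 0.95, 1.2, 2.0])
print(soft_clip(vals))  # 0.5 and 0.85 pass through; brighter values roll off below 1.0
```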

The issue with 3D LUTs is that, ultimately, the content determines how an image should be displayed (which is also why high-quality grading needs to be done subjectively rather than automatically). Strategies for gamut mapping work well for certain types of images and not others. When comparing film to video, film tends to retain a lot of detail at the extremes of the luminance range. Film can have a large amount of detail in the highlight regions, whereas video, being a linear medium, has detail distributed equally at all luminance levels. Factors such as this also explain why it’s impossible to completely reproduce certain imaging methods on others. You can approximate or stretch highlight detail on a video display, but it will never look the same. With any gamut-mapping method, there will almost always be some trade-off, whether in terms of color, brightness, or contrast.


Figure 8–6   Although display LUTs don’t directly affect the images, they’re used as part of the decision-making process to determine how the images should be modified

8.1.8 Color Management Systems

A lot of perceptual factors are involved throughout a digital intermediate production. From the way a scene looks during shooting, through the way that it looks after it has been recorded and subsequently graded, until the film’s premiere, a single image can appear different at these various stages. Much of the time this doesn’t matter, because the way the image is perceived has little or no impact on how it looks at the end, nor any impact on the image quality. However, decisions are made at certain points based on how the image is perceived, and these decisions affect the final result.

Clearly a lot of elements must be considered to ensure that grading is performed under the best conditions, yielding the most accurate and highest-quality results. As with data management, perhaps the best option is to use an all-in-one color management system to integrate the issues of calibration and gamut conversion. Such systems are designed to eliminate the need to micromanage every aspect of a display device, instead utilizing a combination of preset parameters and periodic calibration for different situations.

The simplest all-in-one color management systems create a profile of each display device, based upon a calibration process. The system then compares the characteristics of any image to be displayed with the display profile and applies some form of display LUT to the image. Many of these systems rely on information embedded into each digital image, though, which may be neither present nor accurate.

More sophisticated color management systems exist that build profiles based upon every step of the pipeline. For example, Filmlight’s Truelight color management system (www.filmlight.ltd.uk) accounts for the characteristics of the film stock containing the original footage, the properties of the scanner, as well as the eventual output medium (such as projected film, accounting for the characteristics of the film recorder, the output film stock, and even the projector); it then combines these characteristics with a profile of the display device using a 3D LUT.


Figure 8–7   Perception is a factor throughout a production, but at a few key stages, it’s critical to the appearance of the final image

Many processes in the digital intermediate pipeline rely on displaying images, and many affect the color of an image: some alter the image permanently (such as color grading), while others modify only the way an image is displayed (such as the use of a LUT).

Ultimately though, the only time color management is necessary is when color-grading an image or when viewing a color-graded image. None of the preceding stages (such as scanning film or copying files) usually require any alteration or selection of colors based upon what is seen on a monitor, and therefore don’t require precise color reproduction.5 In theory, the preceding stages can be done without the use of a color monitor. Digital images created by scanning film or by capturing video assign colors to pixels analytically, measuring film density or video signals respectively. Except in situations where image acquisition requires an operator to make adjustments to the color output, color conversion is handled automatically.

It can be argued that color management isn’t even necessary prior to the grading stage, provided that color grading is done by a competent colorist using calibrated equipment. One of the main reasons for performing digital color grading (indeed, one of the main reasons for using a digital intermediate pipeline) is that media with mismatched colors, from a variety of sources, can all be combined together and processed so that they’re all consistent. In theory at least, the colorist takes all of the footage and works with it until the output is suitable. However, superior image quality and faster turnaround may be possible with an end-to-end calibration system.6

8.2 Digital Color Grading

Compared to every other method for color grading moving footage, digital color grading provides the most comprehensive set of options for altering color. Commercials and music promos have been taking advantage of these tools for many years, and film productions are now starting to as well. Whether you have a 120-minute, 35mm film production or a DV short film on a laptop computer, you can enhance your footage in many different ways by using a digital-grading system.

One LUT to Rule Them All

It might be considered the “holy grail” of color management, but no LUT (3D or otherwise) is suitable for all possible situations. Different shots have different requirements. Certain shots might have a large amount of black, white, gray, saturated colors, or any combination. Each of these situations might require a different LUT for optimal results. Unfortunately, it isn’t practical to spend time allocating different LUTs to different shots over the course of an entire program.

A good compromise might be to determine early on the production’s overall “look” (e.g., “dark and moody,” “bright and saturated,” or “evenly balanced”) and then to create a LUT suitable for shots falling into those categories. Shots that are potentially problematic can be tested by actually outputting them individually and viewing the results projected on film.

It’s possible that using such a system may require continuous updating of the LUT throughout a production. If so, you must ensure that doing so won’t cause grading inconsistencies. Typically, the creation of each LUT involves rechecking all the work done previously and having to re-render any images that have undergone a LUT transformation.

8.2.1 Painting by Numbers

Whether you change a red fish to a yellow one, turn a gray sky cerulean blue, or match the look of two shots filmed on different days, you are just rearranging numbers. All digital-grading systems, no matter how sophisticated, don’t work in terms of specific colors, but just by performing mathematical operations. If you make something redder, the system increases the pixels’ red component, while reducing the green and blue components. If you select all the green areas, the system searches for pixels with a strong green component. Granted, some of the mathematics used to perform some grading operations is highly sophisticated, but it’s all simply math. For the purposes of interactive grading, it may not seem important to know this, and in fact, good grading is done intuitively. But digital-grading systems have limits, and those limits are imposed entirely by numerical constraints. Color-precision errors, clipping that occurs at the extremes of an image channel, and aliasing effects are all due to the mathematical nature of digital images (which are covered in Chapter 12).

Color-Space Independence

By far the best option is to consistently record color in absolute color values—that is, in a way that doesn’t rely on such relative scales as RGB. The “Lab” color model uses a luminosity value, coupled with chromaticity “coordinates,” that provides absolute color values. This means that any color can be represented within the format, regardless of whether it can be displayed on a particular device. For this system to be used efficiently, you have to take into account the acquisition device’s color space, because any colors that are out of the device’s gamut may not be measured correctly. Further, if any alterations are made to the color, you also have to take into account the gamut of the display, because the operator may think that he or she is looking at a particular color but is actually looking at a color that can’t be displayed correctly. Many color management solutions track colors that may be out of gamut and include an option for highlighting them.
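Flagging potentially out-of-gamut pixels is simple once an image has been converted into the display’s own space; this sketch assumes that conversion has already happened and merely tests whether each pixel survived within the displayable 0.0–1.0 range:

```python
# Sketch: building an out-of-gamut highlight mask. Assumes the image has
# already been converted into the target device's space, where displayable
# values lie in 0.0-1.0; anything outside can't be shown faithfully.
import numpy as np

def out_of_gamut_mask(pixels: np.ndarray) -> np.ndarray:
    """pixels: float array (..., 3) -> boolean mask, True where unrepresentable."""
    return np.any((pixels < 0.0) | (pixels > 1.0), axis=-1)

img = np.array([[[0.2, 0.5, 0.7], [1.3, 0.4, -0.1]]])
print(out_of_gamut_mask(img))  # [[False  True]]: the second pixel needs attention
```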

The exception to this situation is computer-generated (CG) material. Provided that the creation of the material doesn’t rely on any externally acquired images or color measurements, a CG image, encoded in Lab color space, can accurately record any color, even if it can’t be perfectly displayed. For example, 3D software can render an image of a virtual ball made out of a shiny yellow-green material (which is a notoriously difficult color to reproduce on monitors) under simulated lighting. Every single shade of yellow on the ball’s surface can be accurately recorded (with a precision dependent upon the system’s bit depth) within the Lab color space. Unfortunately, displaying the image on a monitor or printing it on a sheet of paper subjects it to the limitations of the new medium’s gamut, and some colors will be displayed incorrectly. However, the colors can be manipulated mathematically and output to different mediums with the maximum possible degree of accuracy.

Color Management Everywhere

“It doesn’t look like that on my monitor” is a frequent complaint heard throughout the digital intermediate process. Whether it’s a studio executive comparing the video copy to the film screening, a cinematographer comparing the grade to a reference image on a laptop, or even two graders in disagreement, the statement is unavoidable. Most people assume that the same image is displayed in the same way on every device, but it isn’t true. And aside from potential technical issues, such as out-of-gamut colors or contrast ratios, the majority of the time this problem is caused by lack of calibration. So it’s important that everyone who is concerned about the grading process of a production ensure that they’re viewing all material on a system that has been calibrated to a specification, which more often than not should be set by the digital intermediate facility in collaboration with the director of photography. The use of hardware calibration “probes,” such as X-Rite’s MonacoOptix kits (www.xrite.com), can simplify the calibration process and allow monitors to be calibrated even when using laptops on location shoots.

Having said that, it’s possible, with enough experience, to grade images simply by relying on the pixels’ numerical information. For example, an experienced colorist can tell from looking at the pixel values how that pixel will probably print on film. Because color perception is affected by a multitude of factors, including diet and, in particular, the degree of fatigue the colorist is experiencing, many graders rely on the numerical information to ensure a high-quality grade, and most grading systems offer a wealth of statistical information about individual pixels or groups of pixels, including histogram distributions and vectorscope simulations.

8.2.2 Correcting Colors

Color grading (or “color correction” or “color timing”) is used for a variety of different purposes. Its primary use is ensuring each shot in a sequence maintains the same color balance as other shots, so that a shot doesn’t look out of place and distract the viewer. This process is known as “continuity grading.” As is the case with many scenes in a production, shots may be filmed on different days, at different times of the day, or under different lighting conditions. Any of these situations can change the color balance, so when these shots are cut together, it becomes apparent that some shots have continuity with others while some don’t. Continuity grading can alter the colors of individual shots so that they all match. Then when the sequence is played back, every shot matches, as if they were all filmed at the same time.

Color grading can also be used to enhance a production’s look. Colors can be altered so that they’re deeply saturated, or dark and low key, to achieve a mood within the story. Colors can be removed, creating more of a monochromatic look, or they can be changed entirely. With digital-color-grading systems, these changes can be made to entire sequences, individual frames, or even just specific areas of a shot.


Figures 8–8 through 8–11   A digital intermediate pipeline provides endless possibilities for changing an image’s color (see also the Color Insert)


Figure 8–9


Figure 8–10


Figure 8–11

What is Image Quality Again?

Up until now, the main thrust of the digital intermediate pipeline has been getting the best image quality possible, in terms of resolution and color. During color grading though, preserving image quality becomes only a secondary concern. That’s not to say it isn’t important—it’s just that one of the main reasons for supplying the digital-grading system with such high-quality images is that doing so creates more possibilities for color grading.

Color grading is an inherently destructive process. Ranges of color values get stretched apart or squashed together, changed entirely, and even clipped or crushed at the extremes. But at this stage, it doesn’t matter so much. Provided the grader knows the effects of each operation on the image and doesn’t use destructive processes unnecessarily, all the changes are ultimately being made for the better. When such operations provide the look you desire, you can make a shot extremely high contrast, ignoring the fact that the resulting image may lose a lot of color detail. The point is that you’re making choices that affect the image quality because you want to and not because you’ve been forced to by some limitation or oversight of the pipeline. Compressed video footage, for instance, while convenient to work with at other stages of the pipeline, can cause immense problems for color grading. If you stretch the colors apart too far, the compression artifacts that were previously imperceptible now dominate the image, severely limiting the grader’s options. Any compromises made with quality prior to color grading will reveal themselves at this stage.

Although artistic intent takes precedence over image integrity during the grading process, it’s still important to consider output requirements when making extreme changes to an image. “Selective destruction” caused by color grading is fine if no perceptible artifacts result, but some problems, such as “color banding,” can occur when output to other media. One of the most common situations is that highlights lightened for film-based projects can look fine on film but appear to “bloom” when viewed from a video source. The Appendix lists common image operations and their effects on image quality.

8.2.3 Anatomy of a Grading System

Various solutions are available for digital color grading—solutions ranging from the inexpensive to those costing in excess of a million dollars. Whatever the system’s cost, several different paradigms are available to the digital intermediate pipeline.

The simplest option is to include grading controls as part of the conforming system. Most systems designed specifically for grading moving images are of this type. This way, the grader is always working with the most up-to-date version of the program and can watch it in the correct order. However, using such a system may prevent changes from being made to the conformed data while grading is performed, or at least it may require additional licenses (at additional cost). On a smaller-scale production, when the offline edit occurs on the same system as the online, it may be possible to use color-grading tools within the editing application itself. For example, Avid’s Xpress Pro system (www.avid.com) makes it possible to edit uncompressed SD video, color grade it, and output it directly without moving the data to another system.

A slight variant of this approach is a separate grading system that can access the data directly from the conforming system, usually by manipulating the output of the conforming system in real time. This way, the grading system has to focus only on providing color-grading tools, and not on playback and ordering. Systems such as Pandora’s Pixi Revolution (pogle.pandora-int.com) can receive footage from another system and apply grading in real time.

An alternative option is sending individual shots to an external system, which then controls playback and has tools for color grading. This method tends to be cheaper to implement, but it makes the whole process much slower, because data usually has to be constantly transferred between various systems. This also means that the colorist has to know in advance which scenes he or she wishes to grade. There may also be compatibility issues between the conforming and grading systems.

Grading systems can make alterations to images directly, or they can employ a “grading list” system (also referred to as “grading metadata”), where the parameters for the changes are saved to a separate file or database. This option is suitable for most situations, because it enables the grading parameters to be continually refined without committing the changes permanently. Once the changes are committed, further changes can severely compromise the image quality. The disadvantage of the list-based system is that the display of each image can be slower, because the grading system has to apply the necessary transformations to each frame before it can be displayed. When trying to view a full-size image in real time (as graders often have to do), it may not be possible to apply the changes to the image fast enough to display them in time. Many grading systems get around this issue by employing caching mechanisms, but they may still take time to generate and require additional disk space.7
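A hedged sketch of the grading-list idea: the source frames are never overwritten; only a small record of parameters is stored per shot and applied on the fly at display or render time. The gain/offset parameter set shown here is a placeholder, not any real system’s format:

```python
# Sketch: non-destructive "grading list" metadata. Source frames are never
# modified; a parameter record is applied on the fly at display/render time.
# The gain/offset parameters are placeholders, not a real system's format.
from dataclasses import dataclass
import numpy as np

@dataclass
class Grade:
    gain: float = 1.0     # scales pixel values
    offset: float = 0.0   # shifts pixel values

    def apply(self, frame: np.ndarray) -> np.ndarray:
        return np.clip(frame * self.gain + self.offset, 0.0, 1.0)

grading_list = {("reel1", 101): Grade(gain=1.1, offset=-0.02)}  # keyed by shot

frame = np.full((2, 2, 3), 0.5)        # stands in for a loaded source frame
graded_view = grading_list[("reel1", 101)].apply(frame)
print(graded_view[0, 0])               # [0.53 0.53 0.53]; the source stays untouched
```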


Figure 8–12   Digital-editing systems such as Avid Xpress Pro may also allow color grading to be used in an offline edit

Most grading systems apply the final grades prior to output during the process of “rendering.” The rendering process involves applying the grading parameters to each frame separately and outputting the result, either as a new file, or overwriting the original. The process of rendering can be a slow one, and “render errors” can be introduced. (See Chapter 11 for more on the rendering process.) Sufficiently fast grading systems that can apply completed grades to the source image in real time can usually output graded material without the necessity of rendering.

A typical grading system comprises two main elements—an interface containing controls for different operations, and a display for viewing the results. The specifics of the interface inevitably vary by product, and some extend the interface by providing an entire desk with buttons, dials, trackballs, and so on, to accelerate the grading process.8 The display might be extremely simple, displaying only the graded results. It might include enhancements such as split-screens, showing the image before and after the grade has been applied. It may be possible to “zoom” the display in and out (without actually affecting the image in any way) for detail work. There may even be several displays, each showing different footage or versions of a grade. The grading system may also have a method for storing different versions of grades, as well as providing the ability to save and load presets, and a system for using reference images.

In addition to monitoring the graded image, it can also be useful to obtain analytical information about the image. To achieve this, the grading system might have the option to display tools such as vectorscopes or image histograms within the software, or it may be that these devices can be attached separately, monitoring the system’s display output.

8.2.4 Global Grading

Regardless of the system’s specifics, most digital colorists use a similar work flow. The first stage is to set the image’s overall color balance, which is known as “global” (or “primary”) grading. Typically this process involves manipulating the colors to remove any color casts or to compensate for contrast or exposure differences between shots.

Primary grading is normally a fairly straightforward process, using a simple set of tools. It can replicate everything that’s possible with film laboratory grading, which applies different intensities of light or different colored filters to an image to produce the desired result. Primary grading is normally achieved on a per-shot basis: a representative frame from each shot is graded, and the results are applied to the entire shot.
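One common formulation of a whole-image primary grade is the lift/gamma/gain triple: lift raises the blacks, gain scales the whites, and gamma bends the midtones. The exact math varies between systems, so treat this as a sketch of the general form rather than any product’s implementation:

```python
# Sketch: a lift/gamma/gain primary grade applied to a whole frame.
# Formulations differ between grading systems; this is one common form.
import numpy as np

def primary_grade(frame: np.ndarray, lift=0.0, gamma=1.0, gain=1.0) -> np.ndarray:
    """frame: float array in 0-1. Per-channel lift/gamma/gain values (rather
    than the single scalars used here) would remove color casts."""
    graded = gain * (frame + lift * (1.0 - frame))   # lift pulls up the low end
    return np.clip(graded, 0.0, 1.0) ** (1.0 / gamma)

tones = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
print(primary_grade(tones, lift=0.05, gamma=1.2, gain=0.95))  # lifted, brighter mids
```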


Figure 8–13   Primary grading makes changes that affect the whole image (see also the Color Insert)

Different grading systems use different methods to produce the desired effect; but using any system allows a wide variety of different looks to be achieved, whether the desired result is film noir or a pop video.

8.2.5 Secondary Grading

In many situations, primary grading isn’t sufficient to produce the desired effect. Especially for detail work, it’s sometimes necessary to apply grading to a single, isolated part of the image.

Many (but not all) grading systems allow selective (or “secondary”) grading. In selective grading, specific parts of a shot are singled out (or “selected”) for specific grading, normally using the same tool set used for primary grading.

There are two main methods for selecting part of a shot: “masking” and “keying.” Masking involves using a shape (i.e., the “mask”) to define a region that the selective grading is applied to. The simplest method is to define a rectangle or ellipse, or some other geometric, vector-based shape as the mask. Many systems allow the creation of “freehand” masks, which the operator defines to mask more complex shapes. Depending on the specific system, the masked area might be displayed with a border, in isolation, or using some other method (e.g., desaturating the unmasked region).

It may also be possible to combine multiple masks or to “invert” them (i.e., turn them inside out). Some paradigms allow masks with a soft (or “feathered”) edge to more smoothly blend the transition between the selected area and the unselected area (particularly when significant grading differences exist between the two).

The process of keying selects a region of a shot based upon its color content. For example, keying purple selects every place in the shot where the color purple appears. Because it’s likely that you want to select similar shades of color as opposed to limiting the selection to pixels containing exactly the same shade, most keyers also have the option to soften or adjust the “tolerance” of the keyed selection or to select a “color range.” Different systems allow the keying of different parameters, such as saturation or luminance, while a more basic system works with just the hue or red, green, and blue channels. To make the system more interactive, the color to be keyed can usually be “sampled” (i.e., picked) directly from the displayed image. Again, the keyed area might be displayed in different ways, depending on the system.
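A hedged sketch of a basic keyer: pixels are selected by their distance from the sampled color, with a tolerance band that is fully selected and a softness band over which the selection ramps down to zero. The distance metric and thresholds here are arbitrary illustrative choices:

```python
# Sketch: a basic color keyer. Pixels within `tolerance` of the sampled color
# are fully selected; selection ramps down to zero across the `softness` band.
import numpy as np

def key(frame: np.ndarray, sample, tolerance=0.1, softness=0.1) -> np.ndarray:
    """frame: float array (h, w, 3); returns a 0.0-1.0 selection per pixel."""
    distance = np.linalg.norm(frame - np.asarray(sample), axis=-1)
    selection = (tolerance + softness - distance) / softness
    return np.clip(selection, 0.0, 1.0)   # 1 inside the key, 0 well outside it

frame = np.array([[[0.8, 0.1, 0.1], [0.75, 0.15, 0.1], [0.1, 0.8, 0.1]]])
print(key(frame, sample=(0.8, 0.1, 0.1)))  # [[1. 1. 0.]]: both reds keyed, green not
```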


Figure 8–14   Areas of an image can be masked for selective work. In this case, only the masked area is shown

With many systems, it’s also possible to combine the two approaches, so that it’s possible to mask a keyed area before applying a grade. Another method is to create masks based upon image properties—for example, the mask could be based upon each pixel’s luminosity strength.

8.2.6 Dynamic Grading

Many shots are dynamic—that is, parts of the scene move, such as when a person walks across a room and cars drive on roads. Furthermore, the camera position may change during the course of the shot, with the view shifting considerably. Anytime any such movement occurs, the grade is affected, particularly if masks have been used to isolate specific elements in the scene. Because masks affect only the area they contain, and not specific elements, the mask could end up acting on the wrong part of the image in a dynamic shot. Rather than enhancing the scene’s visual quality, the mask reveals itself as a patch of color difference and looks like an error.9


Figure 8–15   Areas can also be selected by keying colors in the image. In this case, the affected area is shown as a matte

Grading Shots and Frames

For most grading systems, the definition of a shot is somewhat arbitrary, and so a “shot” can be a shot as defined by cut points in an EDL, a single frame, or an entire reel, depending upon grading requirements. Most graders prefer to define shots by cut points in the EDL and separate them into smaller sections if necessary. This should be done without affecting the way the program appears in the conform system. In effect, the colorist should have a timeline independent of the conformed timeline, so that he or she can make edits for grading purposes without making edits in the actual program.

It sometimes seems as though people working with still digital images have a much more elaborate toolset for editing colors than the motion picture industry does. However, there are good reasons for this. First of all, a trade-off will always exist between available time and detail. A digital image artist might have weeks to fine-tune a single image, while a digital colorist has the same amount of time to do maybe 2,000 shots or more than 100,000 frames. Thus, the digital-grading system tool set is normally designed to reflect that fact.

But perhaps more importantly, there must be continuity in the grading process, or there’s an increased risk of adding artifacts into the footage that are only noticeable when the footage is played at speed. Grading moving pictures requires a different approach from manipulating still images. You can’t necessarily repeat a process that works for a still image across a series of frames and produce the desired effect.

The reason why no “magic wand” tool is available, for example, is that if you were to repeat that tool across several frames of moving footage, you would likely get different results (and hence a different pixel selection) on every frame. (A magic wand tool is used in digital imaging to select an enclosed area of pixels similar to the one already selected.) The net result would be that the selection area would change erratically when played back, even if every frame looked fine when viewed in isolation.


Figure 8–16   In this instance, the ball has been selectively graded. However, as soon as the ball moves, the grading effect is revealed


Figure 8–17   This time, the selective grading has been keyframed so that it moves with the ball

Grading Flashes

One of the more common problems encountered during the quality control process is the issue of grading flashes. The ability to create a multitude of complex, dynamic grades, coupled with the frequent changes made to the conformed program, makes it easy for the grade applied to one shot to be mistimed and overrun an adjacent shot. The result is that the wrong grade is applied to one or more frames of a shot. When the grading is similar across both shots, the error can be easy to miss, but often the grades are so different that the image appears to “flash,” suddenly changing color. This problem usually can be resolved fairly simply by readjusting the grading timings, but grading flashes often aren't spotted until after final output. Because the problem is easier to spot during playback, many digital intermediate facilities opt to create a video playout prior to final output to check for exactly this sort of issue (this is covered further in Chapter 12).

Grading flashes may be used purposefully as a grading technique to simulate lighting effects such as firelight, lightning, or explosions.

Fortunately, most grading systems are equipped with “dynamic grading” options. The basic premise of dynamic grading is that certain grading parameters can be animated (modified over a period of time). So masks can move or change shape during a shot, or the parameters of the grades can be adjusted over time as needed.

Different grading platforms take different approaches to which parameters can be dynamic and how they can be animated. The most common method for animating grading parameters is through the use of keyframes.

Keyframing is a technique pioneered by traditional animators. Traditional (hand-drawn) animation is drawn a frame at a time. Clearly this process is very time consuming, so a new method was devised whereby the lead animators drew only a few of the important, or “key,” frames. Other animators could then “tween,” drawing the frames that go in between the key frames. Computer animation techniques adopt a similar work flow. Parameters can be set at specific key frames, and the computer interpolates the values for the in-between frames. Dozens of different interpolation methods can be used to create the in-between frames, but most of them work by plotting each keyframe as a point on a hypothetical graph and connecting the points to find the in-between frames' values.
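As a rough illustration of this idea, the Python/numpy sketch below fills in a grading parameter between keyframes using linear interpolation; the function name is hypothetical, and real systems offer many other interpolation methods (splines, ease-in/ease-out, and so on).

```python
# A minimal sketch of keyframe interpolation, assuming keyframes are
# (frame, value) pairs and that linear interpolation is acceptable.
import numpy as np

def interpolate_parameter(keyframes, num_frames):
    """Linearly interpolate a grading parameter across a shot."""
    frames = np.array([k[0] for k in keyframes], dtype=float)
    values = np.array([k[1] for k in keyframes], dtype=float)
    # np.interp holds the first/last value before/after the end keyframes.
    return np.interp(np.arange(num_frames), frames, values)

# Example: a saturation parameter keyed at frames 0, 24, and 48.
saturation = interpolate_parameter([(0, 1.0), (24, 0.5), (48, 1.0)], 49)
```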

These techniques can be used to create grades for dynamic shots or for special grading effects. Other animation techniques are also possible for more complex situations.

8.2.7 Rotoscoping

Sometimes the only way to achieve a desired effect is to create it a frame at a time. This is often the case for complex moving shapes, where masks have to be adjusted very accurately to match the part of the image they cover. The operator examines each frame, adjusts the mask to fit the image, and then moves on to the next frame. This process is known as “rotoscoping.” It's very time consuming, and facilities often have an additional operator (or team of operators) responsible solely for rotoscoping images.

Rather than using vector-based masks, it may be preferable to output “matte” images to accompany the footage. A matte image is a grayscale image (it may be embedded in an alpha channel as part of the footage) that defines a selection area using pixels. A white pixel is considered masked; a black pixel isn't.10 Gray pixels may be partially affected by the grade (e.g., in the feathered area of a vector-based mask), depending on how close the gray value is to black or white. Because mattes are defined by actual pixels rather than vectors, they can define a much more accurate and detailed selection area. In most cases, however, this level of detail isn't necessary, and the appropriate selections can be made quickly using masks. Even if a mask has to follow highly complex motion, such as a buoy on a stormy sea, it can normally be accommodated through the use of feature tracking.
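The way a matte weights a grade can be thought of as a per-pixel blend. The following Python/numpy fragment is a minimal sketch, assuming the white-means-masked convention and float images in the 0–1 range; the function name is invented for illustration.

```python
# A minimal sketch of applying a grade through a matte, assuming the
# convention that white means "affected" and that both the image and
# the matte are float numpy arrays in [0, 1].
import numpy as np

def apply_grade_through_matte(image, graded, matte):
    """Blend a graded image over the original, weighted by the matte.

    Gray matte pixels (e.g., feathered edges) are partially affected.
    """
    weight = matte[..., np.newaxis]  # broadcast over the RGB channels
    return image * (1.0 - weight) + graded * weight
```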

8.2.8 Feature Tracking

Humans are much more adept at analyzing moving pictures than computer systems are. A person can watch video footage of a sporting event, for instance, and recognize different players and moving objects, regardless of camera movement. Even when the shot switches between different cameras, we can still follow the action. Computers just can't do this. All a computer can perceive is a bunch of pixels that may or may not change color between frames, sometimes by a small amount, sometimes by a lot. An entire branch of artificial intelligence research is devoted to the study of “machine vision,” which aims to simulate the ability to recognize objects and shapes when presented with a digital representation of a scene. Part of this research has resulted in the development of feature-tracking software, which can isolate a part of an image, a specific feature, and track its movement across subsequent frames. A signpost in a street scene, a pencil on a desk, or even the corner of a window are all examples of trackable features.

Feature tracking works by first selecting a reference zone—an area (usually rectangular) containing the feature that is to be tracked. Next, a search zone is chosen, which is large enough to accommodate the movement of the feature between any two frames. The feature-tracking algorithm then examines the next frame's search zone, looking for a match with the reference zone. The specifics of how matches are identified vary between different algorithms, but once a match is found, the object's motion relative to the camera can be determined. The feature tracker then advances to the next frame, gradually building up data for the feature's motion across a range of frames.11 The tracking data that is produced can then be applied to something, such as a grading mask, which then follows the movement of the feature. Successful feature tracking allows moving objects to be selectively graded using masks and other tools without having to resort to rotoscoping techniques for the majority of shots. Tracking data can also be used to stabilize a shot, smoothing out camera motion. Stabilization is covered in Chapter 9.
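A crude single-step tracker along these lines might look like the following Python/numpy sketch, which uses a brute-force sum-of-squared-differences match. The function name and box conventions are assumptions; production trackers use more robust measures (normalized cross-correlation, for example) and subpixel refinement.

```python
# A minimal sketch of one tracking step, assuming grayscale float
# frames. The reference zone from the previous frame is compared
# against every position inside the next frame's search zone.
import numpy as np

def track_step(prev_frame, next_frame, ref_box, search_box):
    """Find the position of a reference zone within a search zone.

    ref_box and search_box are (top, left, height, width) tuples;
    returns the (row, col) offset of the best match in next_frame.
    """
    rt, rl, rh, rw = ref_box
    st, sl, sh, sw = search_box
    ref = prev_frame[rt:rt + rh, rl:rl + rw]
    best, best_pos = np.inf, (st, sl)
    for row in range(st, st + sh - rh):
        for col in range(sl, sl + sw - rw):
            candidate = next_frame[row:row + rh, col:col + rw]
            score = np.sum((candidate - ref) ** 2)  # SSD match measure
            if score < best:
                best, best_pos = score, (row, col)
    return best_pos
```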

There are issues with feature tracking, however, and no tracking algorithm is considered perfect. Lighting changes in a shot, perspective shifts caused by features moving closer to or farther from the camera, image rotation, and occlusion by other objects can all produce errors in the tracking data. Much of the skill of feature tracking lies in picking a suitable feature to track; inevitably, some shots offer clearer and more abundant features than others. Many feature-tracking systems can also track multiple features, which can increase the accuracy of the data, as well as compensate for rotation and perspective shifts (e.g., by tracking two or more sides of an object). In some instances, it's even possible to approximate the results of rotoscoping by tracking multiple points and applying the data to a mask. Autodesk's Lustre system (www.discreet.com), for example, can track multiple points along a mask, so that the mask automatically adheres to the shape of a person walking down a corridor toward the camera, something that might otherwise have required rotoscoping.


Figure 8–18   A lamppost has been tracked across a number of frames

Preliminary Grading

With some digital intermediate work flows, it may make sense to apply some or all of the grading during acquisition. For example, certain film scanners have the ability to adjust the scanned film's colors. In many instances, this process can cause problems, because applying color adjustments at the scanning stage can reduce the acquired images' color quality and limit grading options later on. However, some systems can achieve this without compromising quality, in which case applying some degree of grading can save time later, as well as provide superior-quality images.

In these circumstances, it's advisable to take full advantage of such options. It should be considered no different from setting the correct exposure on a film camera, or white balancing a video camera, when shooting in the first place. Some digital intermediate facilities perform the entire grading process at this stage, meaning that only the conforming and effects processes are needed after scanning, at the cost of relinquishing the ability to make large changes later on.

8.2.9 Changing Colors

There are as many ways to change the color of an image as there are color models and file formats for storing digital images. Because an image can be represented in terms of parameters such as hue and saturation, those parameters can be adjusted for the whole of an image (or selectively, depending upon the grading system).

Some of the more popular options for changing color are listed below:

Table 8–1 Options for Changing Colors

Hue: Alters the hue component of a color, effectively rotating the color wheel.

Brightness: Affects the overall pixel values, resulting in the brightness being adjusted.

Saturation: Alters the saturation of a color.

Contrast: Alters the contrast of an image. Increasing contrast makes blacks blacker and whites whiter.

Gain: Alters the white point of an image.

Pedestal (or lift): Alters the black point of an image.

Gamma: Alters the gamma of the image.

Exposure: Emulates an exposure change by altering brightness in a nonlinear way.

Tint: Mixes one color with another color, in variable proportions.

Channel mixing: Alters the relative strength of each component in a color (typically RGB or CMYK).

Many solutions also have the capability to constrain changes to individual channels, so many of the above operations can be applied to a single channel (or part of a single channel), such as the red, green, or blue channel. It may even be possible to apply a gamma correction to a saturation channel, for instance.
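To make a few of the table's terms concrete, here is a rough Python/numpy sketch of lift, gamma, and gain applied to a single channel. The function name is hypothetical, and the exact formula and ordering of these controls vary between systems, so treat this as one plausible formulation rather than a definitive one.

```python
# A minimal sketch of lift/gamma/gain on one channel, assuming float
# pixel values normalized to [0, 1].
import numpy as np

def lift_gamma_gain(channel, lift=0.0, gamma=1.0, gain=1.0):
    """One common three-way adjustment on a single channel."""
    out = channel * (gain - lift) + lift  # set black and white points
    out = np.clip(out, 0.0, 1.0)
    return out ** (1.0 / gamma)           # nonlinear mid-tone shift

# Example: warm the image slightly by raising only the red gain.
# red = lift_gamma_gain(image[..., 0], gain=1.05)
```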

These operations can usually be adjusted gesturally by moving a slider or spinner on the grading system interface using a mouse or pen and tablet, or by adjusting dials or using trackballs, depending upon the system specifics.12 In other words, these operations can be applied to footage very quickly. However, many of these operations affect all colors equally, and it may be necessary to isolate specific regions without resorting to creating keys or masks.

Grading Video and Film

Many digital-grading systems feature separate modes for grading video and film. In general, when grading video material, the colorist is offered one set of controls (such as gamma, lift, and gain), and another set of controls when grading film (such as exposure, brightness, and contrast). The idea is that different controls are more suitable for grading different material. In addition, film material is typically stored in a logarithmic image format, and so the same parameters applied to both film and video material may produce different results. In the digital intermediate environment, it’s possible to apply any type of alteration to any digital images, so brightness and contrast options could be used on video, or lift and gain on film. The primary reason for using a logarithmic format for film is that it’s more suitable when the eventual result is output back on film.

8.2.10 Shadows, Mid-Tones, and Highlights

One of the simplest ways to divide the tonal range of an image is by separating the shadow (dark), highlight (bright), and mid-tone regions. With many grading systems, this separation is performed automatically based upon the luminosity content of each pixel, usually with some degree of overlap between the regions. With others, the region boundaries may have to be defined in advance.

Using these regions for grading is useful for making detailed changes to an image without having to resort to time-consuming keying or masking. Many grading decisions can be limited to one of these regions (“the shadows need to be deeper,” for example), so being able to apply changes to just these regions can accelerate the entire grading process.
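One plausible way to derive overlapping region weights from luminosity is sketched below in Python/numpy; the crossover points (0.4 and 0.6) are arbitrary assumptions, and real systems choose or expose their own boundaries.

```python
# A minimal sketch of splitting the tonal range into shadow, mid-tone,
# and highlight weights, assuming float luma values in [0, 1].
import numpy as np

def tonal_weights(luma):
    """Return (shadows, midtones, highlights) weights summing to 1."""
    shadows = np.clip(1.0 - luma / 0.4, 0.0, 1.0)        # fades out by 0.4
    highlights = np.clip((luma - 0.6) / 0.4, 0.0, 1.0)   # fades in from 0.6
    midtones = 1.0 - shadows - highlights
    return shadows, midtones, highlights

# Example: deepen only the shadows by 10%.
# shadows, _, _ = tonal_weights(luma)
# luma = luma * (1.0 - 0.1 * shadows)
```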


Figure 8–19   A gradient added to the sky gives it more texture. Note that the gradient also affects the lamppost, and so it has the same effect as placing a filter over a camera lens (see also the Color Insert)

8.2.11 Gradients

Most grading systems have some sort of tinting parameter to alter the image colors. For example, tinting an image red increases the image’s red component, making it appear redder (or appearing with less cyan, if you want to think about it that way). The specific mechanics of this procedure depend upon the grading system, because increasing an image’s red component may also have the side effect of brightening the overall image.

Tinting an image, as well as performing other operations, requires the operator to specify a color. Rather than tinting an image red, you may want to tint it yellow or purple, for example. But sometimes, it may be useful to apply a range of colors to a color operation. Perhaps you want the image to be tinted blue at the top, where the sky is, and green at the bottom, or perhaps the effect should be more pronounced at a particular corner of the image.

Gradients are combinations of two or more colors. The gradient starts at one color and then smoothly blends into the next (repeating for as many colors as are defined). Many different methods are available for generating gradients (which may be linear or radial and may include transparency) to enable a color to vary in strength. Linear gradients are commonly used on outdoor scenes to add color to skies.13
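A two-color linear gradient reduces to a simple blend down the image, as in this Python/numpy sketch; the function name is invented, and the tint mix at the end is just one of several ways a gradient might be applied.

```python
# A minimal sketch of a vertical two-color gradient, assuming float
# RGB values in [0, 1].
import numpy as np

def vertical_gradient(height, width, top_color, bottom_color):
    """Blend smoothly from top_color to bottom_color down the image."""
    t = np.linspace(0.0, 1.0, height)[:, None, None]  # 0 at top, 1 at bottom
    top = np.array(top_color)[None, None, :]
    bottom = np.array(bottom_color)[None, None, :]
    return np.broadcast_to(top * (1 - t) + bottom * t, (height, width, 3))

# Example: tint blue at the top (sky), fading to green at the bottom.
# grad = vertical_gradient(h, w, (0.1, 0.2, 0.8), (0.1, 0.6, 0.2))
# tinted = image * 0.8 + grad * 0.2
```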

8.2.12 Grading on a Curve

One of the most powerful methods that grading systems offer for changing the color content of images is the curve-based grading paradigm. With this system, a graph is presented to the operator, who is able to alter the shape of the graph to produce changes.

The simplest version of this is the input-output graph. Some parameter, such as luminosity, is plotted with the input value against the output value. Typically, a straight line is plotted so that the input is mapped exactly to the output (0% input luminosity equals 0% output, 50% input is mapped to 50% output, and so on). Editing the curve alters the image so that, for instance, bright areas are darkened or brightened further, with or without affecting the image's other regions. The same approach works for other parameters, so that, for instance, the saturation of highly saturated regions of color can be adjusted.
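An input-output curve can be approximated by mapping values through control points, as in the Python/numpy sketch below. Real systems usually fit smooth splines through the points rather than the piecewise-linear map used here, and the function name is an assumption.

```python
# A minimal sketch of a curve applied via (input, output) control
# points, assuming float values in [0, 1].
import numpy as np

def apply_curve(values, points):
    """Map values through a curve defined by (input, output) points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return np.interp(values, xs, ys)

# Example: an "S-curve" that darkens shadows and lifts highlights,
# increasing contrast without touching pure black or white.
# graded = apply_curve(luma, [(0.0, 0.0), (0.25, 0.2),
#                             (0.75, 0.8), (1.0, 1.0)])
```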

It’s also possible to plot a graph of two different parameters. For instance, some grading systems have the ability to plot a graph of saturation against luminosity (among others). This means that the saturation of colors at a given luminosity (or luminosity range) can be increased or decreased as needed which can affect images in drastic or subtle ways with ease, and without visible boundaries where the adjustment begins and ends. Of course, in theory, it’s possible to map most parameters against each other, so that for instance, a pixel’s hue can be changed depending upon an image’s red channel strength. Most systems provide instant feedback, displaying changes as they’re made, which provides a lot of scope for experimentation, which in turn leads to a high level of creative possibilities that may not have been anticipated.

8.2.13 Manipulating Histograms

Video operators often refer to vectorscopes and similar equipment to ensure that images are properly color balanced. The digital equivalent is to use a histogram, which displays an image’s luminosity distribution. Generally speaking, if the luminosity distribution stretches from the minimum possible value to the maximum possible value, then the tonal range is properly balanced.

Rather than making changes to the image color and then referring to the histogram to see how the changes affect it, it’s much more convenient to make changes directly to the histogram and have them reflected in the image. For example, looking at the histogram can sometimes reveal where the peak white should lie, and the histogram can be adjusted to redistribute the pixel values correctly. Even more options may be available, such as the ability to edit histograms for different color spaces, or for individual channels.
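A basic histogram adjustment of this kind, stretching the black and white points so the tonal range spans the full scale, might be sketched as follows; the percentile cutoffs are arbitrary assumptions that make the stretch robust to a few stray pixels.

```python
# A minimal sketch of a histogram-based black/white point stretch,
# assuming float luma values in [0, 1].
import numpy as np

def stretch_tonal_range(luma, low_pct=0.5, high_pct=99.5):
    """Redistribute values so the histogram spans the full range."""
    lo, hi = np.percentile(luma, [low_pct, high_pct])
    return np.clip((luma - lo) / (hi - lo), 0.0, 1.0)
```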

8.2.14 Color Matching

Another powerful feature of digital color-grading systems is the ability to automatically match two different colors. A computer system can compare the color information of two samples (a source and a reference), calculate the difference between them, and then apply that difference to the entire image, making the source color the same as the reference. Because the changes are applied to the entire image, however, they can make other areas that were perfectly balanced look wrong, and so the procedure is usually reserved for images with an overall color cast.

Flesh Tones

Some colorists color balance an image by first setting white and black points, then identifying areas that should be neutral (close to 18% gray) and changing the grading parameters until the areas in question reach the required shade. Other colorists prefer to concentrate on making the flesh tones of people in the image correct, and then adjusting everything else to support this. The reasoning is that audiences tend to “calibrate” their eyes based upon the flesh tones in an image, so skin that is too green can cause the viewer to perceive a green cast across the entire image, even if the image is otherwise perfectly balanced. Some graders make use of a standard flesh-tone color chart (such as those available at www.retouchpro.com) and grade flesh tones in an image to a reference color in the chart.

The most obvious use of this process is to maintain grading continuity across different scenes, by picking specific colors that need to be the same in two different shots, and then matching one to the other. But color matching can be used for a number of different applications throughout the color grading process, such as to set neutral colors, or match colors in a scene to a reference color elsewhere.

Each grading system inevitably has a different method for performing color matching, depending upon the process that’s applied to the image to shift the color components from the source to the reference color. For example, some paradigms may separately adjust the hue, saturation, and luminosity components, while others may make adjustments to the red, green, and blue components of each pixel.
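As one possible formulation, an RGB-offset match is sketched below in Python/numpy; the function name is hypothetical, and as noted above, a real system might instead adjust the hue, saturation, and luminosity components separately.

```python
# A minimal sketch of an RGB offset match, assuming float RGB values
# in [0, 1]. The per-channel difference between a sampled source
# color and a reference color is applied to the whole image.
import numpy as np

def match_color(image, source_sample, reference_sample):
    """Shift the whole image so source_sample becomes reference_sample."""
    offset = np.array(reference_sample) - np.array(source_sample)
    return np.clip(image + offset, 0.0, 1.0)

# Example: neutralize a warm cast where a gray card reads
# (0.55, 0.5, 0.45) instead of neutral gray.
# balanced = match_color(image, (0.55, 0.5, 0.45), (0.5, 0.5, 0.5))
```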

8.2.15 Grading Priority

With the ability to apply many grading operations and processes to each image, the order of the operations becomes important. For example, tinting an image and then reducing the saturation produces different results than reducing the saturation and then tinting the image. To ensure that the results of grading operations are predictable, most grading systems apply operations in a specific order. Some systems make it possible to adjust the priority of different operations, so that higher-priority operations are applied first.
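The effect of ordering can be demonstrated with a toy example; the simplistic tint and desaturate operations below are assumptions for illustration, not how any particular system implements them.

```python
# A minimal sketch showing why grading order matters, using an
# additive red tint and a luma-based desaturation on one RGB pixel.
import numpy as np

def tint_red(rgb, amount=0.2):
    return np.clip(rgb + np.array([amount, 0.0, 0.0]), 0.0, 1.0)

def desaturate(rgb, amount=0.5):
    gray = rgb @ np.array([0.2126, 0.7152, 0.0722])
    return rgb * (1 - amount) + gray * amount

pixel = np.array([0.2, 0.4, 0.6])
print(desaturate(tint_red(pixel)))  # tint first, then desaturate
print(tint_red(desaturate(pixel)))  # desaturate first, then tint
# The two orderings print different results.
```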

8.2.16 Composite Grading

Regardless of the software used, digital color grading is merely a form of digital compositing. Masks, keys, and trackers are all features found in visual effects compositing systems, and the only difference is that in the digital intermediate environment, those tools are largely streamlined for changing the color of hundreds of shots.

Surface-Plot Grading

Many grading operations, such as curve or histogram manipulation, work on two parameters and are therefore easily displayed as 2D graphs. However, most color models have three or more parameters, which aren't easily reduced to a 2D graph. Plotting changes in luminosity against an image's hue and saturation components, for example, requires a 3D surface plot. Using such a paradigm, it's possible to brighten saturated reds exclusively, while darkening saturated blues and desaturated reds.

Even so, some grading systems lack compositing functions that might otherwise prove useful. Some striking color effects can be produced by compositing other images (or solid colors or gradients, or even the original images) onto the source images and using different methods to blend them. For example, a technique long used by digital image artists is to “square” the image, multiplying all the image's pixel values by themselves. This has a similar effect to changing the contrast, and its visual result is an apparent increase in the image's density.
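The squaring technique itself is trivial to express, as in this Python/numpy sketch; the `mix` parameter is an added convenience for illustration, not part of the traditional technique.

```python
# A minimal sketch of the "squaring" technique, assuming float RGB
# values in [0, 1]: multiplying the image by itself deepens the
# apparent density, similar to a contrast increase.
import numpy as np

def square_image(image, mix=1.0):
    """Multiply-blend the image with itself, with an adjustable mix."""
    squared = image * image
    return image * (1.0 - mix) + squared * mix
```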

Some systems, such as Da Vinci’s Resolve (www.davsys.com), allow access to the operations being performed on each shot, to enable the creation of very complex grades. For example, a matte can be created that isolates an area for brightening. That same matte can also constrain another parameter, such as a blur effect.

8.3 Grading Procedures

Color grading is used for both practical and creative reasons. Grading can be used in many different situations to improve the visual aspects of a scene or sequence—e.g., drawing attention to specific features, correcting a lighting problem, or defining the mood of a piece.

8.3.1 Continuity Grading

One of the most common tasks of the color grader is to balance two different shots so that no discernible difference in color or tone can be perceived between them. Many factors can cause these differences, some of which have been touched upon already, such as shooting a scene at different times of the day, but regardless of the cause, continuity grading is one of the most effective ways to minimize differences in the color or tone of shots.

Continuity grading involves using global grading to ensure that the overall brightness, contrast, and color balance of each shot are the same. This doesn’t necessarily mean that the same grading parameters must be applied to each shot, but the end result is that every shot is visually consistent.14

Selective grading then adjusts individual elements within a shot so that it better matches other shots. For example, in a sequence featuring a red car driving among green foliage, global grading might balance the overall color, brightness, and contrast between shots; the car and tree elements might be separately isolated to perfectly match them between shots.

The Problem of Background Plates

The digital intermediate process normally works hand in hand with the visual effects process. Occasionally, scanned footage is used to create a visual effects shot while the conforming system uses the same footage as a “stand-in.” The color grader, meanwhile, uses it as a “background plate” to set a grade for most of the image, tweaking the grade once the finished effect arrives.

However, many visual effects departments request graded footage under the assumption that composites can be matched better to such footage. Sadly, this assumption is a myth. In practice, this kind of work flow accentuates differences in calibration between the two departments and potentially exaggerates image artifacts by repeatedly performing operations on the images. A far better option is to composite elements over the ungraded footage, matching it as if it were the final graded version. If required, mattes can be supplied along with the final shot to separate the foreground elements from the background so that each can be graded separately. That way, the potential for degradation of the footage is minimized, while the level of control provided by the digital-grading process is maintained.

Many graders also take advantage of the automated match-grading options provided by some systems, but the use of this feature usually depends on the requirements of specific shots.

8.3.2 Color Retouching

Grading is often used to repair “color damage,” usually caused by the original material recording color information incorrectly, or by degradation, such as color fading on old material (particularly if it was stored incorrectly). Other situations can cause the colors in an image to require such correction. Probably the most common issues concern exposure. Underexposed images tend to be too dark overall, whereas overexposed images tend to be too bright. Fortunately, many mediums, particularly photographic film, have a degree of exposure “latitude,” meaning that detail is recorded even where the image appears to be peak white.

A further complication can arise when different elements of a shot are each exposed differently. For example, in a street scene, some well-lit buildings may be correctly exposed, while other poorly lit buildings are underexposed, making the scene lack uniformity. Digital grading can help solve exposure issues by darkening overexposed or brightening underexposed images, both globally and selectively. Some grading systems have controls for modifying the exposure of an image directly, whereas others achieve the same results using a combination of different grading operations.

“Color casts,” where an image is tinted by a particular color, are also common problems. Casts may be caused by significantly aged material (photographic film or printed material in particular) that has been degraded by chemical processes, similar to a newspaper turning yellow when it's left in the sun. Casts also occur when the color temperature of the lighting in a scene doesn't match the material; for example, when daylight-balanced film is used to photograph a scene lit by tungsten light, the result is an image with an orange cast.15 It's also possible that the scene has mixed lighting, in which case some areas of an image may be properly balanced while others have casts. These situations often are remedied by careful color grading, typically by balancing a region of the image that should be neutral (i.e., gray) or by ensuring the accurate representation of flesh tones, again using global grading to correct overall color casts and selective grading to tackle localized areas.


Figure 8–20   Grading can be used to turn a daytime scene into a nighttime one. In this case, lens flare effects have also been added to the light sources (see also the Color Insert)

8.3.3 Day-for-Night

Productions often choose to film scenes that are set at night during daylight hours, a process known as “day-for-night.” There are many reasons for this, mostly practical and economic. Of course, a scene photographed during the day looks completely different than the same scene photographed at night, even when both are properly exposed. Because less light is available at night, our eyes (and cameras) must adapt—that is, they respond differently to light. So nighttime scenes tend to look much less saturated, have denser shadows (deeper blacks), and darker mid-tones. It's also common to increase the level of blue in a day-for-night image, because we're more sensitive to blue light under low-light conditions.

Day-for-night grading has been done as a chemical process on film for many years, and exactly the same processes can be replicated digitally, with a greater degree of flexibility. The scene is filmed during the day, and the image is color graded to look as if it were filmed at night—that is, the sunlight is made to look like moonlight, and adjustments are made to various aspects of the scene's color content to emulate nighttime.
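A starting point for such a grade might look like the following Python/numpy sketch; the darkening, desaturation, and blue-shift values are illustrative assumptions, not a recipe, and the function name is invented.

```python
# A minimal sketch of a day-for-night grade, assuming float RGB
# values in [0, 1]: darken, desaturate, and shift toward blue.
import numpy as np

def day_for_night(image, darken=0.4, desat=0.5, blue_shift=0.08):
    gray = (image @ np.array([0.2126, 0.7152, 0.0722]))[..., None]
    out = image * (1 - desat) + gray * desat   # mute the colors
    out = out * (1.0 - darken)                 # pull down the exposure
    out[..., 2] = np.clip(out[..., 2] + blue_shift, 0.0, 1.0)  # cool cast
    return out
```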

It’s worth noting, however, that practical issues are associated with shooting day-for-night scenes. Certain elements that are visible only during the day (such as the sun and flocks of birds) must be avoided during shooting (or may have to be removed digitally, which can be a time-consuming process), while elements that should be visible at night (such as lit streetlamps) have to be included or added later.

8.3.4 Relighting

Sometimes there isn’t enough time (or money) to light a scene the way the cinematographer would like. Practical issues may be associated with lighting a building in the background with spotlights, for instance, or perhaps the correct gel wasn’t available when the scene was shot.

Selective digital-grading processes enable scenes to be relit, in effect, strengthening or diminishing the effects of photographed lighting while adding lighting effects not present earlier. Carefully placed masks can be used to brighten a specific region, making it look as if a spotlight was pointed at the area when the scene was originally shot.
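A digital “spotlight” of this kind boils down to a soft radial mask, as in this Python/numpy sketch; the function name and parameterization are assumptions for illustration.

```python
# A minimal sketch of a soft radial "spotlight" mask that brightens
# a region, assuming float RGB values in [0, 1]. The falloff width
# controls how feathered the light's edge appears.
import numpy as np

def spotlight(image, center, radius, boost=0.3, falloff=0.5):
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - center[0], xx - center[1])
    # 1.0 inside the radius, fading to 0.0 across the falloff band.
    mask = np.clip(1.0 - (dist - radius) / (radius * falloff), 0.0, 1.0)
    return np.clip(image * (1.0 + boost * mask[..., None]), 0.0, 1.0)
```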


Figure 8–21   Grading can be used to apply relighting to a scene, making it appear as if the original scene had been lit differently. © 2005 Andrew Francis. (see also the Color Insert)

It’s possible to speed up the onset lighting process by anticipating the grading system’s capabilities and limitations. In particular, backlighting effects may be difficult to reproduce digitally, while adding vertical wall lights might be fairly straightforward. Chapter 14 discusses the planning process for anticipating grading possibilities during production and preproduction.

8.3.5 Enhancement

Photographers, particularly fashion photographers, have been using digital and chemical processes to enhance images for years. Even the best photographer can only replicate reality, but he or she may want to enhance it. Fashion photographs usually are extensively retouched to remove a model's blemishes or imperfections, and the photographs also undergo any number of color enhancements. Eyes can be artificially brightened, make-up can be softened, and colors can be saturated. With digital grading, many of these same techniques can be applied to moving images, making them look “better” than reality.

8.3.6 Color Suppression

Certain colors sometimes have to be removed, or “suppressed,” from shots, usually for special effects purposes. Especially with commercials or corporate presentations that make use of particular colors in their branding, a photographer may want to suppress other colors, drawing attention to the product.

The digital-grading process allows this effect to be implemented with ease, either across a range of colors in an image or for specific colors in a localized area. Digital grading also makes it possible to photograph a scene in full color and then change all or part of it to black and white. Such processes can also define a specific “look” for a production, as was done on the HBO miniseries Band of Brothers.
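A naive version of this effect, keeping one hue and desaturating everything else, is sketched below; it uses Python's colorsys module per pixel, which is far too slow for production footage but shows the underlying logic. The function name and tolerance are assumptions.

```python
# A minimal sketch of suppressing all colors except a chosen hue,
# assuming float RGB values in [0, 1]. Pixels whose hue falls
# outside the kept range are pushed to grayscale.
import colorsys
import numpy as np

def suppress_colors(image, keep_hue=0.0, tolerance=0.05):
    """Desaturate every pixel whose hue is outside the kept range."""
    out = image.copy()
    for idx in np.ndindex(*image.shape[:2]):
        r, g, b = image[idx]
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        # Hue wraps around at 1.0, so measure the circular distance.
        hue_dist = min(abs(h - keep_hue), 1.0 - abs(h - keep_hue))
        if hue_dist > tolerance:
            out[idx] = colorsys.hsv_to_rgb(h, 0.0, v)
    return out
```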

8.3.7 Lab Process Emulation

Chemical film processing has been around for decades now. Although few consumers use them, a number of different processing techniques are available for changing the look of footage.

“Silver retention” (also referred to as “bleach bypass” or “skip bleach”) is a process used during film development, which retains the silver grains normally removed from the emulsion.16 The resulting footage has higher contrast, reduced saturation, stronger highlights, and denser shadows. This process has been used on films such as Seven, 1984, and Saving Private Ryan.

“Cross-processing” film involves developing a particular film stock using chemicals designed for a different stock (e.g., developing a color negative using “E-6” processing rather than the usual “C-41” process). The process changes an image's contrast and hues.

“Blue-green transfers” swap the green and blue layers during processing, resulting in images with different color balances.

All of these can be replicated digitally, either by manually color-grading shots to emulate the effect or through the use of specific software algorithms that mimic the chemical processes. For example, the blue-green transfer can be replicated simply by swapping the blue and green channels in an RGB image.
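Digitally, that swap is a one-line operation, as in this Python/numpy sketch (the function name is invented for illustration):

```python
# A minimal sketch of the blue-green transfer: digitally, the effect
# reduces to swapping two channels of an RGB image.
import numpy as np

def blue_green_transfer(image):
    """Swap the green and blue channels; red is left untouched."""
    return image[..., [0, 2, 1]]
```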


Figure 8–22   A simulated bleach-bypass look can be digitally applied to an image (see also the Color Insert)


Figure 8–23   The effects of cross-processing an image can be simulated digitally (see also the Color Insert)

Laboratory Accuracy

Compared to chemical grading processes, digital color grading affords a high level of precision. Laboratories usually assume a margin of error of around one printer light for each of the red, green, and blue components, whereas digital systems are accurate to within a fraction of a point. Putting calibration and color space issues aside, on a digitally graded production that is output to film, differences in color between the graded image and the projected print are much more likely due to the limited tolerance of chemical processing than to the precision of the digital image. Indeed, chemistry alone can account for the differences between several prints produced from a single negative.

8.3.8 Printer Lights

Photochemical color grading is achieved by varying the intensity of different colored light sources or by applying colored filters to a film being duplicated. Lab graders specify “printer lights” (or “points”) as incremental grading units, so an image might be made greener by “adding one green printer light,” or darkened by “reducing by one point.”17 Many film color graders and cinematographers are used to thinking about color differences in terms of printer lights. In addition, many projects that will be output to film inevitably go through some laboratory grading, to compensate for processing differences between development batches. For these reasons, it may be useful for the digital-grading system to mimic the effects of applying different printer lights, rather than (or in addition to) grading using a more arbitrary scale, or one based upon the digital image format.
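As a sketch, printer points can be mapped to a linear exposure multiplier; the figure of roughly 12 points per stop used below is a commonly quoted rule of thumb, not a universal calibration, and the function name is invented.

```python
# A minimal sketch of mimicking printer lights digitally, assuming
# the common rule of thumb that one printer light is about 1/12 of
# a stop (printer lights are a logarithmic unit; see footnote 17).
def printer_light_gain(points):
    """Convert a printer-light offset to a linear exposure multiplier."""
    stops = points / 12.0
    return 2.0 ** stops

# Example: "add one red light" scales the red channel's linear
# exposure by roughly 6% (2 ** (1/12) is about 1.059).
# image[..., 0] *= printer_light_gain(1)
```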


Figure 8–24   A blue-green transfer is possible by simply swapping an image’s blue and green components (see also the Color Insert)

8.3.9 Colorization

One of the most impressive (as well as most complex) capabilities of a digital-grading system is the addition of color to black-and-white images, substituting flesh tones, red roses, green foliage, and so on for the original shades of gray. As with the restoration of archival footage, this process is very laborious, but the results can be astonishing. Unfortunately, little automation is available for this process, so footage must be colorized through an intensive combination of masking and tinting (using either solid colors or gradients). Most of the time, it's done without affecting the luminosity information, so that the tonal range remains intact.


Figure 8–25   Digital-grading techniques can be used to artificially add color to a black-and-white image (see also the Color Insert)


Figure 8–26   False color can be assigned to monochrome digital images, mapping different colors onto different pixel values (see also the Color Insert)

8.3.10 False Color

Another way to add color to black-and-white images is by using false color. This process is most commonly used on infrared imagery to define different regions. In the simplest form, a LUT is generated for the footage, whereby a specific luminosity (gray level) is mapped onto a particular color. So pure black might appear as bright blue, slightly brighter shades as yellow, and so on. Because it can be impossible to predict how the colors will be distributed, and because the end results often bear no resemblance to reality, these techniques are usually relegated to scientific applications or special effects. However, similar processes can be applied to full-color images, and the results of such “false color” effects can be altered through digital grading (applied either before or after the false color process).
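A false-color LUT can be sketched as follows in Python/numpy; the palette is invented purely for illustration, and the function name is an assumption.

```python
# A minimal sketch of false color via a LUT, assuming 8-bit grayscale
# input. Each of the 256 gray levels is mapped to an arbitrary color.
import numpy as np

def false_color(gray_u8):
    """Map each gray level to a color through a 256-entry LUT."""
    levels = np.arange(256) / 255.0
    lut = np.stack([levels,                          # red rises with brightness
                    1.0 - np.abs(levels - 0.5) * 2,  # green peaks at mid-gray
                    1.0 - levels], axis=1)           # blue falls with brightness
    return lut[gray_u8]  # fancy indexing applies the LUT per pixel
```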

8.4 Grading Damage

As mentioned previously, grading is an inherently destructive process. Depending upon the source footage, and the amount and type of grading processes, the images might completely “break apart” visually or, more commonly, become subjected to increased levels of noise. Much of the time, these effects aren’t noticeable until viewed under the target conditions (such as projected onto a cinema screen), and they’re often difficult to resolve. Experienced graders know instinctively how different images hold up against different grading processes and which processes to avoid.

However, certain situations are unavoidable. For example, increasing the exposure of heavily underexposed images almost always increases an image’s noise level, making it appear to “buzz” or “crawl” when played back.

Fortunately, the digital intermediate pipeline offers methods to combat this and other types of image degradation, helping to “restore” images to their intended state (these methods are covered in Chapter 9).

Look Management

Several new systems are currently emerging to tackle the issue of look management. With these systems, such as Kodak’s “Look Manager System” (www.kodak.com), grading decisions can be made early on (i.e., “looks” can be defined), without compromising image quality. These grading decisions can be saved and loaded into the grading system during post-production to speed up the grading process.

However, the practicalities of such systems remain to be seen. First, one could argue that plenty of other aspects of the intermediate pipeline must be dealt with before designing color schemes that may or may not be used later. Second, although many grading systems store grading information as “grading metadata,” separate from the image data, these metadata formats are largely incompatible with one another. For look management systems to be successful, a common grading metadata standard has to be designed, one that is independent of the grading system. Finally, the largest problem may still be ensuring that every system in the production is properly calibrated; otherwise, the look management system becomes redundant.

8.5 Summary

Many factors determine how we perceive color, and even the same color might be displayed differently on different devices. To minimize this effect, a digital-grading system should be calibrated to some standard viewing environment, as objectively as possible.

Color grading can be used to accurately replicate realistic colors, or to deviate from them, in order to create aesthetic images. Digital-grading systems offer a number of tools for controlling colors in an image, making overall changes to the content of an image, or isolating specific regions for more detailed work. Changes can also be made dynamically, with the properties of an effect being adjusted over time.

Grading can be used for several tasks, from ensuring color continuity across a number of shots, to changing a day scene into a night scene, to adding color to a black-and-white image.

1 The eye has a nonlinear response to light. We perceive a light source that only outputs 18% of the light of a second light to be half as bright.

2 The actual measurement of luminance is biased toward the human visual system's response to light.

3 This can be a problem with old or low-quality video tapes, and ones that have been through several format conversions.

4 The term “correlated color temperature” refers to the physical effect of heating a “black body” until it emits light. Different temperatures produce different colored light, and the relationship between temperature and color is equivalent to the relationship between different light sources. So the color of a particular light source is not necessarily of a specific temperature; rather, it can be correlated against this heated black body effect.

5 The exception to this is when a “preliminary grade” is performed during scanning.

6 The true test for ensuring that the rest of the pipeline doesn't impact the quality of the material is to output it directly after acquisition, without applying any color-grading effects. For film material, this may mean scanning directly to a Cineon Printing Density (CPD) color space, depending upon the characteristics of the film recorder used for output.

7 The other issue is that if the grading list files or database are deleted or damaged somehow, all the grading work is lost.

8 One of the benefits of working with such a control desk is that you can keep your eyes on the image, rather than on the interface.

9 This is also one of the reasons why rescanning footage should be avoided: each scan may place the image contents in slightly different positions, particularly with scanners that aren't pin-registered.

10 Sometimes the reverse is true, depending upon the specifics of the grading system.

11 Good tracking algorithms can track features on a “subpixel” basis, meaning that movement is more accurately recorded.

12 Most colorists prefer to work with more tactile interfaces, especially those that allow them to watch the image, rather than the interface.

13 In some applications, gradients are created by generating a mask with a soft edge, rather than a separate gradient entity.

14 In many instances, different shots require exactly the same grading parameters as a shot previously graded, particularly shots cut from the same take. Many grading systems accelerate this process by allowing grading parameters to be copied between two shots or even linked, so that changes to one shot affect the others.

15 This also applies to digital and video cameras, which require the white balance to be set to specify the type of lighting.

16 There are a number of different techniques for doing so, but the end result is normally the same.

17 Printer lights are a logarithmic unit, reflecting the response of photographic film to light, so doubling the number of lights won’t necessarily result in doubling the image’s brightness.
