Chapter 11. 32-Bit HDR Compositing and Color Management


True realism consists in revealing the surprising things which habit keeps covered and prevents us from seeing.

 
 --Jean Cocteau (French director, painter, playwright, and poet)

Whether you are directly aware of limitations or not, you no doubt realize that there are limits to the ways images are processed and displayed on a computer. Your monitor likely only displays 8 bits of color per channel, and while its size (in pixel dimensions) has steadily increased over the last few years, this color depth limitation has hardly budged.

You may also be aware that although an After Effects project, by default, operates in the same limited 8-bit-per-channel mode as your monitor, this is hardly the optimal way to create an image. Other modes, models, and methods for color are available, including high-bit depths, alternate color spaces, and color management, and few topics in After Effects generate as much curiosity or confusion as these. Each of the features detailed here improves upon the standard digital color model you know best, but at the cost of requiring better understanding on your part.

In After Effects CS4 the process centers on Color Management, whose name would seem to imply an automated process that manages color for you, when in fact it is a complex set of tools allowing (even requiring) you to manage color effectively yourself.

On the other hand, 32-bit High Dynamic Range (HDR) compositing is routinely ignored by artists who could benefit from it, partly because it remains uncommon for source files to contain over-range color data: pixel values too bright for your monitor to display.

Film can and typically does contain these over-range color values. These are most often brought into After Effects as 10-bit log Cineon or DPX files, and importing, converting, and writing this format requires a bit of special knowledge. It’s an elegant and highly standardized system that has relevance even when you’re working with the most up-to-date, high-end digital cameras.

Color Management: Why Bother?

It’s normal to wish Color Management would simply go away. So many of us have produced footage with After Effects for years and devised our own systems to manage color through each stage of production. We’ve assumed, naively perhaps, that a pixel is a pixel and as long as we control the RGB value of that pixel, we maintain control over the appearance of the image.

The problem with this way of thinking is that it’s tied to the monitor. The way a given RGB pixel looks on your monitor is somewhat arbitrary—I’m typing this on a laptop, and I know that its monitor has higher contrast than my desktop monitors, one of which has a bluer cast than the other if I don’t adjust them to match. Not only that, the way that color operates on your monitor is nothing like the way it works in the real world, or even in a camera. Not only is the dynamic range far more limited, but also an arbitrary gamma adjustment is required to make images look right.

Color itself is not arbitrary. Although color is a completely human phenomenon—“color” as such does not exist except in our vision system and that of other higher primates—it is the result of measurable natural phenomena. Because the qualities of a given color are measurable to a large degree, a system is evolving to measure them, and Adobe is attempting to spearhead the progress of that system with its Color Management features.

Completely Optional

The Color Management feature set in After Effects is completely optional and disabled by default. Its features become necessary in cases including, but not necessarily limited to, the following:

  • A project relies on a color managed file (with an embedded ICC Profile). For example, a client provides an image or clip with specific managed color settings and requires that the output match.

  • A project will benefit from a linearized 1.0 gamma working space. If that means nothing to you, read on; this is the chapter that explains it.

  • Output will be displayed in some manner that’s not directly available on your system.

  • A project is shared and the color is adjusted on a variety of workstations, each with a calibrated monitor. The goal is for color corrections made on a given workstation to match once the shot moves on from that workstation.

To achieve these goals requires that some old rules be broken and new ones established.

Related and Mandatory

Other changes introduced in After Effects CS4 seem tied to Color Management but come into play even if you never enable it:

  • A video file in a DV or other Y’CbCr (YUV) format requires (and receives) automatic color interpretation upon import into After Effects, applying settings that would previously have been up to you to add. This is done by MediaCore, a little-known Adobe application that runs invisibly behind the scenes of Adobe video applications (see “Input Profile and MediaCore,” later in this chapter).

  • QuickTime gamma settings in general have become something of a moving target as Apple adds its own form of color management, whose effects vary from codec to codec. As a result, there are situations in which imported and rendered QuickTimes won’t look right. This is not the fault of Color Management, although you can use the feature set to correct the problems that come up (see “QuickTime,” below).

  • Linear blending (using a 1.0 gamma only for pixel-blending operations without converting all images to linear gamma) is possible without setting a linearized Project Working Space or enabling Color Management (see the section “Blend Colors Using 1.0 Gamma”).

Because these issues also affect how color is managed, they tend to get lumped in with the Color Management system when in fact they are distinct from it.

A Pixel’s Journey Through After Effects

Join me now as we follow color through After Effects, noting the various features that can affect its appearance or even its very identity—its RGB value. Although it’s not mandatory, it’s best to increase that pixel’s color flexibility and accuracy, warming it up to get it ready for the trip, by raising project bit depth above 8 bpc. Here’s why.

16-Bit-Per-Channel Composites

16-bit-per-channel color was added in After Effects 5.0 for one basic reason: to eliminate color quantization, most commonly seen as banding in subtle gradients and other threshold regions of an image. In 16 bpc mode there are 128 extra gradations between each R, G, B, and A value contained in the familiar 8 bpc mode.

Notes

Many but not all effects and plug-ins support 16 bpc color. To discern which ones do, with your project set to the target bit depth (16 bpc in this case), choose Show 16 bpc-Capable Effects Only from the Effects & Presets panel menu. Effects that are only 8 bpc aren’t off-limits; you should just be careful to place them where they are least likely to cause banding—typically either at the beginning or the end of the image pipeline.

Those increments are typically too fine for your eye to distinguish (or your monitor to display), but your eye easily notices banding, and when you start to make multiple adjustments to 8 bpc images, as may be required by Color Management features, banding is bound to appear in edge thresholds and shadows, making the image look bad.

You can raise color depth in your project by either Alt/Option-clicking on the color depth setting at the bottom of the Project panel or via the Depth menu in File > Project Settings. The resulting performance hit typically isn’t as bad as you might think.

Most of us prefer 8 bpc values simply because we’re used to them, but switching to 16 bpc mode doesn’t mean you’re stuck with incomprehensible pixel values of 32768, 0, 0 for pure red or 16384, 16384, 16384 for middle gray. In the panel menu of the Info panel, choose whichever numerical color representation works for you; this setting is used everywhere in the application, including the Adobe color picker (Figure 11.1). The following sections use 8 bpc values despite referring to 16 bpc projects.
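
If you want to see how those representations line up, here’s a quick sketch of the arithmetic in Python (purely illustrative, and the helper names are mine; the exact rounding After Effects performs internally may differ):

def to_decimal(v16):
    # 16 bpc in After Effects runs 0 to 32768, not 0 to 65535
    return v16 / 32768.0

def to_8bpc(v16):
    # the familiar 0 to 255 display of the same value
    return round(v16 / 32768.0 * 255)

print(to_decimal(32768), to_8bpc(32768))  # 1.0 255 (a pure channel)
print(to_decimal(16384), to_8bpc(16384))  # 0.5 128 (middle gray)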

Figure 11.1. Love working 16 bpc but hate analyzing 16-bit values that go up to 32768? Choose 8 bpc in the Info Panel Menu to display familiar 0 to 255 values. Or better yet, use Decimal values in all bit depths.

Monitor Calibration

Sometimes it becomes obvious that RGB values alone cannot describe pure colors; if you don’t know what I’m talking about, connect a still-working decade-old CRT monitor to your system and see how it looks. You can imagine that its 255, 255, 255 white is likely to look blue or yellow.

Assuming your monitor isn’t that far out of whack, third-party color calibration hardware and software can be used to generate a profile that is then stored and set as a system preference. This monitor profile accomplishes two things:

  • Defines a color space for compositing distinct from what is properly called monitor color space.

  • Offers control over the color appearance of the composition. Each pixel has not only an RGB value but a precise, absolute color.

In other words, the color values and how they interrelate change, as does the method used to display them.

Color Management: Disabled by Default

Import a file edited in another Adobe application such as Photoshop or Lightroom and it likely contains an embedded ICC color profile. This profile can tell After Effects how the colors should be interpreted and appear, instead of remaining as raw electrical signals.

Notes

Is there an external broadcast monitor attached to your system (set as an Output Device in Preferences > Video Preview)? Color Management settings do not apply to that device.

A file called sanityCheck.tif can be found on the book’s disc; it contains data and color gradients that will help you understand linear color later in the chapter. Import this file into After Effects and choose File > Interpret Footage > Main (Ctrl+F/Cmd+F, or context-click instead). The Interpret Footage dialog includes a Color Management tab.

Figure 11.2 shows how this tab appears with the default settings. Assign Profile is grayed out because, as the Description text explains, Color Management is off and color values are not converted. Color Management is enabled as soon as you assign a working space.

Figure 11.2. Until Color Management is enabled for the entire project, the embedded profile of a source image is not displayed in the Project panel, nor is it used.

Project Working Space

The proper choice of working space is typically the one that matches the “output intent,” the color space corresponding to the target device. The Working Space menu containing all possible choices is located in File > Project Settings (Ctrl+Alt+K/Cmd+Opt+K, or just click where you see the “bpc” setting along the bottom of the Project panel).

Profiles above the line are considered by Adobe to be the most likely candidates. Those below might include profiles used by such unlikely output devices as a color printer (Figure 11.3).

Figure 11.3. For better or worse, all of the color profiles active on the local system are listed as Working Space candidates, even such unlikely targets as the office color printer.

By default, Working Space is set to None (and thus Color Management is off). Choose a Working Space from the menu and Color Management is enabled, triggering the following:

  • Assigned profiles in imported files are activated and displayed atop the Project panel when it’s selected.

  • Imported files with no assigned profile are assumed to have a profile of sRGB IEC61966-2.1, hereafter referred to as simply sRGB.

  • Actual RGB values can and will change to maintain consistent color values.

Choose wisely; it’s a bad idea to change working space mid-project once you’ve begun adjusting color, because it will change the fundamental look of source footage and comps.

Okay, so it’s a cop-out to say “choose wisely” and not give actual advice. There’s a rather large document, included on the disc and also available at www.adobe.com/devnet/aftereffects/articles/color_management_workflow.html, that includes a table itemizing each and every profile included in After Effects.

We can forgo that for the moment in favor of a concise summary:

  • For HD display, HDTV (Rec. 709) is Adobe-sanctioned, but sRGB is similar and more of a reliable standard.

  • For monitor playback, sRGB is generally most suitable.

  • SDTV NTSC or SDTV PAL theoretically let you forgo a preview broadcast monitor, although it’s also possible to simulate these formats without working in them (“Display Management and Output Simulation,” below).

  • Film output is an exception and is discussed later in this chapter.

To say that a profile is “reliable” is like saying that a particular brand of car is reliable: It has been taken through a series of situations and has caused the least problems under various types of duress. I realize that with color management allegedly being so scientific and all, this sounds squirrelly, but it’s just the reality of an infinite variety of images heading for an infinite variety of viewing environments. There’s the scientifically tested reliability of the car and then there are real-world driving conditions.

Notes

A small yellow + sign appears in the middle of the Show Channel icon to indicate that Display Color Management is active (Figure 11.4).

Figure 11.4. When Use Display Color Management is active in the View menu (the default after you set a working space) this icon adds a yellow plus symbol at its center.

Gamut describes the range of possible saturation, keeping in mind that any pixel can be described by its hue, saturation, and brightness as accurately as its red, green, and blue. The range of hues accessible to human vision is rather fixed, but the amount of brightness and saturation possible is not—32 bpc HDR addresses both. The idea is to match, not outdo (and definitely not to undershoot) the gamut of the target.

Working spaces change RGB values. Open sanityCheck.tif in a viewer and move your cursor over the little bright red square; its values are 255, 0, 0. Now change the working space to ProPhoto RGB. Nothing looks different, but the values are now 179, 20, 26, meaning that with this wider gamut, color values do not need to be nearly as large in order to appear just as saturated, and there is headroom for far more saturation. You just need a medium capable of displaying the more saturated red in order to see it properly with this gamut. Many film stocks can do it, and your monitor cannot.

Input Profile and MediaCore

If an 8 bpc image file has no embedded profile, sRGB is assigned (Figure 11.5), which is close to monitor color space. This allows the file to be color managed, to preserve its appearance even in a different color space. Toggle Preserve RGB in the Color Management tab and the appearance of that image can change with the working space—not, generally, what you want, which is why After Effects goes ahead and assigns its best guess.

Figure 11.5. Any imported image with no color profile gets sRGB by default to bring it into the color management pipeline. You can override this setting in Interpret Footage > Color Management.

Video formats (QuickTime being by far the most common) don’t accept color profiles, but they do require color interpretation based on embedded data. After Effects uses an Adobe application called MediaCore to interpret these files automatically; it operates completely behind the scenes, invisible to you.

Notes

In many ways, MediaCore’s automation is a good thing. After Effects 7.0 had a little checkbox at the bottom of Interpret Footage labeled “Expand ITU-R 601 Luma Levels” that obligated you to manage incoming luminance range. With MediaCore, however, you lose the ability to override the setting. Expanded values above 235 and below 16 are pushed out of range, recoverable only in 32 bpc mode.

You know that MediaCore is handling a file when that file has Y’CbCr in the Embedded Profile info, including DV and YUV format files. In such a case the Color Management tab is completely grayed out, so there is no option to override the embedded settings.

Display Management and Output Simulation

And in the middle of all of this great responsibility comes a genuinely fun feature, Output Simulation, which simulates how your comp will look on a particular device. The “device” in question can include film projection, and the process of representing that environment on your monitor works better than you might expect.

Suppose you need to know how an image (Figure 11.6) would appear on NTSC and PAL standard definition television, and you don’t have a standard def broadcast monitor to preview either of those formats.

Figure 11.6. The source image (courtesy of Michael Scott) is adjusted precisely in a color managed project.

No problem. With the viewer selected choose View > Simulate Output > SDTV NTSC. Here’s what happens:

  • The appearance of the footage changes to match the output simulation. The viewer displays After Effects’ simulation of an NTSC monitor.

  • Unlike when you change the working space, color values do not change due to output simulation.

  • The image is actually assigned two separate color profiles in sequence: a scene-referred profile to simulate the output profile you would use for NTSC (SDTV NTSC) and a second profile that actually simulates the television monitor that would then display that rendered output (SMPTE-C). To see what these settings are, and to customize them, choose View > Simulate Output > Custom to open the Custom Output Simulation dialog (Figure 11.7).

    Figure 11.7. This Custom Output Simulation dialog now nicely shows the four stages from source RGB image to the monitor. The middle two stages are those set by Output Simulation; the first occurs on import, the final when the image is displayed.

    Close-Up: Interpretation Rules

    A file on your system named interpretation rules.txt defines how files are automatically interpreted as they are imported into After Effects. To change anything in this file, you should be something of a hacker, able to look at a line like

    # *, *, *, "sDPX", * ~ *, *, *, *, "ginp", *

    and, by examining surrounding lines and comments, figure out that this line is commented out (with the # sign at the beginning), and that the next-to-last argument, "ginp" in quotes, assigns the Kodak 5218 film profile if the file type corresponds to the fourth argument, "sDPX". If this makes you squirm, don’t touch it; call a nerd. In this case, removing the # sign at the beginning would enable this rule so that DPX files would be assigned a Kodak 5218 profile (without it, they are assigned the working space).

    If this isn’t your cup of tea, as it won’t be for most artists, leave it to someone willing to muck around with this stuff.

This gets really fun with simulations of projected film (Figure 11.8)—not only the print stock but the appearance of projection is simulated, allowing an artist to work directly on the projected look of a shot instead of waiting until it is filmed out and projected.

Figure 11.8. The result of Output Simulation shows bluer highlights, deeper blacks (which may not read on the printed page) and a less saturated red dress. If you wanted the image to appear different when projected, you would now further adjust it with this view active. It might then look “wrong” with Output Simulation off, but “right” when finally filmed out and projected.

Tip

Having trouble with View > Simulate Output appearing grayed out? Make sure a viewer window is active when you set it; it operates on a per-viewer basis.

Here’s a summary of what is happening to the source image in the example project:

  1. The source image is interpreted on import (on the Footage Settings > Color Management tab) according to its assigned input profile.

  2. The image is transformed to the Project Working Space; its color values will change to preserve its appearance.

  3. With View > Simulate Output and any profile selected

    1. Color values are transformed to the specified Output Profile.

    2. Color appearance (but not actual values) is transformed to a specified Simulation Profile.

  4. With View > Display Color Management enabled (required for step 3), color appearance (but not actual values) is transformed to the Monitor Profile (the one that lives in system settings, which you created when you calibrated your monitor, remember?)

Notes

In Photoshop, there is no Project Working Space, only the document Working Space, because there are no projects (no need to accommodate multiple sources together in a single nondestructive project).

That leaves output, which relies only on steps 1 and 2. The others are only for previewing, although you may wish to render an output simulation (to show the filmed-out look on a video display in dailies, for example). To replicate the two-stage color conversion of output simulation:

  1. Apply the Color Profile Converter effect, and match the Output Profile setting to the one listed under View > Simulate Output > Custom. Change the Intent setting to Absolute Colorimetric.

  2. Apply a second Color Profile Converter effect, and match the Input Profile setting to the Simulation Profile under View > Simulate Output > Custom (leaving Intent as the default Relative Colorimetric).

The Output Profile in the Render Queue then should match the intended display device.

Now let’s leave simulation behind and look at what happens when you try to preserve actual colors in rendered output. (Which is, after all, the whole point, right?)

Output Profile

By default, After Effects uses Working Space as the Output Profile, usually the right choice. Place the comp in the Render Queue and open the Output Module; on the Color Management tab you can select a different profile to apply on output. The pipeline from the last section now adds a third step to the first two:

  1. The source image is interpreted on import (on the Footage Settings > Color Management tab).

  2. The image is transformed to the working space; its color values will change to preserve its appearance.

  3. The image is transformed to the output profile specified in Output Module Settings > Color Management.

If the profile in step 3 is different from that of step 2, color values will change to preserve color appearance. If the output format supports embedded ICC profiles (presumably a still image format such as TIFF or PSD), then a profile will be embedded so that any other application with color management (presumably an Adobe application such as Photoshop or Illustrator) will continue to preserve those colors.

In the real world, of course, rendered output is probably destined for a device or format that doesn’t support color management and embedded profiles. That’s okay, except in the case of QuickTime, which may further change the appearance of the file, almost guaranteeing that the output won’t match your composition without special handling.

QuickTime

QuickTime continues to have special issues of its own, separate from but related to Adobe’s color management. Because Apple constantly revises QuickTime and the spec has been in some flux, the issues particular to the version of QuickTime at this writing (7.5.5) and how After Effects handles it may continue to evolve.

The current challenge is that Apple has begun implementing its own form of color management without sharing the specification publicly or letting anyone know when it changes. The gamma of QuickTime files can be specifically tagged, and the tag is then interpreted uniquely by each codec, so files with Photo-JPEG compression have a different gamma than files with H.264 compression. Even files with the default Animation setting, which are effectively uncompressed and presumably neutral, display an altered gamma, and at this writing, that gamma will display differently depending on which application is displaying it. Gamma handling is not even consistent among Apple’s own video applications.

The Match Legacy After Effects QuickTime Gamma Adjustments toggle in Project Settings is not only the longest-titled checkbox in the entire application, it is an option you should not need, in theory at least, unless you’ve opened up an old 7.0 (or earlier) project, or you need a Composition to match what you see in QuickTime Player.

However, many of us deliver client review files as QuickTime movies, so your best bet is to enable Color Management for any project intended to output QuickTime video. The option to disable the Match Legacy toggle is reserved for cases in which that approach doesn’t work; these do unfortunately crop up and remain a moving target as new versions of QuickTime are released, further revising the standard.

To Bypass Color Management

Headaches like that make many artists long for the simpler days of After Effects 7.0 and opt to avoid Color Management altogether, or to use it only selectively. To completely disable the feature and return to 7.0 behavior:

  1. In Project Settings, set Working Space to None (as it is by default).

  2. Enable Match Legacy After Effects QuickTime Gamma Adjustments.

Being more selective about how color management is applied—to take advantage of some features while leaving others disabled for clarity—is really tricky and tends to stump some pretty smart users. Here are a couple of final tips that may nonetheless help:

  • To disable a profile for incoming footage, check Preserve RGB in Interpret Footage (Color Management tab). No attempt will be made to preserve the appearance of that clip.

  • To change the behavior causing untagged footage to be tagged with an sRGB profile, in interpretation rules.txt find this line

    # soft rule: tag all untagged footage with an sRGB profile
    *, *, *, *, * ~ *, *, *, *, "sRGB", *

    and add a # at the beginning of the second line to assign no profile, or change "sRGB" to a different format (options listed in the comments at the top of the file).

  • To prevent your display profile from being factored in, disable View > Use Display Color Management and the pixels are sent straight to the display.

  • To prevent any file from being color managed, check Preserve RGB in Output Module Settings (Color Management tab).

Note that any of the preceding tips may lead to unintended consequences. Leaving a working space enabled and disabling specific features is tricky and potentially dangerous to your health and sanity. Add your own further disclaimers here.

Film and Dynamic Range

The previous section showed how color benefits from precision and flexibility. The precision is derived from the steps just discussed; flexibility is the result of having a wide dynamic range, because there is a far wider range of color and light levels in the physical world than can be represented on your 8-bit-per-channel display.

However, there is more to color flexibility than toggling 16 bpc in order to avoid banding, or even color management, and there is an analog image medium that is capable of going far beyond 16 bpc color, and even a file format capable of representing it.

Film and Cineon

To paraphrase Mark Twain, reports of film’s demise have been exaggerated; not only that, but new formats make use of tried and true filmic standards. Here’s a look at the film process and the digital files on which it relies.

After film has been shot, the negative is developed, and shots destined for digital effects work are scanned frame by frame. During this telecine process, some initial color decisions are made before the frames are output as a numbered sequence of Cineon files, named after Kodak’s now-defunct film compositing system. Both Cineon files and the related format, DPX, store pixels uncompressed at 10 bits per channel. Scanners are usually capable of scanning 4K plates, and these have become more popular for visual effects usage, although many still elect to scan at half resolution, creating 2K frames around 2048 by 1556 pixels and weighing in at almost 13 MB each.
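
That “almost 13 MB” is easy to verify, keeping in mind that Cineon and DPX pack three 10-bit channels into a single 32-bit word per pixel; a rough sketch in Python, ignoring the small file header:

width, height = 2048, 1556   # 2K full-aperture scan
bytes_per_pixel = 4          # three 10-bit channels packed into one 32-bit word
size_mb = width * height * bytes_per_pixel / 2**20
print(round(size_mb, 1))     # 12.2, "almost 13 MB" before the header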

Working with Cineon Files

Because the process of shooting and scanning film is pretty expensive, almost all Cineon files ever created are the property of some Hollywood studio and unavailable to the general public. The best known free Cineon file is Kodak’s original test image, affectionately referred to as Marcie (Figure 11.9) and available from Kodak’s Web site (www.kodak.com/US/en/motion/-support/dlad/) or the book’s disc. To get a feel for working with film, drop the file called dlad_2048X1556.cin into After Effects, which imports Cineon files just fine.

Figure 11.9. This universal sample image has been converted from film of a bygone era to Cineon format found on the book’s disc.

The first thing you’ll notice about Marcie is that she looks funny, and not just because this photo dates back to the ’80s. Cineon files are encoded in something called log color space. To make Marcie look more natural, open the Interpret Footage dialog, select the Color Management tab, click Cineon Settings and choose the Over Range preset (instead of the default Full Range). Ah, that looks better; the log image is now converted to the monitor’s color space.

Notes

Also included on the book’s disc is a Cineon sequence from the RED Camera (and courtesy of fxphd.com), showing off that digital camera’s dynamic range and overall image quality. This is one example of Cineon format that is remaining viable with digital source.

It would seem natural to convert Cineon files to the monitor’s color space, work normally, and then convert the end result back to log, but to do so would be to throw away valuable data. Try this: Apply the Cineon Converter effect and switch the Conversion Type from Linear to Log. This is a preview of how the file would be written on output back to a Cineon log file. Upon further examination of this conversion, you see a problem: in an 8 bpc (or even 16 bpc) project, the bright details in Marcie’s hair don’t survive the trip (Figure 11.10).

Figure 11.10. When you convert an image from log space (left) to linear (center) and then back to log (right), the brightest details are lost.

What’s going on with this mystical Cineon file and its log color space that makes it so hard to deal with? And more importantly, why? Well, it turns out that the engineers at Kodak know a thing or two about film and have made no decisions lightly. But to properly answer the question, it’s necessary to discuss some basic principles of photography and light.

Notes

Notes

As becomes evident later in the chapter, the choice of the term “linear” as an alternative to “log” space for Cineon Converter is unfortunate, because “linear” specifically means neutral 1.0 gamma; what Cineon Converter calls “linear” is in fact gamma encoded.

Dynamic Range

The pictures shown in Figure 11.11 were taken in sequence from a roof on a winter morning. Anyone who has ever tried to photograph a sunrise or sunset with a digital camera should immediately recognize the problem at hand. With a standard exposure, the sky comes in beautifully, but foreground houses are nearly black. Using longer exposures you can bring the houses up, but by the time they are looking good the sky is completely blown out.

Figure 11.11. Different exposures when recording the same scene clearly produce widely varying results.

The limiting factor here is the digital camera’s small dynamic range, which is the difference between the brightest and darkest things that can be captured in the same image. An outdoor scene has a wide array of brightnesses, but any digital device can read only a slice of them. You can change exposure to capture different ranges, but the size of the slice is fixed.

Our eyes have a much larger dynamic range and our brains have a wide array of perceptual tricks, so in real life the houses and sky are both seen easily. But even eyes have limits, such as when you try to see someone behind a bright spotlight or use a laptop computer in the sun. The spotlight has not made the person behind any darker, but when eyes adjust to bright lights (as they must to avoid injury), dark things fall out of range and simply appear black.

White on a monitor just isn’t very bright, which is one reason we work in dim rooms with the blinds pulled down. When you try to represent the bright sky on a dim monitor, everything else in the image has to scale down in proportion. Even when a digital camera can capture extra dynamic range, your monitor must compress it in order to display it.

A standard 8-bit computer image uses values 0 to 255 to represent RGB pixels. If you record a value above 255—say 285 or 310—that represents a pixel beyond the monitor’s dynamic range, brighter than white or overbright. Because 8-bit pixels can’t actually go above 255, overbright information is stored as floating point decimals where 0.0 is black and 1.0 is white. Because floating point numbers are virtually unbounded, 0.75, 7.5, or 750.0 are all acceptable values, even though everything above 1.0 will clip to white on the monitor (Figure 11.12).
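
A tiny sketch of the idea, using a hypothetical scene value that exceeds 8-bit range:

raw = 310                   # hypothetical scene value brighter than 8-bit white
print(min(raw, 255))        # 255: 8-bit storage clips, the detail is destroyed
v = raw / 255.0             # floating point: ~1.22, an overbright value
print(v)                    # the data survives for any later operation
print(min(v, 1.0))          # the monitor still clips it to 1.0 for display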

Figure 11.12. Monitor white represents the upper limit for 8-bit and 16-bit pixels, while floating point can go arbitrarily higher (depending on format) or lower; the range also extends below absolute black, 0.0—values that are theoretical and not part of the world you see (unless you’re in outer space, staring into a black hole).

In recent years, techniques have emerged to create HDR images from a series of exposures—floating point files that contain all light information from a scene (Figure 11.13). The best-known paper on the subject was published by Debevec and Malik at SIGGRAPH ’97 (www.debevec.org has details). In successive exposures, values that remain within range can be compared to describe how the camera is responding to different levels of light. That information allows a computer to connect bright areas in the scene to the darker ones and calculate accurate floating point pixel values that combine detail from each exposure.
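
To make the idea concrete, here is a deliberately simplified merge sketch (the function and its weighting scheme are mine, and it assumes the camera response has already been linearized, which is precisely the hard part the Debevec and Malik paper solves; real implementations also recover that response curve from the exposures themselves):

import numpy as np

def merge_exposures(images, times):
    # images: float arrays scaled 0.0 to 1.0; times: exposure durations.
    # Weight mid-range pixels most heavily, since near-black values are
    # noisy and near-white values are clipped, then average the
    # per-exposure radiance estimates (pixel / exposure time).
    acc = np.zeros_like(images[0])
    wsum = np.zeros_like(images[0])
    for img, t in zip(images, times):
        w = 1.0 - np.abs(img - 0.5) * 2.0   # hat-shaped confidence weight
        acc += w * (img / t)
        wsum += w
    return acc / np.maximum(wsum, 1e-6)     # scene-referred, can exceed 1.0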

Figure 11.13. Consider the floating point pixel values for this HDR image; they relate to one another proportionally, and continue to do so whether the image is brightened or darkened, because the values do not need to clip at 1.0.

But with all the excitement surrounding HDR imaging and improvements in the dynamic range of video cameras, many forget that for decades there has been another medium available for capturing dynamic range far beyond what a computer monitor can display or a digital camera can capture.

Notes

Photoshop’s Merge to HDR feature allows you to create your own HDR images from a series of locked-off photos at varied exposures.

That medium is film.

Cineon Log Space

A film negative gets its name because areas exposed to light ultimately become dark and opaque, and areas unexposed are made transparent during developing. Light makes dark. Hence, negative.

Dark is a relative term here. A white piece of paper makes a nice dark splotch on the negative, but a lightbulb darkens the film even more, and a photograph of the sun causes the negative to turn out darker still. By not completely exposing to even bright lights, the negative is able to capture the differences between bright highlights and really bright highlights. Film, the original image capture medium, has always been high dynamic range.

If you were to graph the increase in film “density” as increasing amounts of light expose it, you’d get something like Figure 11.14. In math, this is referred to as a logarithmic curve. I’ll get back to this in a moment.

Figure 11.14. Graph the darkening (density) of film as increasing amounts of light expose it and you get a logarithmic curve.

Digital Film

If a monitor’s maximum brightness is considered to be 1.0, the brightest value film can represent is officially considered by Kodak to be 13.53 (although using the more efficient ICC color conversion, outlined later in the chapter, reveals brightness values above 70). Note this only applies to a film negative that is exposed by light in the world as opposed to a film positive, which is limited by the brightness of a projector bulb, and is therefore not really considered high dynamic range. A Telecine captures the entire range of each frame and stores the frames as a sequence of 10-bit Cineon files. Those extra two bits mean that Cineon pixel values can range from 0 to 1023 instead of the 0 to 255 in 8-bit files.

Having four times as many values to work with in a Cineon file helps, but considering you have 13.53 times the range to record, care must be taken in encoding those values. The most obvious way to store all that light would simply be to evenly squeeze 0.0 to 13.53 into the 0 to 1023 range. The problem with this solution is that it would only leave 75 code values for the all-important 0.0 to 1.0 range, the same as allocated to the range 10.0 to 11.0, which you are far less interested in representing with much accuracy. Your eye can barely tell the difference between two highlights that bright—it certainly doesn’t need 75 brightness variations between them.
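
The arithmetic behind that number is simple enough:

codes_total = 1023   # available 10-bit code values
top_value = 13.53    # brightest film value when monitor white = 1.0
print(int(codes_total / top_value))   # 75 codes for all of 0.0 to 1.0...
# ...and the same 75 for the barely distinguishable 10.0 to 11.0 range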

A proper way to encode light on film would quickly fill up the usable values with the most important 0.0 to 1.0 light and then leave space left over for the rest of the negative’s range. Fortunately, the film negative itself with its logarithmic response behaves just this way.

Cineon files are often said to be stored in log color space. Actually it is the negative that uses a log response curve and the file is simply storing the negative’s density at each pixel. In any case, the graph in Figure 11.15 describes how light exposes a negative and is encoded into Cineon color values according to Kodak, creators of the format.

Figure 11.15. Kodak’s Cineon log encoding is also expressed as a logarithmic curve, with labels for the visible black and white points that correspond to 0 and 255 in normal 8-bit pixel values.

One strange feature in this graph is that black is mapped to code value 95 instead of 0. Not only does the Cineon file store whiter-than-white (overbright) values, it also has some blacker-than-black information. This is mirrored in the film lab when a negative is printed brighter than usual and the blacker-than-black information can reveal itself. Likewise, negatives can be printed darker and take advantage of overbright detail. The standard value mapped to monitor white is 685, and everything above is considered overbright.
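
For the curious, here is a sketch of the commonly used Kodak-style conversion from 10-bit code values to linear light, ignoring softclip and treating 0.002 as the negative density step per code value (as the sidebar that follows notes, production pipelines often use subtly different curves):

def cineon_to_linear(code, black=95, white=685, gamma=0.6):
    # 0.002 is the density increment represented by one code value
    offset = 10 ** ((black - white) * 0.002 / gamma)
    value = 10 ** ((code - white) * 0.002 / gamma)
    return (value - offset) / (1 - offset)   # normalized so white -> 1.0

print(cineon_to_linear(95))     # 0.0: black
print(cineon_to_linear(685))    # 1.0: monitor white
print(cineon_to_linear(1023))   # ~13.5: the brightest film value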

Close-Up: All About Log

You may first have heard of logarithmic curves in high school physics class, if you ever learned about the decay of radioactive isotopes.

If a radioactive material has a half-life of one year, half of it will have decayed after that time. The next year, half of what remains will decay, leaving a quarter, and so on. To calculate how much time has elapsed based on how much material remains, a logarithmic function is used.

Light, another type of radiation, has a similar effect on film. At the molecular level, light causes silver halide crystals to react. If film exposed for some short period of time causes half the crystals to react, repeating the exposure will cause half of the remaining to react, and so on. This is how film gets its response curve and the ability to capture even very bright light sources. No amount of exposure can be expected to affect every single crystal.

Although the Kodak formulas are commonly used to transform log images for compositing, other methods have emerged. The idea of having light values below 0.0 is dubious at best, and many take issue with the idea that a single curve can describe all film stocks, cameras, and shooting environments. As a different approach, some visual effects facilities take care to photograph well-defined photographic charts and use the resultant film to build custom curves that differ subtly from the standard Kodak one.

As much as Cineon log is a great way to encode light captured by film, it should not be used for compositing or other image transformations. This point is so important that it just has to be emphasized again:

Encoding color spaces are not compositing color spaces.

To illustrate this point, imagine you had a black pixel with Cineon value 95 next to an extremely bright pixel with Cineon’s highest code value, 1023. If these two pixels were blended together (say, if the image was being blurred), the result would be 559, which is somewhere around middle gray (0.37 to be precise). But when you consider that the extremely bright pixel has a relative brightness of 13.5, that black pixel should only have been able to bring it down to 6.75, which is still overbright white! Log space’s extra emphasis on darker values causes standard image processing operations to give them extra weight, leading to an overall unpleasant and inaccurate darkening of the image. So, final warning: If you’re working with a log source, don’t do image processing in log space!
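
You can verify the numbers in that warning by decoding them, reusing the same Kodak-style conversion sketched earlier (redefined here so the snippet stands alone):

def cineon_to_linear(code, black=95, white=685, gamma=0.6):
    offset = 10 ** ((black - white) * 0.002 / gamma)
    value = 10 ** ((code - white) * 0.002 / gamma)
    return (value - offset) / (1 - offset)

log_blend = (95 + 1023) // 2         # 559: the naive blend in log space
print(cineon_to_linear(log_blend))   # ~0.37, roughly middle gray
linear_blend = (cineon_to_linear(95) + cineon_to_linear(1023)) / 2
print(linear_blend)                  # ~6.8: correctly still overbright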

Video Gamma Space

Because log space certainly doesn’t look natural, it probably comes as no surprise that it is a bad color space to work in. But there is another encoding color space that you have been intimately familiar with for your entire computer-using life and no doubt have worked in directly: the video space of your monitor.

Notes

The description of gamma in video is oversimplified here somewhat because the subject is complex enough for a book of its own. An excellent one is Charles Poynton’s Digital Video and HDTV Algorithms and Interfaces (Morgan Kaufmann).

You may have always assumed that 8-bit monitor code value 128, halfway between black and white, makes a gray that is half as bright as white. If so, you may be shocked to hear that this is not the case. In fact, 128 is much darker—not even a quarter of white’s brightness on most monitors.

A system where half the input gives you half the output is described as linear, but monitors (like many things in the real world) are nonlinear. When a system is nonlinear, you can usually describe its behavior using the gamma function, shown in Figure 11.16 and the equation

output = input^gamma     where 0 ≤ input ≤ 1
Figure 11.16. Graph of monitor gamma (2.2) with file gamma (0.4545) and linear (1.0). These are the color curves in question, with 0.4545 and 2.2 each acting as the direct inverse of the other.

In this function, the darkest and brightest values (0.0 and 1.0) are always fixed, and the gamma value determines how the transition between them behaves. Successive applications of gamma can be concatenated by multiplying them together. Applying gamma and then 1/gamma has the net result of doing nothing. Gamma 1.0 is linear.
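
In code form, assuming values normalized so that 0.0 is black and 1.0 is white (a quick sketch; the function name is mine):

def gamma_curve(x, g):
    # endpoints 0.0 and 1.0 stay fixed; g shapes the curve between them
    return x ** g

mid = 128 / 255
print(gamma_curve(mid, 2.2))                        # ~0.22: value 128 displays dark
print(gamma_curve(gamma_curve(mid, 1 / 2.2), 2.2))  # ~0.5: encode, then display
# successive gammas multiply: applying 2.0 then 1.1 is the same as 2.2
print(gamma_curve(gamma_curve(mid, 2.0), 1.1), gamma_curve(mid, 2.2))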

Mac monitors have traditionally had a gamma of 1.8, while the standard gamma value for PCs is 2.2. Because the electronics in your screen respond sluggishly to lower levels of input voltage, a 1.0 gamma simply displays too dark in either case; boosting this value compensates correctly.

The reason digital images do not appear dark, however, is that they have all been created with the inverse gamma function baked in to prebrighten pixels before they are displayed (Figure 11.17). Yes, all of them.

Figure 11.17. The gamma settings in the file and monitor complement one another to result in faithful image reproduction.

Close-Up: Gamma-rama

In case all this gamma talk hasn’t already blown your mind, allow me to clarify how monitor gamma and human vision work together. The question often comes up—why is middle gray 18% and not 50%? And why does 50% gray look like middle gray on my monitor, but not on a linear color chart?

It turns out that your eyes also have a nonlinear response to color—your vision brightens low light, which helps you to see where it’s dim, a survival advantage. The human eye is very sensitive to small amounts of light, and it gets less sensitive as brightness increases. Your eye effectively brightens the levels, and objects in the world are, in fact, darker than they appear—or rather, they become darker when we represent their true linear nature.

The fact that 18% gray (or somewhere around that level; there is some disagreement about the exact number) looks like middle gray to your eye tells us that the eye does its own gamma correction of roughly 0.36: 0.18 raised to the power 0.36 comes out to approximately 0.5, or 50%.

Because encoding spaces are not compositing spaces, working directly with images that appear on your monitor can pose problems. Similar to log encoding, video gamma encoding allocates more values to dark pixels, so they weigh more than they should. Video color space is not much more valid than Cineon color space for recreating the way light behaves in the world at large.

Linear Floating Point HDR

In the real world, light behaves linearly. Turn on two lightbulbs of equivalent wattage where you previously had one and the entire scene becomes exactly twice as bright.

A linear color space lets you simulate this effect simply by doubling pixel values. Because this re-creates the color space of the original scene, linear pixels are often referred to as scene-referred values, and doubling them in this manner can easily send values beyond monitor range.

The Exposure effect in After Effects converts the image to which it is applied to linear color before doing its work unless you specifically tell it not to do so by checking Bypass Linear Light Conversion. It internally applies a .4545 gamma correction to the image (1 divided by 2.2, inverting standard monitor gamma) before adjusting.

A common misconception is that if you work solely in the domain of video you have no need for floating point. But just because your input and output are restricted to the 0.0 to 1.0 range doesn’t mean that overbright values above 1.0 won’t figure into the images you create. The 11_sunrise.aep project included on your disc shows how they can add to your scene even when created on the fly.

Notes

To follow this discussion, choose Decimal in the Info panel menu (this is the default for 32 bpc). 0.0 to 1.0 values are those falling in Low Dynamic Range, or LDR—those values typically described in 8 bit as 0 to 255. Any values outside this range are HDR, 32 bpc only.

The examples in Table 11.1 show the difference between making adjustments to digital camera photos in their native video space and performing those same operations in linear space. In all cases, an unaltered photograph featuring the equivalent in-camera effect is shown for comparison.

Table 11.1. Comparison of Adjustments in Native Video Space and in Linear Space

 

The table is a grid of images with three columns (Brighten One Stop, Lens Defocus, and Motion Blur) and four rows: the Original Image, the result Filtered in Video Space, the result Filtered in Linear Space, and a Real-World Photo of the equivalent in-camera effect.

The table’s first column contains the images brightened by one stop, an increment on a camera’s aperture, which controls how much light is allowed through the lens. Widening the aperture by one stop allows twice as much light to enter. An increase of three stops brightens the image by a factor of eight (2 × 2 × 2, or 23).

To double pixel values in video space is to quickly blow out bright areas in the image. Video pixels are already encoded with extra brightness and can’t take much more.

The curtain and computer screen lose detail in video space that is retained in linear space. The linear image is nearly indistinguishable from the actual photo for which camera exposure time was doubled (another practical way to brighten by one stop).

The second column simulates an out-of-focus scene using Fast Blur. You may be surprised to see an overall darkening with bright highlights fading into the background—at least in video space. In linear, the highlights pop much better. See how the little man in the Walk sign stays bright in linear but almost fades away in video because of the extra emphasis given to dark pixels in video space. Squint your eyes and you notice that only the video image darkens overall. Because a defocused lens doesn’t cause any less light to enter it, regular 8 bpc blur does not behave like a true defocus.

The table’s third column uses After Effects’ built-in motion blur to simulate the streaking caused by quick panning as the photo was taken. Pay particular attention to the highlight on the lamp; notice how it leaves a long, bright streak in the linear and in-camera examples. Artificial dulling of highlights is the most obvious giveaway of nonlinear image processing.

Artists have dealt with the problems of working directly in video space for years without even knowing we’re compensating all the time. A perfect example is the Screen transfer mode, which is additive in nature but whose calculations are clearly convoluted when compared with the pure Add transfer mode. Screen uses a multiply-toward-white function with the advantage of avoiding the clipping associated with Add. But Add’s bad reputation comes from its application to already-bright video-space images. Screen was invented only to help people be productive when working in video space, without overbrights; Screen darkens overbrights (Figure 11.18). Real light doesn’t Screen, it Adds. Add is the new Screen, Multiply is the new Hard Light, and many other blending modes fall away completely in linear floating point.
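
The two formulas side by side make the point; a sketch with values normalized so that 1.0 is monitor white:

def add(a, b):
    return a + b                     # how light actually combines

def screen(a, b):
    return 1 - (1 - a) * (1 - b)     # multiply-toward-white

# in 0.0 to 1.0 video space, Add clips while Screen stays politely in range
print(add(0.8, 0.7), screen(0.8, 0.7))   # 1.5 (clips on display) vs 0.94
# with overbright values, Screen suppresses the very highlights Add preserves
print(add(2.0, 0.5), screen(2.0, 0.5))   # 2.5 vs 1.5
print(screen(2.0, 2.0))                  # 0.0 (!) where Add gives 4.0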

Figure 11.18. Watch those highlights. Adding in video space blows out (left), but Screen in video looks better (center). Adding in linear is best (right).

HDR Source and Linearized Working Space

Should you in fact be fortunate enough to have high-bit-depth source images with over-range values, there are indisputable benefits to working in 32-bit linear, even if your final output uses a plain old video format that cannot accommodate these values.

Tip

Linear blending is available without 32 bpc HDR; in Project Settings, choose Blend Colors using 1.0 Gamma. This feature is described in detail near the end of the chapter.

In Figure 11.19, the lights are severely clipped by video space, which is not a problem so long as the image is only displayed; all of the images look fine printed on this page or displayed on your monitor. Add motion blur, however, and you see the problem at its most exaggerated; the points of light should not lose their intensity simply by being put into motion.

Figure 11.19. An HDR image is blurred without floating point (left) and with floating point (center), before being shown as low dynamic range (right).

The benefits of floating point aren’t restricted to blurs, however; they just happen to be an easy place to see the difference. Every operation in a compositing pipeline gains extra realism from the presence of floating point pixels and linear blending.

Close-Up: Terminology

Linear floating point HDR compositing uses radiometrically linear, or scene-referred, color data. For the purposes of this discussion, this is perhaps best called “linear light compositing,” or “linear floating point,” or just simply, “linear.” The alternative mode to which you are accustomed is “gamma-encoded,” or “monitor color space,” or simply, “video.”

Figure 11.20 features an HDR image on which a simple composite is performed, once in video space and once using linear floating point. In the floating point version, the dark translucent layer acts like sunglasses on the bright window, revealing extra detail exactly as a filter on a camera lens would. The soft edges of a motion-blurred object also behave realistically as bright highlights push through. Without floating point there is no extra information to reveal, so the window looks clipped and dull and motion blur doesn’t interact with the scene properly.

Figure 11.20. A source image with over-range values in the highlights (top) is composited without floating point (bottom left) and with floating point (bottom right).

32 Bits per Channel

Although it is not necessary to use HDR source to take advantage of an HDR pipeline, it offers a clear glimpse of this brave new world. Open 11_treeHDR_lin.aep; it contains a comp made up of a single image in 32-bit EXR format (used to create Figure 11.19). With the Info panel clearly visible, move your cursor around the frame.

Notes

Included on the disc are two similar images, sanityCheck.exr and sanityCheck.tif. The 32 bpc EXR file is linearized, but the 8 bpc TIFF file is not. Two corresponding projects are also included, one using no color profile, the other employing a linear profile. These should help illustrate the different appearances of a linear and a gamma-encoded image.

As your cursor crosses highlights—the lights on the tree, specular highlights on the wall and chair, and most especially, the window—the values shown are well above 1.0, the maximum you will ever see doing the same in 8 bpc or 16 bpc mode. Remember that you can quickly toggle between bit depths by Alt/Option-clicking the project color depth identifier at the bottom of the Project panel.

Any experienced digital artist would assume that there is no detail in that window—it is blown out to solid white forevermore in LDR. However, you may have noticed an extra icon and accompanying numerical value that appear at the bottom of the Composition panel in a 32 bpc project (Figure 11.21). This is the Exposure control; its icon looks like a camera aperture, and it performs an analogous function, controlling the exposure (total amount of light) of a scene the way you would stop a camera up or down by adjusting its aperture.

Figure 11.21. Exposure is an HDR preview control that appears in the Composition panel in 32 bpc mode.

Drag to the left on the numerical text and something amazing happens. Not only does the lighting in the scene decrease naturally, as if the light itself were being brought down, but at somewhere around −10.0, a gentle blue gradient appears in the window (Figure 11.22, left).

Figure 11.22. At −10 Exposure, the room is dark other than the tree lights and detail becomes visible out the window. At +3, the effect is exactly that of a camera that was open 3 stops brighter than the unadjusted image.

Drag the other direction, into positive Exposure range, and the scene begins to look like an overexposed photo; the light proportions remain and the highlights bloom outward (Figure 11.22, right).

Notes

Keep in mind that for each 1.0 adjustment upward or downward of Exposure, you double (or halve) the light levels in the scene. Echoing the earlier discussion, a +3.0 Exposure setting sets the light levels 8× (or 2³) brighter.

The Exposure control in the Composition panel is a preview-only control (there is an effect by the same name that renders); scan with your cursor, and Info panel values do not vary according to its setting. This control offers a quick way to check what is happening in the out-of-range areas of a composition. With a linear light image, each integer increment represents the equivalent of one photographic stop, or a doubling (or halving) of linear light value.
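Because the control works in stops, its math is nothing more than multiplication by powers of 2. A quick Python sketch, with an invented over-range value standing in for the window:

def expose(v, stops):
    """Scale a scene-linear value by 2**stops; one stop is one doubling."""
    return v * (2.0 ** stops)

window = 1200.0             # hypothetical over-range value out the window
print(expose(window, -10))  # ~1.17: ten stops down, detail re-enters range
print(expose(1.0, 3))       # 8.0: +3 stops = 2 ** 3 = 8x brighter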

Close-Up: Floating Point Files

As you’ve already seen, there is one class of files that does not need to be converted to linear space: floating point files. These files are already storing scene-referred values, complete with overbright information. Common formats supported by After Effects are Radiance (.hdr) and floating point .tiff, not to mention .psd, but the most universal and versatile is Industrial Light + Magic’s OpenEXR format. OpenEXR uses efficient 16-bit floating point pixels, can store any number of image channels, supports lossless compression, and is already supported by most 3D programs thanks to being an open source format.

After Effects CS4 offers expanded support of the OpenEXR format by bundling plug-ins from fnord software, which provide access to multiple layers and channels within these files. EXtractoR can open any floating point image channel in an EXR file, and Identifier can open the other, non-image channels such as Object and Material ID.
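If you want to verify outside After Effects that an EXR really does store scene-linear, over-range floats, the open source python-openexr bindings make for a quick sanity check. A sketch, with a hypothetical file name:

import array

import Imath
import OpenEXR

exr = OpenEXR.InputFile("treeHDR.exr")  # hypothetical file name
dw = exr.header()["dataWindow"]
width = dw.max.x - dw.min.x + 1
height = dw.max.y - dw.min.y + 1

# Read the red channel as 32-bit floats (half-float files are promoted).
pt = Imath.PixelType(Imath.PixelType.FLOAT)
red = array.array("f", exr.channel("R", pt))

print(width, height, max(red))  # highlights report values well above 1.0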

Mixed Bit Depths and Compander

Most effects don’t, alas, support 32 bpc, although there are dozens that do. Apply a 16 bpc or (shudder) 8 bpc effect, however, and the overbrights in your 32 bpc project disappear—all clipped to 1.0. Any effect will reduce the image being piped through it to its own color space limitations. A small warning sign appears next to the effect to remind you that it does not support the current bit depth. You may even see a warning explaining the dangers of applying this effect.

Of course, this doesn’t mean you need to avoid these effects to work in 32 bpc. It may mean you have to cheat, and After Effects includes a preset allowing you to do just that: Compress-Expand Dynamic Range (contained in Effects & Presets > Animation Presets > Image – Utilities. Make certain Show Animation Presets is checked in the panel menu).

This preset actually consists of two instances of the HDR Compander effect, which was specifically designed to bring floating point values back into LDR range. The first instance is automatically renamed Compress and the second Expand, matching the modes each is set to. Set the Gain of Compress to the brightest overbright value you wish to preserve, up to 100. The values are then compressed into LDR range, allowing you to apply your LDR effect. The Gain (as well as Gamma) of Expand is linked via an expression to Compress so that the values round-trip back to HDR (Figure 11.23).

Figure 11.23. Effects & Presets > Animation Presets > Presets > Image - Utilities includes the Compress-Expand Dynamic Range preset, also known as the Compander (Compressor/Expander). It consists of two instances of the HDR Compander effect, one named Compress and the other, Expand, wired with an expression to undo whatever Compress does. The idea is to place your 8 or 16 bpc effect (such as Match Grain, shown) between the two of them. This prevents clipping of over-range values. If banding appears as a result of Compress-Expand, Gamma can be adjusted to weight the compressed image more toward the region of the image (probably the shadows) where the banding occurs. You are sacrificing image fidelity in order to preserve a compressed version of the HDR pipeline.
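Numerically, the round-trip is easy to model. The Python sketch below is a plausible reading of what Compress and Expand do with their Gain and Gamma parameters, not Adobe's exact implementation:

def compress(v, gain=100.0, gamma=1.0):
    """Squeeze [0, gain] into [0, 1] so an LDR effect cannot clip it."""
    return (v / gain) ** (1.0 / gamma)

def expand(v, gain=100.0, gamma=1.0):
    """Invert compress(), restoring the original HDR range."""
    return (v ** gamma) * gain

hdr_value = 37.5
ldr_safe = compress(hdr_value)  # 0.375: now safe for an LDR effect
restored = expand(ldr_safe)     # 37.5: round-trips exactly

# Without the Compander, an LDR effect simply clips:
clipped = min(hdr_value, 1.0)   # 1.0: the overbrights are gone
print(ldr_safe, restored, clipped)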

Tip

8 and 16 bpc effects clip only the image that they process. Apply one to a layer, and you clip that layer only; apply one to an Adjustment Layer, and you clip everything below it. Add an LDR comp or layer to an otherwise HDR comp in a 32 bpc project, and the rest of the comp remains HDR.

Additionally, there are smart ways to set up a project so that the Compander plays as small a role as possible. As much as possible, group your LDR effects together, and keep them away from the layers that use blending modes, where float values are most essential. For example, apply an LDR effect via a separate adjustment layer instead of directly on a layer with a blending mode. Also, if possible, apply the LDR effects first, then boost the result into HDR range before applying any additional 32 bpc effects and blending modes.

Blend Colors Using 1.0 Gamma

After Effects CS4 contains a fantastic option to linearize image data only when performing blending operations: the Blend Colors Using 1.0 Gamma toggle in Project Settings. This allows you to take advantage of linear blending, which makes Add and Multiply blending modes actually work properly, even in 8 bpc or 16 bpc modes.
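As a back-of-the-envelope illustration of why Add misbehaves in video space (compare Figure 11.18), here is a Python sketch with made-up values, using a simple power-law gamma in place of the actual working-space transfer curve:

GAMMA = 2.2

def to_linear(v):
    return v ** GAMMA

def to_video(v):
    return min(v, 1.0) ** (1.0 / GAMMA)

a, b = 0.5, 0.5  # two midtone layers, video-encoded

# Adding encoded values overshoots, because an encoded 0.5 represents
# far less than half the light; the result blows out instantly.
video_add = min(a + b, 1.0)                         # 1.0, clipped white

# At gamma 1.0 the same Add brightens believably.
linear_add = to_video(to_linear(a) + to_linear(b))  # ~0.68

print(video_add, linear_add)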

The difference is quite simple. A linearized working space does all image processing in gamma 1.0 (here PWS is the project working space and OM is the output module), as follows:

footage -> to linear PWS ->
   Layer ->
      Mask -> Effects -> Transform ->
         Blend With Comp ->
   Comp -> from linear PWS to OM space ->
output

whereas linearized blending performs only the blending step, where the image is combined with the composition, in gamma 1.0:

footage -> to PWS ->
   Layer ->
      Mask -> Effects -> Transform -> to linear PWS ->
         Blend With Comp -> to PWS ->
   Comp -> from PWS to OM space ->
output

(Special thanks to Dan Wilk at Adobe for spelling this out.)

Because effects aren’t added in linear color, blurs no longer interact correctly with overbrights (although they do composite more nicely), and you don’t get the subtle benefits to Transform operations; After Effects’ much maligned scaling operations are much improved in linear floating point. Also, 3D lights behave more like actual lights in a fully linearized working space.

I prefer the linear blending option when working in lower bit depths with no need to manage over-range values; it gives me the huge benefit of more elegant composites and blending modes without forcing me to think about managing effects in linear color. Certain key effects, in particular Exposure, helpfully operate in linear gamma mode.

Output

Finally, what good is working in linear floating point if the output bears no resemblance to what you see in the Composition viewer? Just because you work in 32-bit floating point color does not mean you have to render your images that way.

Keep in mind that each working space can be linear or not: if you work in a linearized color space and then render to a format that is typically gamma encoded (as most are), the gamma-encoded version of the working space is used for output. After Effects spells this out for you explicitly in the Description section of the Color Management tab.

To this day, the standard method for passing around footage with over-range values, particularly if it is being sent for film-out, is 10-bit log-encoded Cineon/DPX. This, too, is converted for you from 32 bpc linear; just be sure to choose the Working Space as the output profile and, in Cineon Settings, to use the Standard preset.

The great thing about Cineon/DPX with a standard 10-bit profile is that it is a universal standard. Facilities around the world know what to do with it even if they’ve never encountered a file with an embedded color profile. As was detailed earlier in the chapter, it is capable of taking full advantage of the dynamic range of film, which is to this day the most dynamic display medium widely available.
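For reference, the conversion behind the Standard preset follows the classic Kodak recipe: 10-bit code values with reference black at code 95, reference white at code 685, 0.002 density per code value, and a 0.6 negative gamma. The Python sketch below uses those published defaults; After Effects' own implementation may differ in its details.

REF_BLACK, REF_WHITE = 95, 685   # classic Kodak reference code values
DENSITY_PER_CODE, NEG_GAMMA = 0.002, 0.6

def cineon_to_linear(code):
    """Map a 10-bit Cineon code value (0-1023) to scene-linear light."""
    gain = 10 ** ((code - REF_WHITE) * DENSITY_PER_CODE / NEG_GAMMA)
    black = 10 ** ((REF_BLACK - REF_WHITE) * DENSITY_PER_CODE / NEG_GAMMA)
    return (gain - black) / (1.0 - black)

print(cineon_to_linear(95))    # 0.0: reference black
print(cineon_to_linear(685))   # 1.0: reference white
print(cineon_to_linear(1023))  # ~13.5: film's over-range headroom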

Conclusion

This chapter concludes Section II, which focused on the most fundamental techniques of effects compositing. In the next and final section, you’ll apply those techniques. You’ll also learn about the importance of observation, as well as some specialized tips and tricks for specific effects compositing situations that re-create particular environments, settings, conditions, and natural phenomena.
