fig3_41_1

When it comes time to light in a 3D program, it can be useful to follow organized steps to achieve aesthetic lighting in an efficient manner. The steps include determining what lights are needed and what types of shadows are necessary. It’s also useful to be familiar with common 3D shaders and how their basic properties function.

This chapter includes the following critical information:

Suggested 3D lighting workflow
Common 3D light types, shadow types, and properties
Common shaders and properties

Photo copyright Rungaroon Taweeapiradeemunkohg / 123RF Stock Photo.

3D Lighting Pipelines and Workflow

Although lighting in a 3D program shares many similarities to lighting for the fine arts, stage, film, video, and photography, it requires its own unique workflow. When the workflow is handled by a team of animators, it’s often referred to as a pipeline. A pipeline is a standardized system of producing complex animations using teams of artists and an array of equipment and software.

For example, a pipeline created for an animated feature or extensive visual effects project generally follows these production pipeline steps:

1. Concept art: Characters, props, environments, and color guides are designed.

2. Storyboarding: The story is broken into specific shots with specific camera placements using 2D drawings or simplified 3D animations. If the 2D drawings are animated and edited into a video, the result is referred to as an animatic.

3. 3D modeling: Characters, props, and environments are constructed in 3D.

4. Texturing: Models are textured.

5. Rigging: Models that require animation are rigged so they can be moved or deformed.

6. Layout: The storyboards are translated to 3D set-ups using the 3D models.

7. Animation: Characters, props, and effects (such as fire and water) are animated.

8. Lighting: Animated shots are lit and rendered.

9. Compositing: If required, shots are composited in 2D to combine and/or fine tune the renders.

Depending on the studio and the scope of the production, some of these steps may overlap or happen simultaneously. In addition, all of these steps require testing and revisions. Compositing may be the domain of the lighter or it may be handled by a separate compositing department.

If an animation is small in scope, one animator may handle multiple tasks. For example, on an independent animation or a commercial production with a limited number of shots, one animator may model, texture, rig, animate, light, render, and composite everything needed for one shot.

Whether an animation is part of a large or small project, it pays to follow certain steps when lighting. These include the collection of light information, an organized approach to placing and adjusting 3D lights, and an efficient method of test rendering. To help facilitate lighting, it helps to understand the difference between light and shadow types and basic shader functionality.

Collecting Light Information

The first step to 3D lighting is to determine how many lights you need, where the lights should be located, and what properties the lights should possess. To derive this information, ask yourself these questions:

What is the context of the lighting? Are you lighting a single, standalone shot or are you lighting one shot of a multi-shot scene? Are you lighting all the shots in the scene or do you need to match your work to other lighters who are undertaking surrounding shots? The answer to these questions determines what reference is required and if any precedents have been set for the lighting you are working on. For example, on a feature animation, you may need to match shots lit by other lighters and follow general guidelines set by the art department through concept art and color guides.

Are there any special considerations? Is there an impetus to light stylistically? Is there a need to light in a special way to communicate story information or establish a mood? These considerations may affect the answers to additional questions.

What is the location of the lighting? Is the location on Earth? If so, the lighting should follow basic qualities of light in Earth’s atmosphere. If not, light may react differently. For example, if you are lighting a scene in space, the lack of air molecules or particulate matter, such as water vapor, prevents a light beam from becoming visible through light scatter (Figure 3.1). This is not to say that lighting should always be scientifically accurate, but lighting generally needs to be perceived as appropriate for a particular location. If the plan is to light stylistically, the precision of the light sources may not be critical. For example, numerous science-fiction films show laser beams and terrestrial explosions occurring in space. Nevertheless, stylistic lighting has its own aesthetic concerns, which are discussed later in this book.

fig3_1
Figure 3.1 The red light of a laser illuminates a lab. The beam itself is visible due to particulate matter in the air, which would otherwise be missing in a vacuum.

Photo copyright lightpoet / 123RF Stock Photo.

What is the location more specifically? Is the location inside? For example, the location may be in a bedroom, a car, or a cavern. Conversely, is the location outside? The specific location affects what lights are available and expected to be present by the viewer.

Does the location exist? Does your lighting need to replicate a real location? As such, do you possess reference in the form of photos or video? If your location does not exist, does it need to match a similar location?

What is the time of day? Is it sunrise, noon, afternoon, sunset, evening, or nighttime? If you combine this information with the location, you can determine what light sources would be generally available at the location in the real world. For example, if the location is a city street, the main source of light during the day is the sun (Figure 3.2). At night, the source of light may be the moon, street lamps, car headlights, electric signs, and so on.

fig3_2
Figure 3.2 One location with two times of day, necessitating different light sources. Note that the nighttime lighting includes lens flares (light scattering through the optical components of the camera lens) and light trails (created by long exposures of moving car headlights and tail lights).

Image sequence adapted from “10th St, Day to Night Time Lapse” by Jeremy Seitz licensed under Creative Commons Attribution 2.0 Generic (CC BY 2.0).

What is the time period? Does the lighting scenario take place in the present? If not, what is the historical time period in which the lighting occurs? The year in which the scene takes place affects the list of available lights. For example, a pre-20th century setting prevents the existence of incandescent light bulbs and requires more extensive use of fire-based lights, such as candles, oil lamps, torches, and so on. Because each type of light has distinct properties, this affects the way in which you light within the 3D program. If the time period has no historical basis or is derived from an imaginary alternative history, you can mix and match light types. For example, you may insert artificial light sources into a location where they would otherwise be historically unavailable. By the same token, you can invent light sources if historical accuracy is not needed. For example, light may arrive from the magic staff of a wizard.

What are the properties of the light sources? After you’ve determined what light sources exist, you can define the lights’ specific properties. For example, the lights may be natural or artificial. Natural light may include the sun, sunlight reflected off the moon, a fire, or a candle. Artificial lights come in many forms, such as incandescent light bulbs, fluorescent light fixtures, neon signs, LED light arrays, and amplified bulbs contained in flashlights or headlamps. Each of these light types possesses a real-world color (courtesy of a specific wavelength), intensity (brightness), focus (parallel, oblique, or random light rays), and shadow type (hard-edged, soft-edged, or hard-to-soft over distance).

Lighting Determination Examples

To practice the technique of determining light sources, you can study existing photographs, film and video stills, or paintings. For example, Figure 3.3 represents an example of simple, real-world 2-point lighting. Questions and answers are included.

fig3_3
Figure 3.3 A man in an office is lit with 2-point lighting. Letters indicate locations of different light sources.

Photo copyright ammentorp / 123RF Stock Photo.

What is the location? An office.

What is the location more specifically? The center of a window-lit room.

Does the location exist? Yes.

What is the time of day? Daytime. The time may be mid-morning or mid-afternoon based on the longish shadows cast by the people in the background. A time closer to noon would create shorter shadows. A time closer to sunrise or sunset would create longer shadows with dimmer, more saturated light.

What is the time period? We can assume this is present day as there are no obvious anachronisms.

What are the properties of the light sources? The light sources are as follows:

A) The sun as a key light. This light arrives from the bank of large windows at the left of frame. The sunlight arrives at a roughly 45 degree angle, creating longish shadows. The sunlight is also angled toward the man so that the light reaches the front of his face, forming split lighting. The light, by the time it reaches the man, is a soft light that does not cast a strong shadow on the man himself. The softness may be a result of the sunlight scattering through curtains or multiple, semi-obscured windows.

B) Fill light. The weaker, secondary light source arrives from the lower-right side of the frame. This is sunlight that has bounced off the floor and surrounding walls. There are no other visible light sources, so we can safely assume that the bounced sunlight is the sole secondary source.
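The shadow-length reasoning used in the time-of-day answer (shorter shadows near noon, longer shadows near sunrise or sunset) can be sketched with simple trigonometry. This is an illustrative approximation, assuming flat ground and a point-like sun, not a rendering formula:

```python
import math

def shadow_length(object_height, sun_elevation_deg):
    """Length of a shadow cast on flat ground by a sun sitting at the
    given elevation angle above the horizon."""
    return object_height / math.tan(math.radians(sun_elevation_deg))

# A 1.8-unit-tall figure: the shadow shortens as the sun climbs.
print(round(shadow_length(1.8, 60), 2))  # high sun: 1.04
print(round(shadow_length(1.8, 20), 2))  # low sun: 4.95
```

The same relationship works in reverse: long shadows in a reference photo imply a low sun, which is how the mid-morning or mid-afternoon estimate above was made.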

As a second example, we’ll look at a complex, fantastic example of multi-point lighting (Figure 3.4).

fig3_4
Figure 3.4 Detail of Antoniusaltar, Triptychon, Mitteltafel: Versuchung des Hl. Antonius, 1505–06, by Hieronymus Bosch. A fantastic scene nevertheless gives clues to lighting. Letters indicate locations of different light sources.

Public domain photos provided by The Yorck Project: 10.000 Meisterwerke der Malerei. DVD-ROM, 2002. ISBN 3936122202. Distributed by DIRECTMEDIA Publishing GmbH.

What is the location? Outdoors, near the outskirts of a city.

What is the location more specifically? Near the ruins of a church.

Does the location exist and what is the time period? Possibly. This may represent the Flemish homeland of the painter as it may have typically looked in the late Medieval time period (minus the monsters and conflagrations).

What is the time of day? Daytime, as indicated by the sky at the upper-right of the painting. However, the raging fire in the background has created so much smoke that part of the landscape is thrust into virtual night.

What are the properties of the light sources? The light sources are as follows:

A) The sun as a key light. A broad, generic light arrives from the center-left of the painting. This is indicated by shadows underneath and around the central characters as well as the general shading of the humans and monsters (where the upper-left side of many characters receives the most light).

B) Fill light. The weaker, secondary light source arrives from the lower-right side of the frame. Lacking any other identifiable source, this is sunlight that has bounced off the ground and other nearby surfaces. You can see this light on the characters that are painted with greater contrast.

C) Background fire as a utility light. Although the fire does not affect the lighting of the foreground, it illuminates (and silhouettes) background buildings.

D) A light beam as a utility light. The narrow beam appears in the interior of a church. Although this could be the focused light of the sun, it seems out of place in this scene and is therefore stylistic.

E) Interior lights as utility lights. The open doors and windows of the background buildings appear illuminated from within. Keeping the time period in mind, you can assume the light is generated by fireplace fires, torches, or oil lamps. Alternatively, the raging fire that plagues the city may have reached the building interiors.

3D Lighting Steps

After you’ve determined how many lights you need, where the lights should be located, and what basic properties the lights possess, you can add them to your 3D scene. Regardless of which 3D program you are using and the exact method by which to create and manipulate the 3D lights, I suggest following these basic steps:

1. Add and position the key light. The key should serve as the most intense light source. Adjust the light intensity so that it does not overexpose the subject (left side of Figure 3.5). Overexposure will cause detail loss on the surface and may cause areas to appear pure white. Activate shadows for the light and adjust the shadow quality to match the type of light source. Adjust the light position to make the shadows fall in an aesthetic manner (right side of Figure 3.5). If a scene requires more than one key light, add and adjust these. Multiple key lights may be necessary if there are several intense lights in a scene, such as overhead light fixtures that are equally strong and distinctly separate.
fig3_5
Figure 3.5 Left: A key light is positioned to illuminate a model. Right: Shadows are activated for the key light. The addition of shadows often requires the repositioning of the shadow-producing light in order to achieve an aesthetic result. With this example, the end result is loop lighting.
2. Add and position the fill light. A fill light may or may not require its own shadows (left side of Figure 3.6). For example, if the fill light emulates a light that bounces off a wall, a shadow may be so soft that it is not necessary to activate it. If the fill light emulates the bulb of a shaded table lamp, a distinct shadow is appropriate. If a scene requires multiple fill lights, add and adjust these. Note that the addition of each new light may require the adjustment of the previous lights. You may need to readjust all the light intensities so you don’t run into overexposure. To judge the contribution of a new light, consider toggling the light’s on/off property. Alternatively, you can assign lights to different 3D layers (if available) and toggle the layer visibility on and off.
fig3_6
Figure 3.6 Left: A fill light is added so that it arrives from the lower-left side of frame at an intensity 1/4 that of the key light, creating a 4:1 lighting ratio. Right: A spot light is added to strike the background to better define the edges of the model. This serves as a variation of 3-point lighting with key, fill, and background lights.
3. Add and adjust any utility lights. Add these lights to improve the overall aesthetic quality or to emulate specific light sources that are not as intense as the key or fill lights. For example, you might add a rim, kicker, hair, or background light to separate the face of a character from the background (right side of Figure 3.6). Alternatively, you might add lights to emulate a non-key, non-fill real-world light source, such as a flickering candle on a background shelf or a waning moon in a background sky.
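The intensity relationship between key and fill described in these steps can be expressed numerically. The sketch below follows the convention used in the Figure 3.6 caption, where a 4:1 lighting ratio means the fill is 1/4 the strength of the key:

```python
def fill_intensity(key_intensity, lighting_ratio):
    """Fill light intensity needed for a desired key:fill lighting ratio.
    A 4:1 ratio means the fill is 1/4 the strength of the key."""
    return key_intensity / lighting_ratio

key = 1.0
print(fill_intensity(key, 4))  # 0.25 — the 4:1 setup from Figure 3.6
print(fill_intensity(key, 2))  # 0.5  — a lower-contrast, flatter look
```

Higher ratios yield more dramatic contrast between the lit and fill sides of a subject; lower ratios flatten the lighting.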

Keep in mind that this is a general guideline and may not work in every lighting situation. Here are a few caveats:

Lighting a model that is untextured may lead to different results when compared to lighting a model that is textured. An untextured model is one that remains assigned to a default shader. A textured model is one that has been assigned to a custom shader that has been mapped with texture bitmaps or procedural textures. Lighting an untextured model may necessitate lighting changes when the model is finally textured. This is due to light intensities appearing different on surfaces that possess different colors, different levels of detail, and different shader qualities such as specularity or reflectivity. For example, the model seen in Figure 3.5 is assigned to a gray-colored default Lambert material. If the model was textured so that it possesses a skin-like quality, the light intensity values would require adjustment. For more information on shaders and the process of texturing, see the remaining sections of this chapter.
The lights and shadows you are able to use are dependent on the 3D renderer you choose. Thus, as you go through this lighting process, you must consider which renderer and which renderer settings are the most appropriate for your lighting task. We will discuss this issue further in Chapter 5.
Occasionally, it may be prudent to start lighting with fill lights or utility lights. For example, if you are lighting a character in the center of a large environment, it may be wise to light the environment first; this will help establish the overall look of the scene, which will help determine what light reaches the character.

Testing 3D Lighting

An important part of the 3D animation process is rendering. When it comes to 3D lighting, you must render to create a final version of the frame or frames. You also have to render to test and adjust your lighting set-up. Although each 3D program offers its own set of features for rendering, here are a few things to keep in mind as you work:

Light with the final composition. Light your scene based on the view of the rendering camera. In other words, don’t light while looking at the view of an arbitrary camera or a camera that has not been positioned with a final composition in mind. If you fail to follow this rule, you may waste time concentrating on surfaces or shadows that are otherwise hidden or out of frame.
If you are lighting an animation with motion, test the lighting on different frames. You may need to adjust the lights based on the motion. For example, if a character is moving about rapidly, you may need to reposition the lights to avoid unattractive shadows or add additional lights to ensure the character remains lit despite his or her movement across the frame.
If possible, light with final shaders and textures. As discussed in the previous section, it may be difficult to gauge the success of a lighting set-up if the models in the scene have not been textured. Ultimately, 3D lights are dependent on the surface qualities provided by shaders. Shaders are discussed in more detail throughout the remainder of this chapter.
Use lighting shortcuts where available. 3D programs often provide various tools and functions to speed up the lighting process. For example, the 3D viewports may support shaded views with approximations of the lighting and shadowing. Although the quality may not be as high as a final render, it can help you place lights more quickly. When it comes time to render a frame, either through a 3D viewport or a dedicated render window, you often have the option to render small regions. Regions allow you to concentrate on areas that are affected by your lighting adjustments without having to calculate the lighting and shadowing for the entire frame. Some render windows or frame buffers support IPR (Interactive Photorealistic Rendering), where the render automatically updates as you adjust lights, shaders, or textures.
Use lower render quality settings when you start. If you are roughing in the positions, intensities, and shadows of your primary lights, consider using a lower resolution or lower render quality to render test frames. This will help speed up the lighting process. As you approach a final lighting setup, you can return the resolution and quality to the requirements of the final output. If you are using a 2D compositing program with a 3D environment, such as Adobe After Effects, you can use a lower proxy resolution in the viewport to speed up the lighting calculations.
Consider fine-tuning the shader properties yourself. If you are working on your own 3D project, it may be useful to adjust the shader properties along with the light qualities to create the best possible lighting result. If you are working with a team at a studio, this may not be possible due to the shared animation pipeline. Nevertheless, it always pays to be familiar with the ways that shaders work in the presence of lights.

SIDEBAR
Working with Color Calibration

In the realm of digital imaging, color calibration is an important consideration. Digital imaging includes digital video, photography, art, graphic design, and 3D animation. All of these digital art forms are reliant on an RGB color model that combines different intensities of red, green, and blue to represent the full range of colors. When a digital image is created, it’s created within a color space, which is a range of colors the image can potentially store. When the image is displayed, however, it may encounter a device that utilizes a different color space. To complicate matters, different devices use different color spaces, which may lead to the image looking different on each device (devices will have a limited color range for one or more of the color channels within the color model). For example, a rendered 3D animation may look one way on a computer monitor and another way on a broadcast television. Color calibration attempts to neutralize this problem by adjusting the various devices to make the image appear consistent.

Color calibration can occur on the device level. For example, you can color calibrate your computer monitor using the operating system or specialized calibration software. It’s also possible to apply color calibration within the software creating the digital images. For example, some 3D programs allow you to activate color calibration so that your work is suitable for output to a particular device (such as a television screen or theater projector). There are generally two different ways to apply calibration in a 3D program:

View Transform Allows you to activate a color space transform in a render viewport, window, or buffer. Activating a view transform does not affect the render’s inherent RGB values. Instead, the transform temporarily converts the native color space to a different color space. For example, most computer systems operate within an sRGB color space. The view transform can convert sRGB to Rec. 709, which is the color space of HDTV (High-Definition Television). Note that some view transforms include a gamma adjustment. Gamma, in the realm of digital imaging, is a power function that adjusts displayed images so that they appear correct for human vision. Gamma-adjusted images appear to have greater contrast and a greater range of values.

Renderer Transform As opposed to temporarily applying a view transform to the render viewport, window, or buffer, you can choose to apply a color space transform at the point of render so that the rendered images are created within the new color space. This option is suitable if you are rendering for a particular set of devices, such as theater projectors.

When discussing color calibration, it’s also important to consider linear color space. Linear color space is a color space where gamma adjustment is not applied. Working in a linear color space allows for a more accurate representation of image values. Although not mandatory for successful lighting, linear color space may be required for accurate results when working with PBR (Physically Based Rendering) systems. Many 3D programs support a linear color space work environment. Switching to such a work environment generally affects the way in which texture bitmaps are interpreted and the way in which renders are displayed and exported. That said, the linear color space provided by a 3D program does not affect the color space used by the display device. Hence, using linear color space demands careful set-up and consideration. Using linear color space incorrectly may lead to inaccurately lit renders that will require additional manipulation outside the 3D program.
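To illustrate the difference between a gamma-encoded and a linear color space, the standard sRGB transfer function can be sketched as follows; the piecewise constants come from the sRGB specification:

```python
def srgb_to_linear(c):
    """Convert an sRGB-encoded channel value (0..1) to linear light,
    using the piecewise sRGB transfer function."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Inverse transform: linear light back to sRGB encoding."""
    if c <= 0.0031308:
        return c * 12.92
    return 1.055 * c ** (1 / 2.4) - 0.055

# A mid-gray of 0.5 in sRGB corresponds to much darker linear light:
print(round(srgb_to_linear(0.5), 4))  # ≈ 0.214
```

This is why lighting math performed directly on gamma-encoded values can produce inaccurate results: the encoded numbers are not proportional to physical light intensity.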

Working with 3D Lights

Much like their real-world counterparts, you can position and aim 3D lights in a 3D program that supports lighting. In addition, you can adjust basic properties such as intensity, color, and shadow quality. Although each 3D program represents its 3D lights in a slightly different way (Figure 3.7), lights within these programs share many common traits, which are discussed in this chapter. Hence, you can apply this knowledge to a broad array of programs, including Autodesk Maya, Autodesk 3ds Max, Autodesk Softimage, Maxon Cinema 4D, and Blender. This also applies to 3D programs that are designed for specialty tasks such as sculptural modeling, effects simulation, industrial design, architectural visualization, and video game design. In addition, the common traits are carried over into 2D compositing programs that offer a 3D environment, such as The Foundry Nuke, Blackmagic Fusion, and Adobe After Effects.

Common 3D Light Types

3D programs generally include a set of common lights that include the following:

Spot light This type of light is named after the real-world counterpart used in film, video, and stage lighting. Its light emanates from a point in space but quickly diverges over distance. The light rays are oblique (neither parallel nor perpendicular), creating a light cone. The cone indicates the outermost vectors of the light rays. At the cone edge, the light intensity drops to 0. When the light hits a surface, it forms a circular or oval spot of light (Figure 3.8).

fig3_7
Figure 3.7 Left: Light icons in Autodesk Maya. Right: Light icons in Blender (the area light gains a rectangular icon when scaled).
fig3_8
Figure 3.8 Left: A spot light illuminates a model. Yellow arrows indicate the direction of the light rays. The light’s cone angle is set to 20 (degrees). Middle: The cone angle is set to 60. Right: The cone angle is set to 5 while the penumbra angle is set to 20. The positive penumbra angle value adds a soft transition from the edge set by the cone angle in an outwards direction. Note that shadows have not been activated.

Model created by SMK National Gallery of Denmark.

You can adjust the cone width to increase or decrease the lit area. Additional spot light properties, such as penumbra angle, control the rapidity with which the light transitions from a non-0 to 0 intensity. For example, a large penumbra value causes the spot of the light to have a soft edge. A 0 penumbra value causes the spot to have a hard (non-soft) edge. The cone of a spot light is usually included as part of the 3D light icon (Figure 3.8). The position and rotation of a spot light icon affects the quality of the light. (Note that position is generally referred to as translation within 3D programs).
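A minimal sketch of how a cone angle and an outward penumbra angle might together determine a ray's intensity follows. Real renderers often use smoother falloff curves, so treat the linear blend here as an illustrative assumption:

```python
def spot_intensity(cone_angle_deg, penumbra_deg, ray_angle_deg):
    """Intensity multiplier for a ray leaving a spot light at
    ray_angle_deg from the light's central axis: full intensity inside
    the cone, a linear falloff across an outward penumbra, 0 beyond."""
    edge = cone_angle_deg / 2.0  # half-angle of the cone
    if ray_angle_deg <= edge:
        return 1.0
    if penumbra_deg > 0 and ray_angle_deg <= edge + penumbra_deg:
        return 1.0 - (ray_angle_deg - edge) / penumbra_deg
    return 0.0

print(spot_intensity(20, 0, 5))    # inside the cone: 1.0 (hard edge)
print(spot_intensity(20, 10, 15))  # halfway through the penumbra: 0.5
print(spot_intensity(20, 10, 25))  # beyond cone plus penumbra: 0.0
```

A penumbra of 0 reproduces the hard-edged spot; increasing it widens the soft transition at the edge of the light pool, matching the behavior shown in Figure 3.8.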

Point light This type’s light emanates from a point in space. The light is omni-directional; that is, the light rays fan out in all directions (Figure 3.9). You can use this type to emulate position-specific artificial light sources like light bulbs or position-specific natural light sources like candles. The translation of the point light icon affects the light quality but the light’s rotation has no impact. Point lights are sometimes called omni lights. Note that some renderers treat point lights like emissive spheres, where they have a distinct size in XYZ space; this variation is more realistic and more accurately matches similar real-world light sources.

Directional light This type creates parallel rays of light as if arriving from an infinitely distant light source. You can use this light to emulate the sun or moon. The rotation of a directional light affects its light quality but its translation has no impact. The light icon indicates the direction the light rays are traveling through the use of arrows or directional lines (Figure 3.10). Directional lights are sometimes referred to as sun, infinite, or direct lights.
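The way light direction controls surface brightness (and why the unlit sides in Figures 3.9 and 3.10 go dark) can be sketched with the standard Lambertian diffuse term, where brightness depends on the angle between the surface normal and the direction toward the light:

```python
def lambert(normal, light_dir, intensity=1.0):
    """Diffuse (Lambertian) shading term. normal and light_dir are unit
    vectors; light_dir points from the surface toward the light.
    Surfaces facing away from the light receive zero light."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return intensity * max(n_dot_l, 0.0)

# Surface facing the light head-on vs. facing directly away:
print(lambert((0, 1, 0), (0, 1, 0)))   # 1.0 — fully lit
print(lambert((0, -1, 0), (0, 1, 0)))  # 0.0 — the unlit back side
```

For a directional light, light_dir is the same everywhere in the scene (only rotation matters); for a point light, it is recomputed per surface point from the light's position.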

fig3_9
Figure 3.9 A point light illuminates a model. The light is placed in front of the model. Because no other lights are present, the back side of the model is dark. Note that shadows have not been activated.
fig3_10
Figure 3.10 A directional light illuminates a model. The light icon is drawn as a cluster of arrows with the arrow heads indicating the direction the light rays travel. The light reaches all points within the scene along a specific vector. Hence, the rectangular ground plane is completely lit. However, the side of the sculpture that is opposite the light direction remains unlit.

Ambient light This light type produces an intensity that is equal at all points in the scene (Figure 3.11). Hence, the light’s translation does not affect the light quality (in fact, some programs do not bother to create an ambient light icon). Ambient lights are sometimes used as weak fill lights. Ambient lights are sometimes called radial or flat lights. In general, ambient lights have technical limitations, such as the inability to cast shadows or interact with specific shader functions, such as bump mapping.

Area light This light type is defined by a rectangular shape and acts like an array of diverging directional lights or a cluster of point lights generating rays in a 180 degree sphere. Area lights don’t create hard-edged rectangular spots of light due to the overlapping of light rays (Figure 3.12). In addition, the larger the area light, the softer the edge of the light pool. You can use area lights to emulate broad light sources that are nevertheless confined by a shape. For example, you might use an area light to recreate the light arriving through a window, light arriving from a theater marquee, or light generated by a computer monitor. This is one of the few light types that is affected by scale changes. For example, you can use an area light to emulate a neon tube by making the light icon long and narrow. (In general, only X and Z scale changes alter the light.) Area lights are also referred to as light boxes. Note that some renderers support area lights with different shapes including discs and spheres.
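One common way renderers approximate an area light is to sample its rectangular surface at many points and average the contributions, which produces the soft, overlapping pool described above. A simplified sketch, assuming a fixed grid in the XY plane (production renderers typically sample stochastically):

```python
def area_light_samples(center, width, height, nx=4, ny=4):
    """Approximate a rectangular area light lying in the XY plane as a
    grid of nx * ny point samples. Averaging the lighting contribution
    of each sample produces a soft-edged pool of light."""
    cx, cy, cz = center
    samples = []
    for i in range(nx):
        for j in range(ny):
            x = cx - width / 2 + width * (i + 0.5) / nx
            y = cy - height / 2 + height * (j + 0.5) / ny
            samples.append((x, y, cz))
    return samples

pts = area_light_samples((0, 0, 5), 2.0, 1.0)
print(len(pts))  # 16 point samples stand in for the light's surface
```

Scaling the rectangle spreads the samples over a larger area, which is why a larger area light yields a softer pool edge.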

fig3_11
Figure 3.11 An ambient light equally illuminates all parts of a model surface. Although useful as a fill, ambient lights are unable to function as a key due to their lack of directionality.
fig3_12
Figure 3.12 An area light creates a soft light pool. The icon “tail” indicates the light direction. The yellow arrows represent a small portion of the overlapping light rays generated by the light.

SIDEBAR
Using Light Decay

Thus far, descriptions of lights assume that there is no light decay. Light decay represents the rapidity with which the light transitions from maximum intensity to 0 intensity over distance. This is also known as light falloff or light attenuation. These terms are somewhat confusing as real-world light does not lose energy or disappear as it travels through a vacuum. Instead, light and other electromagnetic radiation diverges from its source; over distance, the radiation spreads over a greater and greater area, so there is less radiation at any given point in space.

If light decay is not activated for a 3D light, the light’s intensity fails to change over distance and is as strong at 10 units as it is at 1,000,000 units. If light decay is activated, there is generally a way to control the decay rapidity. For example, you might switch to a mathematical decay formula such as quadratic. Quadratic decay uses an inverse square law, which describes light and other electromagnetic radiation in the real world whereby the radiation intensity is inversely proportional to the square of the distance from the radiation source (left side of Figure 3.13). The law uses the formula light intensity = 1/distance². In contrast, linear decay may be considered stylized, as its formula is intensity = 1/distance (right side of Figure 3.13). More aggressive decay may use a cubic formula where intensity = 1/distance³.
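The decay formulas above can be compared directly. A small sketch, using the same 1/distance^n convention as the text:

```python
def decayed_intensity(base, distance, decay="quadratic"):
    """Light intensity after distance-based decay. 'none' matches a 3D
    light with decay disabled; 'quadratic' follows the real-world
    inverse square law; 'linear' and 'cubic' are the stylized variants."""
    if decay == "none":
        return base
    exponent = {"linear": 1, "quadratic": 2, "cubic": 3}[decay]
    return base / distance ** exponent

# At 4 units from the light, the formulas diverge sharply:
for mode in ("none", "linear", "quadratic", "cubic"):
    print(mode, decayed_intensity(1.0, 4.0, mode))
# none 1.0, linear 0.25, quadratic 0.0625, cubic 0.015625
```

Note that with these formulas, intensity exceeds the base value at distances under 1 unit, which is one reason 3D programs often pair decay with a minimum distance or scale adjustment.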

fig3_13
Figure 3.13 Left: Graphic representation of the inverse square law. Light rays diverge over a greater and greater area. Right: Three identical spot lights illuminate a plane. The top light has no decay, keeping light intensity consistent over distance. The middle and bottom lights lose intensity over distance due to the application of light decay formulas.

Left graphic by Borb licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0).

Common 3D Light Properties

Although the properties that control 3D lights in various 3D programs may have different names, their basic functionality remains the same (Figure 3.14).

fig3_14
Figure 3.14 Clockwise, from top left: A partial set of spot light properties, as seen in Autodesk Maya, Blender, Blackmagic Fusion, and Adobe After Effects.

The most useful properties follow:

Transforms Lights generally carry transforms that include translation (position), rotation, and scale. Depending on the light type, some of these transforms have no impact on the light quality. For example, translation affects a point light (but rotation does not), while rotation affects a directional light (but translation does not). See the previous section for additional examples.

Intensity / Energy Lights include a property to control the light’s brightness. If this property is set to 0, the light is essentially turned off. Some programs, such as Autodesk Maya and Blender, support negative light values that reduce the strength of other, overlapping lights.

Color Lights include a property to change the light color, which allows the light to mimic real-world light sources with different wavelengths. For example, you might change the light color of a key to orange to mimic a sunset. Light color is generally multiplied by the light intensity, so that both properties affect the overall light strength. For example, if the light intensity is 1.0 but the light color is set to 0.5 gray, the end strength is 0.5 (1.0 × 0.5). Light color properties are usually defined in RGB, where there is a value for red, green, and blue. Some lights accept color temperature values measured in kelvin.
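The intensity-times-color multiplication can be sketched as a one-line calculation (the function name is hypothetical):

```python
def light_strength(intensity, color_rgb):
    # Final per-channel strength is intensity multiplied by color.
    return tuple(intensity * c for c in color_rgb)

# A 0.5 gray color halves the effective strength of a 1.0-intensity light.
print(light_strength(1.0, (0.5, 0.5, 0.5)))  # (0.5, 0.5, 0.5)
```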

Shadows Many lights can cast shadows and offer a means to activate the shadows. There are several different methods of generating shadows in 3D programs—these are discussed in the next section.

Decay Rate / Decay Type / Attenuation Some lights support light decay and different methods to control the rate of decay (see the previous sidebar). In addition, a Falloff Distance / Distance property may be included so you can set the distance at which the decay begins.

Emit Specular / Emit Diffuse Some lights allow you to turn on or off the lighting functions for diffuse lighting calculations and specular lighting calculations. In this situation, diffuse refers to diffuse reflectivity, where light reflects from the surface, thus making the surface color visible. Specular refers to specular reflections that create specular highlights, which are cohesive reflections of bright light sources. Specular calculations are necessary to create reflections when ray tracing.

Different light types may carry different sets of properties. For example, spot lights include additional properties to control their light cones:

Cone Angle / Hotspot This sets the width of the cone.

Penumbra Angle / Cone Feather This sets the width of the transition from maximum intensity to 0 intensity at the cone edge, as seen on surfaces the light strikes. In general, if this property is positive, the softness extends outwards; if this property is negative, the softness extends inwards.

Cone Falloff / Dropoff Some spot lights include properties to control the light decay from the light center to the cone edge. This functions independently of the penumbra angle.

Barn Doors / Square Some spot lights carry the option to turn the circular cone into a rectangular one. This is apparent when the light creates a rectangular light pool on surfaces it strikes.

Note that some terms may have different connotations in different programs. For example, in 3ds Max, Falloff sets the size of the penumbra angle, while in After Effects, Radius sets the penumbra angle and Falloff controls the light decay.
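To illustrate how a cone angle and a positive penumbra angle might interact, here is a simplified sketch (a hypothetical falloff function, not taken from any particular program; real spot lights blend the transition more smoothly than this linear fade):

```python
def spot_attenuation(angle, cone_angle, penumbra_angle):
    """Hypothetical spot light falloff: full intensity inside the cone,
    a linear fade across a positive penumbra, and zero outside."""
    inner = cone_angle / 2.0                  # half-angle of the full-intensity cone
    outer = inner + max(penumbra_angle, 0.0)  # positive penumbra extends outwards
    if angle <= inner:
        return 1.0
    if angle >= outer or penumbra_angle <= 0.0:
        return 0.0
    return 1.0 - (angle - inner) / (outer - inner)

# A 40-degree cone with a 5-degree penumbra: full intensity on-axis,
# half intensity midway through the penumbra, darkness outside.
print(spot_attenuation(0.0, 40.0, 5.0))    # 1.0
print(spot_attenuation(22.5, 40.0, 5.0))   # 0.5
print(spot_attenuation(30.0, 40.0, 5.0))   # 0.0
```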

Shadow Variations Among Lights

Shadows are a critical part of lighting. Every real-world light source produces shadows when encountering opaque or semi-opaque objects. As discussed, a shadow is an area that receives no light. In fact, the shadowed area is a three-dimensional volume behind the object opposite the direction the light is traveling. That said, we usually encounter two-dimensional shadows on surfaces as the atmosphere is transparent (left side of Figure 3.15). However, if fog, smoke, haze, or similar participating media (suspended particles that reflect light) is present in the air, you can see the three-dimensional shadow form (middle and right side of Figure 3.15).

fig3_15
Figure 3.15 Left: A clearly-defined two-dimensional shadow is cast on the ground by the sun. Middle: Three-dimensional shadows are revealed as the sun casts shadows through haze surrounding trees. The shadows appear as dark streaked lines. Right: A three-dimensional shadow is formed underneath a sphere when 3D fog and a shadow-casting spot light are used.

Left photo: Copyright kasto / 123RF Stock Photo. Middle photo: Copyright Michal Boubín / 123RF Stock Photo.

The quality of a shadow varies with the type of shadowing light. This is true whether the light is in the real world or in a 3D program. For example, the real-world sun and 3D directional lights produce parallel shadows (Figure 3.16). Note that the parallel quality may be difficult to detect in the real world due to perspective (as is the case with the middle photo in Figure 3.15). Technically speaking, the sun is an omni-directional light source, sending light rays in all directions out into space; however, only a narrow band of rays reaches Earth, making them, for practical purposes, parallel.

fig3_16
Figure 3.16 Left: Parallel shadows cast by the sun. Right: Parallel shadows cast by a directional light.

Left photo: Copyright Ruud Morijn / 123RF Stock Photo.

Real-world spot lights and unfocused light bulbs produce oblique (non-parallel and non-perpendicular) shadows, as do their 3D spot light and point light counterparts (Figure 3.17). Note that the resulting 3D shadows may be hard-edged unless special steps are taken. The hard edge quality is a result of the spot or point light producing light rays from a single point in space. This prevents the overlapping of rays at the shadow edge. To create shadows with soft edges or edges that degrade over distance, you must use a broad light. If you are using ray trace shadows, you can emulate this by increasing the light’s width or radius property.

fig3_17
Figure 3.17 Left: Oblique shadows cast by a spot light with a wide cone. Middle: The same ray trace shadows are given a large radius property value, allowing the shadows to become softer over distance. Right: The spot light’s cone angle is reduced and the light is moved farther away from the cylinders, thus creating longer and less oblique shadows.

Alternatively, to create shadows that soften over distance, you can use an area light. Area lights, due to their array-like structure with myriad light rays that overlap, produce soft-edged shadows by default. The edge quality changes based on the size of the area light and its distance from the point being shadowed (Figure 3.18).

fig3_18
Figure 3.18 An area light, set close to the cylindrical objects, creates shadows that soften over distance. Note the degree to which the shadows diverge from each other. If the area light were placed farther away, the shadows would become more parallel.

Shadow Approaches

When it comes to generating shadows, there are several methods employed by 3D programs:

Depth Map This shadow type renders a depth map from the point-of-view of the shadow-producing light. The distances of objects from the light are encoded as scalar values and appear in grayscale (Figure 3.19). The depth map is used by the renderer to determine what lies in shadow and what does not. Depth map shadows have the advantage of being efficient; however, they have quality limitations as they are dependent on a rendered map with a fixed resolution. They also allow for less control of the shadow edge, where the shadow edge is equally hard or soft along its entire length. Many light types are able to produce depth map shadows, as depth maps are perhaps the most common shadowing method provided by 3D programs. Depth maps are also referred to as depth buffers or Z-buffers. Depth map shadows generally have two main properties: resolution, which sets the depth map bitmap size, and a second property to filter the shadow edge and control its hardness or softness. The name of the second property has many variations—for example, filter size, sample range, and softness.
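The core depth map comparison can be sketched as follows. (This is a simplified illustration; real renderers filter multiple samples to soften the shadow edge, and the map coordinates come from projecting the surface point into the light's view.)

```python
def in_shadow(depth_map, u, v, depth_from_light, bias=0.001):
    """Hypothetical depth map lookup: a surface point is shadowed if
    something nearer to the light was recorded at the same map texel.
    The small bias prevents surfaces from shadowing themselves."""
    stored = depth_map[v][u]  # nearest occluder depth seen by the light
    return depth_from_light > stored + bias

# A 2x2 depth map; the light "sees" an occluder at depth 0.3 in one texel.
depth_map = [[1.0, 0.3],
             [1.0, 1.0]]
print(in_shadow(depth_map, 1, 0, 0.8))  # True: point lies behind the occluder
print(in_shadow(depth_map, 0, 0, 0.8))  # False: nothing closer to the light
```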

fig3_19
Figure 3.19 A depth map of a latticed cube, which is a grayscale view from a shadow-casting light. With this variation, surface points closer to the light are rendered darker.

Graphic by Dominicos licensed under the Creative Commons Attribution-Share-Alike 3.0 Unported (CC BY-SA 3.0).

Ray Trace This shadow type is supported by ray tracing renderers. Ray tracing traces the path of rays through a camera image plane until they intersect 3D surfaces. The rays reflect off the surfaces, transmit through surfaces, or are absorbed by the surfaces (that is, the rays are killed off), based on the properties of the shaders assigned to the surfaces. Hence, ray tracing is able to create accurate reflections and refractions. Ray trace shadows are generated by shooting shadow rays from the point of a surface intersection to a shadow-casting light. If the shadow ray encounters a surface on the way to the light, then the original surface point is known to be within a shadow (Figure 3.20). Ray trace shadows are more computationally expensive than depth map shadows, but are generally more accurate. In addition, they are not dependent on an arbitrary pixel resolution. Note that ray trace renderers work in a direction opposite that of the real world. In the real world, photons are generated by a light, reflect off surfaces, and eventually reach the camera sensor or viewer’s eye. Nevertheless, the backwards ray tracing method is mathematically efficient, as it does not waste energy on light rays that reflect off surfaces and never reach the camera due to their unique vectors. For an example of 3D ray trace shadows, see Figures 3.16 and 3.17 in the previous section. Ray trace shadow functions generally include properties to control the virtual width or radius of the shadow-casting light (sometimes called radius, angle, spread, or soft size) and overall quality (sometimes called shadow rays, samples, or quality).
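A minimal shadow-ray test against sphere occluders might look like the following sketch (illustrative only, not production renderer code; spheres are represented as center/radius pairs):

```python
import math

def shadow_ray_hit(point, light_pos, spheres):
    """Returns True if any sphere blocks the segment from a surface
    point to the light; each sphere is a (center, radius) pair."""
    to_light = [l - p for p, l in zip(point, light_pos)]
    dist = math.sqrt(sum(d * d for d in to_light))
    dirn = [d / dist for d in to_light]  # unit shadow-ray direction
    for center, radius in spheres:
        oc = [p - c for p, c in zip(point, center)]
        # Coefficients of the ray-sphere quadratic (a = 1 for a unit direction).
        b = 2.0 * sum(o * d for o, d in zip(oc, dirn))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0.0:
            continue  # ray misses this sphere
        t = (-b - math.sqrt(disc)) / 2.0
        if 1e-6 < t < dist:  # occluder lies between point and light
            return True
    return False

# A sphere directly between the point and the light casts a shadow.
print(shadow_ray_hit((0.0, 0.0, 0.0), (0.0, 10.0, 0.0),
                     [((0.0, 5.0, 0.0), 1.0)]))  # True
```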

fig3_20
Figure 3.20 A simplified representation of ray trace rendering and ray trace shadowing. Graphic by Henrik licensed under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0).

PBR (Physically Based Rendering) I’ll use this category to describe advanced lighting and rendering systems that take into account light bounce and color bleed. These systems produce shadows automatically as light rays and/or virtual photons are traced through a scene, accounting for reflections, transmissions, and absorptions when encountering surfaces. As such, specific shadow-casting lights are not required (although the systems also support depth map and ray trace shadows). However, if photon-tracing is necessary, 3D lights that can generate photons are required. These rendering systems are discussed in more detail in Chapter 5.

Renderer Variations 3D renderers often offer specific variations of common shadow types with additional controls to add more flexibility, efficiency, or render quality. For example, Solid Angle Arnold provides extra light controls that define shadow quality. The host 3D programs include documentation for using these variations.

“Shadow owes its birth to light.”—John Gay

Specialized 3D Light Types

In addition to spot, point, ambient, directional, and area lights, other specialized 3D lights may be provided by a 3D program. A few of these are discussed here:

Mesh Some 3D renderers are able to convert user-defined geometry into a light source. This may be useful for emulating a frosted light bulb, a translucent lamp sconce, a fluorescent bulb formed into a ring, or a complex neon tube (Figure 3.21). In this situation, light rays are shot out from the geometry surface at perpendicular angles (along the surface normals).

fig3_21
Figure 3.21 A curly, polygonal tube is converted to a light source by the Arnold renderer in Autodesk Maya. By default, quadratic light decay is used. Soft shadowing is intrinsic to the geometric light source due to a multitude of overlapping light rays.

Cylindrical This light type takes on the form of its namesake. It’s related to a mesh light in that it acts like an array of directional lights aligned to the cylinder surface normals. You might use this light to emulate a straight fluorescent bulb.

Environment This light type is presented as a 180 or 360 degree sphere. The sphere acts as an array of directional lights that point inward. You can use the light to emulate a daytime sky. The light may be considered to exist infinitely far away or, if the option is present, can be used with specific scale and translation within the 3D scene. If you are employing an advanced lighting or rendering system, you can base the light intensity and color on a bitmap texture that is mapped to the sphere. Environment lights may be included with the standard light set or may be provided by the renderer as a special shader (Figure 3.22). 180 degree environment lights are also known as hemi, dome, skydome, or skylight lights. This light type is demonstrated in Chapter 5.

fig3_22
Figure 3.22 Left: An NVIDIA Iray IBL (Image-Based Lighting) sphere surrounds a primitive cube. The IBL shape is provided by a light shader and is designed to derive intensity and color information from a mapped bitmap image. The sphere is designed to surround the scene to be lit. Middle: Same IBL sphere with viewport shading turned on, revealing the mapped bitmap image. Right: A hemi light icon appears above a primitive cube in Blender. The light illuminates anything below its umbrella-like half-sphere shape. The hemi light is one of the standard Blender light types.

Photometric This light type is designed to be physically accurate by using data from real-world lights (Figure 3.23). Photometric lights reproduce light intensity over distance and light spread, which is the light pattern a specific light bulb makes when housed in a particular enclosure (e.g. wall sconce, flashlight housing, lamp housing, and so on). Photometric lights receive their data from imported IES (Illuminating Engineering Society) profile text files. The files are made available to the public by light bulb and light housing manufacturers. Photometric lights are useful when recreating real-world locations or as part of an architectural design process.

fig3_23
Figure 3.23 Three side-by-side photometric lights using three different IES profiles create three distinct light spreads. The IES files carry data derived from specific real-world light bulbs and housings.

Volume These lights are similar to environment lights in that they illuminate any surface within their volume shape (Figure 3.24). The most common volume shape is a sphere. Volume lights offer an alternative to using light decay to control where the light’s illumination reaches within a scene. Note that volume lights and volumetric lighting are not strictly the same. Volumetric lighting allows light to be seen within a 3D volume. You can create volumetric lighting by activating light fog, environmental fog, or environmental light scattering, which simulates participating media. These options may be available through a particular light type, such as a spot light, with a specialized shader, or through a renderer.

fig3_24
Figure 3.24 Left: A volume light intersects a primitive sphere and sits above a primitive plane. Right: With the resulting render, only the part of the sphere that intersects the volume light receives illumination. The plane remains unlit because it does not intersect the light.

Renderer Variations Some renderers offer their own sets of lights. Although the lights often take the form of common light types (area lights, spot lights, and so on), they are designed to create more accurate lighting results. Some of these light types support PBR techniques. PBR is discussed in detail in Chapter 5.

3D Light Interaction with Shaders

3D lighting is dependent on 3D surfaces. The surfaces are provided by 3D geometry that has been assigned to shaders. Shaders mathematically define surface qualities, which include the surface’s color and whether the surface is matte-like, glossy, reflective, refractive, smooth, and/or bumpy (Figure 3.25). When lighting, it’s important to understand how the assigned shaders are functioning. In fact, it may be necessary to adjust shader properties to produce the best lighting results.

fig3_25
Figure 3.25 Icons for shaders discussed in this section, as seen in Autodesk Maya.

Any given 3D program provides a number of different shaders, each with different strengths, weaknesses, and different sets of properties. Brief descriptions of common shaders follow. Note that shaders are sometimes referred to as materials.

Lambert This shader type is simple, offering a diffuse color property (left side of Figure 3.26). Lambert shaders are designed to mimic matte surfaces, where light is scattered diffusely—that is, in a random fashion due to tiny surface imperfections. Real-world Lambertian surfaces include paper and cardboard. Note that Lambert shaders are unable to support reflectivity due to the lack of specularity. Lambert shaders are named after Johann Heinrich Lambert (1728–1777), who studied light diffusion. Oren-Nayar shaders are also designed for diffuse surfaces but increase the resulting accuracy and are ideal for creating powdery surfaces.
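At its heart, Lambertian shading scales the diffuse color by the cosine of the angle between the surface normal and the light direction (N·L). A minimal sketch, assuming unit-length vectors (the function name is illustrative):

```python
def lambert(normal, light_dir, diffuse_color, light_intensity=1.0):
    """Lambertian diffuse term: brightness scales with N.L, clamped at 0
    so surfaces facing away from the light receive nothing."""
    n_dot_l = max(sum(n * l for n, l in zip(normal, light_dir)), 0.0)
    return tuple(light_intensity * n_dot_l * c for c in diffuse_color)

# Light hitting the surface head-on yields the full diffuse color.
print(lambert((0, 1, 0), (0, 1, 0), (0.8, 0.2, 0.2)))  # (0.8, 0.2, 0.2)
# Light grazing at 90 degrees contributes nothing.
print(lambert((0, 1, 0), (1, 0, 0), (0.8, 0.2, 0.2)))  # (0.0, 0.0, 0.0)
```

Because the result depends only on the light direction and surface normal, the shading does not change as the camera moves, which is why Lambert shaders cannot produce specular highlights.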

fig3_26
Figure 3.26 Left: Model assigned to Lambert shader. Middle: Model assigned to Phong shader. Right: Model assigned to Blinn shader. You can adjust Phong and Blinn shaders so they are virtually identical. Note that the reflectivity for the Phong and Blinn shaders has been set to 0.

Phong / Blinn Phong shaders combine several basic surface properties, including diffuse color and specularity (middle of Figure 3.26). The shader creates specular highlights, which appear as bright “hot spots” that mimic the intense reflections of bright light sources. Ray tracing is not required to create the specular highlights, as they are only an approximation of specular reflectivity. Blinn (or Blinn-Phong) shaders build upon the Phong model with increased mathematical efficiency and their own variation of specular highlight controls (right side of Figure 3.26). Phong and Blinn shaders can emulate a wide range of real-world materials; however, they are not physically accurate. Phong shaders are named after Bùi Tường Phong and Blinn shaders are named after Jim Blinn, both 3D researchers and programmers.
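The Blinn-Phong highlight can be sketched as the dot product of the surface normal and a "half vector" between the light and view directions, raised to a shininess exponent. A simplified illustration, assuming unit-length input vectors:

```python
import math

def normalize(v):
    m = math.sqrt(sum(x * x for x in v))
    return tuple(x / m for x in v)

def blinn_phong_specular(normal, light_dir, view_dir, shininess):
    """Blinn-Phong highlight term: (N.H) ** shininess, where H is the
    half vector between the light and view directions. Higher shininess
    values produce smaller, tighter hot spots."""
    half = normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))
    n_dot_h = max(sum(n * h for n, h in zip(normal, half)), 0.0)
    return n_dot_h ** shininess

# Light and view symmetric about the normal: the brightest highlight.
print(blinn_phong_specular((0, 1, 0), (0, 1, 0), (0, 1, 0), 32))  # 1.0
```

Unlike the Lambert term, this result changes as the camera moves, which is why specular highlights slide across a surface when the view shifts.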

Cook-Torrance This shader type increases the realism of specular highlights by adding a Roughness property that represents microfacets on a surface. Microfacets are tiny surface imperfections. If microfacets are aligned, a surface appears shiny. If the microfacets are randomly oriented, the surface appears diffuse and matte-like. The shader type is named after Robert L. Cook and Kenneth E. Torrance.

Fresnel This shader type employs more accurate reflections by taking into account the Fresnel effect. The effect occurs in the real world, whereby the strength of a reflection is dependent on the viewing angle (Figure 3.27). The effect is particularly evident with transparent surfaces. If the viewing angle is large, the reflection is strong. You can see this phenomenon when looking across a lake (Figure 3.28). If the viewing angle is small, the reflection is weak. You can see this phenomenon when looking down at the edge of the same lake. Where the reflection is weak, you can see the lake bottom due to light transmission and refraction through the water (see the next section for information on refractive indexes). Note that a reflected light ray makes the same angle with the surface normal as the incoming, incident light ray (Figure 3.29). Hence, when reflections occur on smooth surfaces such as mirrors or calm water, the reflections are not distorted and do not change in scale. See Figure 3.34 in the next section for an example of a 3D Fresnel render.

fig3_27
Figure 3.27 A viewing angle, θ, is the angle between the surface normal and a vector drawn from the viewer to the surface.
fig3_28
Figure 3.28 Fresnel reflections appear on a calm lake.

Photo copyright Sara Winter / 123RF Stock Photo.

fig3_29
Figure 3.29 The angle between the arriving, incident light ray and the surface normal is identical to the angle between the surface normal and reflected light ray.

A shader that supports Fresnel reflections is not necessarily named Fresnel. However, a shader that supports such reflections will carry several properties that determine whether Fresnel reflections are off or on and what the reflectance strength is when the viewing angle is perpendicular to the surface normal (90 degrees) and/or parallel to the surface normal (0 degrees). Fresnel shaders are useful for emulating transparent surfaces such as glass or water. This shader type is named after physicist Augustin-Jean Fresnel (1788–1827), whose mathematical work on light has become known as the Fresnel equations.
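In practice, many shaders approximate Fresnel reflectance with Schlick's formula rather than the full Fresnel equations. A brief sketch:

```python
def schlick_fresnel(cos_theta, f0):
    """Schlick's approximation of Fresnel reflectance.
    cos_theta: cosine of the angle between the view direction and the
               surface normal (1.0 = looking straight down at the surface).
    f0: reflectance at normal incidence (roughly 0.02 for water and
        0.04 for common glass)."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Looking straight down at water: reflectance stays near f0 (weak).
print(schlick_fresnel(1.0, 0.02))  # 0.02
# Grazing view across the water: reflectance approaches 1.0 (strong).
print(schlick_fresnel(0.0, 0.02))
```

This reproduces the lake example above: the reflection is faint when you look down at the water's edge and nearly mirror-like when you look across the surface.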

Ward / BRDF A Ward shader uses BRDF (Bidirectional Reflectance Distribution Function) to determine how much light energy is reflected off a surface. In the real world, the amount of light energy reflected from a surface is equal to or less than the energy carried by the incident light ray (this is known as the conservation of energy). A Ward shader generally includes a Slope property that emulates microfacets and determines if the surface appears matte-like or shiny. The Slope property allows for extremely small and sharp-edged specular highlights. Note that BRDF creates its own form of view-dependent reflectivity, much like a Fresnel shader. BRDF and Fresnel properties may be included within the same shader, although they are generally not used together. The Ward shader is named after Gregory J. Ward. See Figure 3.34 in the next section for an example of a BRDF 3D render.

Anisotropic When the Ward shading method is included in a complex shader, it’s often the isotropic variation. As such, the rotation of the surface in reference to the camera does not change the shape of the specular highlight. However, when the Ward anisotropic variation is employed, the surface rotation changes the highlight shape. This shader type adds X and Y properties to control the orientation of the highlight. Anisotropic shaders can produce elongated highlights and are suitable for surfaces such as brushed metal, grooved plastic, hair, feathers, and rippled water (Figure 3.30).

fig3_30
Figure 3.30 Left: Anisotropic specular highlights appear as light bands in dark hair. Right: A 3D render creates anisotropic highlights appropriate for a grooved metal disc. Note that the highlights are perpendicular to the direction of hairs and grooves.

Left photo: Copyright victorias / 123RF Stock Photo.

SSS (Sub-Surface Scattering) SSS shaders create translucence by diffusely scattering light through a surface. In contrast, transparency allows light to pass through a surface without scattering (although there is a change in light speed, which is discussed in the next section). You can see translucence when you place a light behind skin, soap, wax, plastic, and paper (Figure 3.31). SSS shaders replace the diffuse color component of a render with averaged surface values from a light map. A light map is a special bitmap texture that stores surface brightness information.

fig3_31
Figure 3.31 Left: A rice paper screen is translucent, allowing light to scatter diffusely through its surface. A shadow of a tree blocks some of the light. Right: A 3D render of a soap bar uses a SSS shader. The only light lies behind the geometry out of view of the camera.

Left photo: Copyright Hans Slegers / 123RF Stock Photo.

Monolithic and Layered Monolithic shaders combine a set of basic shader algorithms within a single shader structure. For example, the Arnold Ai Standard Surface shader utilizes Cook-Torrance, Ward anisotropic, and SSS shader functions (among others). The goal of a monolithic shader is to provide the maximum amount of control to produce physically-accurate shading results. A layered shader, such as the NVIDIA mental ray MILA, is monolithic but allows the user to pick and choose which shader functions are available; for example, the user can choose to ignore refractivity or select between direct and indirect (bounced) diffuse lighting contributions. Note that some 3D programs allow you to choose different shaders to define the diffuse color and specular highlight. For example, in Blender you can choose an Oren-Nayar, Fresnel, or Lambert shader to define the diffuse color and a Blinn, Phong, Cook-Torrance, or Ward isotropic shader to define the specular highlight.

Common Shader Properties

Despite the wide array of available shaders, many shaders share common properties. These are described here.

Diffuse Color Sets the base color of the surface. The base color is the color of the surface without reflections under white light. You can map this property with a bitmap texture to create color variations across the surface. The property controlling diffuse color is often named Color. However, some shaders carry a separate Diffuse property that controls the degree of diffuse light scattering, where some scattered light does not return to the camera or viewer. The lower the Diffuse value, the more light is randomly reflected away from the camera and the darker the surface appears. The higher the Diffuse value, the less light is randomly reflected away from the camera and the brighter the surface appears. Diffuse color is referred to as albedo with PBR systems; albedo does not include lighting information.

Ambient Color Determines the color of the surface when lit by ambient light but no direct light. In other words, this is the color of the surface that is in shadow. This property is usually black by default, which equates to ambient light with 0 intensity. However, you can change the color to a non-black color to emulate ambient light arriving from all points in a scene.

Specularity / Glossiness / Metal Shaders that support specularity provide a set of specular properties that control the size, intensity, and color of the specular highlights or specular reflections. Alternatively, the shader may provide a glossiness set of properties. Glossiness defines the blurriness or sharpness of reflections based on the smoothness or roughness of the reflecting surface (Figure 3.32). Glossiness properties provide more realism than specular properties but are dependent on ray tracing. In addition, some shaders provide a Metal or Metalness property that uses the diffuse color or a special metal color map to tint reflections—this simulates reflective qualities common to metallic surfaces.

fig3_32
Figure 3.32 Left: A dark, reflective surface is given a high glossy value, creating a coherent reflection. Middle: The same surface is given a medium glossy value, blurring the reflection. Right: The same surface is given a low glossy value, making the reflected object unrecognizable.

Transparency Controls the opacity of the surface. In general, a value of 1.0 or a white color is 100 percent transparent and a value of 0 or a black color is 100 percent opaque. Transparency is necessary for transmission and refractivity but does not activate those functions.

Reflectivity This property sets the degree to which the surface reflects light. A high reflectivity value creates a bright reflection. For this property to function, the scene must be ray traced by the renderer. Note that PBR systems may use the term diffuse reflectivity to refer to scattered, refracted light, some of which re-emerges from the surface. As such, specular reflectivity refers to all reflections occurring when light reflects off a surface but does not penetrate it, whether the reflection is glossy or non-glossy.

Refractivity Controls light transmission for transparent and semi-transparent surfaces. The property, which is sometimes called refractions, is usually combined with a Refractive Index or IOR (Index Of Refraction) property. As discussed in Chapter 1, a refractive index is a numerical value that represents the change in speed of a light ray as the ray crosses an interface (boundary) between two materials (Figure 3.33). The materials may be air and water, air and glass, and so on. The change in light speed creates the illusion that the objects behind the transparent surface are distorted (Figures 3.34 and 3.35). The refractive index of a vacuum is 1.0, which indicates no change in speed. Air is close to 1.0 and is often rounded down to 1.0. Water has an index of 1.33, and common glass has an index of approximately 1.5, although the value varies with the glass type. Ray tracing is required for refractivity. For both refractivity and reflectivity, there are renderer properties to control the number of times that light rays are permitted to reflect off and/or transmit/refract through surfaces. Note that a refractive index property is equally useful for opaque surfaces that use Fresnel reflections. In this case, the refractive index value affects the percentage of light energy reflected back toward the camera or viewer.
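The bending itself is governed by Snell's law, n₁·sin(θ₁) = n₂·sin(θ₂), where the angles are measured from the surface normal. A short sketch (the function name is illustrative):

```python
import math

def snell_refraction_angle(n1, n2, incident_deg):
    """Snell's law: n1 * sin(theta1) = n2 * sin(theta2). Returns the
    refraction angle in degrees, or None when total internal reflection
    occurs (the ray cannot exit into the less dense material)."""
    s = n1 / n2 * math.sin(math.radians(incident_deg))
    if abs(s) > 1.0:
        return None  # total internal reflection
    return math.degrees(math.asin(s))

# A ray entering water (IOR 1.33) from air at 45 degrees bends
# toward the normal.
print(round(snell_refraction_angle(1.0, 1.33, 45.0), 1))  # 32.1
```

The larger the mismatch between the two refractive indexes, the stronger the bending, which is why a high IOR value produces more dramatic distortion of objects seen through the surface.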

fig3_33
Figure 3.33 When a light ray crosses a material interface and is transmitted through the second material, the light speed is altered and the ray is thus refracted.
fig3_34
Figure 3.34 Left: 3D render of spheres assigned to a reflective, refractive shader. The shader uses BRDF, allowing faint reflections of the blue sky to remain on the forward faces of the spheres. Right: The same shader is assigned but a Fresnel function is activated, preventing reflections from appearing on the forward faces. Due to the spherical shapes and a refractive index value of 1.4, the blue sky is flipped upside-down in the refractions. In order to produce semi-transparent cast shadows, you must use ray trace shadows or a specialized depth map shadow format that understands transparency.
fig3_35
Figure 3.35 Real-life reflections and refractions seen on a pair of wine glasses. Note that the glass of the glasses and the liquid of the wine each has its own unique refractive index. In reality, we only see transparent glass due to the reflections and refractions.

Photo copyright Darya Petrenko / 123RF Stock Photo.

Incandescence / Emissive Treats the surface as a light source. The illumination may be used to light other geometry if the function is supported by the renderer. For example, Iray, Chaos Group V-Ray, and Arnold renderers can use an emissive shader to light a scene (see Chapter 6).

Translucence Approximates sub-surface scattering but is not as computationally expensive as a SSS shader. In general, there are several properties that control the strength, focus, and virtual depth of the scattered light. A separate but related property is Backlighting, which assumes light has traveled through a surface from a side unseen by the camera to the side facing the camera. Backlighting creates the appearance of translucence but does not include additional controls to adjust the result.

Bump Mapping / Normal Mapping Create the illusion that a surface is rough or bumpy by perturbing surface normals at the point of render (Figure 3.36). Bump mapping uses a scalar (grayscale) bitmap texture to determine the locations of peaks and valleys of the bump. Normal mapping uses a special normal map texture to do the same. (Normal maps are often generated by comparing a high-resolution model and low-resolution model and encoding the difference of surface vertex positions as vector values in RGB.) For more information on texturing, see the sidebar at the end of this chapter.
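A common normal map encoding stores each component of the unit normal, which ranges from -1 to 1, in the map's 0-to-1 RGB range; decoding is a simple remap. An illustrative sketch:

```python
def decode_normal(rgb):
    """Unpack a tangent-space normal from normal map RGB values in [0, 1].
    Each channel is remapped to [-1, 1] via n = rgb * 2 - 1 (a common
    encoding, though conventions vary between programs)."""
    return tuple(c * 2.0 - 1.0 for c in rgb)

# The typical "flat" normal map color (0.5, 0.5, 1.0) decodes to a
# normal pointing straight out of the surface, leaving shading unchanged.
print(decode_normal((0.5, 0.5, 1.0)))  # (0.0, 0.0, 1.0)
```

This remap is why unperturbed areas of a tangent-space normal map appear as a uniform light blue.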

fig3_36
Figure 3.36 Left: Raised letters are added to an otherwise smooth piece of 3D geometry with a bump map. Note that the letters react appropriately to light direction and gain their own specular highlights and self-shadowing areas. Right: The square texture bitmap used as a bump map. Only the white area of the letters appears as a bump. The location of the letters is based on the geometry’s UV layout, which controls how textures are applied to the surface.

Displacement Mapping / Parallax Mapping The goal of displacement mapping is similar to that of bump or normal mapping. However, displacement mapping distorts the surface at the time of render (Figure 3.37). Whereas a bump or normal map cannot affect the silhouette edge of an object, displacement mapping can change the shape of an object to a great degree. Displacement mapping uses a grayscale displacement or height map to determine how far to push or pull surface vertices. Parallax mapping, also known as offset mapping or virtual displacement mapping, is a more advanced form of bump and normal mapping that approximates the effect of displacement mapping without the cost of distorting geometry.
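At its core, displacement mapping pushes each vertex along its normal by the sampled map value. The Python sketch below assumes the height samples have already been looked up through the UVs; it is a conceptual illustration, not an excerpt from any renderer.

```python
def displace_vertices(vertices, normals, heights, scale=1.0):
    """Push each vertex along its normal by a height-map value.

    vertices, normals: lists of (x, y, z) tuples.
    heights: one grayscale sample (0.0-1.0) per vertex.
    Unlike a bump map, this changes the actual geometry, so silhouettes
    and cast shadows are affected (as in Figure 3.37).
    """
    displaced = []
    for (vx, vy, vz), (nx, ny, nz), h in zip(vertices, normals, heights):
        d = h * scale
        displaced.append((vx + nx * d, vy + ny * d, vz + nz * d))
    return displaced

# One vertex raised 0.2 units along a straight-up normal.
print(displace_vertices([(0.0, 0.0, 0.0)], [(0.0, 0.0, 1.0)], [0.2]))
# → [(0.0, 0.0, 0.2)]
```

In practice, renderers often subdivide the surface first so there are enough vertices to capture the detail in the map.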

Note that some properties of shaders are intended to create special render passes to be used in 2D compositing programs. Render passes are discussed in Chapter 5.

fig3_37
Figure 3.37 Left: A primitive polygon sphere is assigned to a shader with a bump map. The edges and shadow are unaffected. Right: An identical sphere is assigned a shader with a displacement map, which alters the geometry edges and affects the shadow.

SIDEBAR
Texture Overview

Textures are an important part of the 3D process. Textures, which are either procedurally generated by the 3D software or imported as bitmaps, add color variation and detail to otherwise solid-color shaders. In addition, you can link textures to a wide variety of shader properties to lend even more complexity to a surface. Texturing, as a term, refers to the process of assigning shaders to geometry and adding texture maps to the shaders. Bitmap and map both refer to a 2D digital image. Mapping refers to the process of linking maps to shader properties.
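The link between a surface and a bitmap runs through the geometry's UV coordinates, as mentioned in the caption of Figure 3.36. A minimal nearest-neighbor texture lookup can be sketched in Python; function name and conventions here are illustrative (some 3D programs place v = 0 at the top of the image rather than the bottom).

```python
def sample_texture(bitmap, u, v):
    """Nearest-neighbor lookup of a bitmap by UV coordinates.

    bitmap: 2D list of pixel values, indexed [row][col].
    u, v: coordinates in the 0.0-1.0 range, with v = 0 assumed to be
    at the bottom of the image.
    """
    h = len(bitmap)
    w = len(bitmap[0])
    col = min(int(u * w), w - 1)
    row = min(int((1.0 - v) * h), h - 1)  # flip v into row order
    return bitmap[row][col]

# A 2x2 checker "map": the lower-left corner samples a white square.
checker = [[0, 1],
           [1, 0]]
print(sample_texture(checker, 0.0, 0.0))  # → 1
```

Real renderers filter between neighboring pixels (and between mipmap levels) rather than snapping to the nearest one, but the UV-to-pixel mapping is the same idea.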

Many of the 3D models illustrated in this book have texture maps assigned to the color properties of the assigned shaders. With the exception of bump maps, no other properties have been mapped. Although this limits the realism of the renders, it makes it easier to share the exercise files across a wide variety of 3D programs.

“Look beneath the surface; let not the several quality of a thing nor its worth escape thee.”—Marcus Aurelius Antoninus
