When it comes time to light a scene in a 3D program, it can be useful to follow organized steps to achieve aesthetic lighting in an efficient manner. The steps include determining what lights are needed and what types of shadows are necessary. It’s also useful to be familiar with common 3D shaders and how their basic properties function.
This chapter includes the following critical information:
Photo copyright Rungaroon Taweeapiradeemunkohg / 123RF Stock Photo.
Although lighting in a 3D program shares many similarities to lighting for the fine arts, stage, film, video, and photography, it requires its own unique workflow. When the workflow is handled by a team of animators, it’s often referred to as a pipeline. A pipeline is a standardized system of producing complex animations using teams of artists and an array of equipment and software.
For example, a pipeline created for an animated feature or extensive visual effects project generally follows these production pipeline steps:
1. Concept art: Characters, props, environments, and color guides are designed.
2. Storyboarding: The story is broken into specific shots with specific camera placements using 2D drawings or simplified 3D animations. If the 2D drawings are animated and edited into a video, the result is referred to as an animatic.
3. 3D modeling: Characters, props, and environments are constructed in 3D.
4. Texturing: Models are textured.
5. Rigging: Models that require animation are rigged so they can be moved or deformed.
6. Layout: The storyboards are translated to 3D set-ups using the 3D models.
7. Animation: Characters, props, and effects (such as fire and water) are animated.
8. Lighting: Animated shots are lit and rendered.
9. Compositing: If required, shots are composited in 2D to combine and/or fine tune the renders.
Depending on the studio and the scope of the production, some of these steps may overlap or happen simultaneously. In addition, all of these steps require testing and revisions. Compositing may be the domain of the lighter or it may be handled by a separate compositing department.
If an animation is small in scope, one animator may handle multiple tasks. For example, on an independent animation or a commercial production with a limited number of shots, one animator may model, texture, rig, animate, light, render, and composite everything needed for one shot.
Whether an animation is part of a large or small project, it pays to follow certain steps when lighting. These include the collection of light information, an organized approach to placing and adjusting 3D lights, and an efficient method of test rendering. To facilitate lighting, it also helps to understand the differences between light and shadow types and basic shader functionality.
The first step to 3D lighting is to determine how many lights you need, where the lights should be located, and what properties the lights should possess. To derive this information, ask yourself these questions:
What is the context of the lighting? Are you lighting a single, standalone shot or are you lighting one shot of a multi-shot scene? Are you lighting all the shots in the scene or do you need to match your work to other lighters who are undertaking surrounding shots? The answer to these questions determines what reference is required and if any precedents have been set for the lighting you are working on. For example, on a feature animation, you may need to match shots lit by other lighters and follow general guidelines set by the art department through concept art and color guides.
Are there any special considerations? Is there an impetus to light stylistically? Is there a need to light in a special way to communicate story information or establish a mood? These considerations may affect the answers to the additional questions.
What is the location of the lighting? Is the location on Earth? If so, the lighting should follow basic qualities of light in Earth’s atmosphere. If not, light may react differently. For example, if you are lighting a scene in space, the lack of air molecules or particulate matter, such as water vapor, prevents a light beam from forming through light scatter (Figure 3.1). This is not to say that lighting should always be scientifically accurate, but lighting generally needs to be perceived as appropriate for a particular location. If the plan is to light stylistically, the precision of the light sources may not be critical. For example, numerous science-fiction films show laser beams and terrestrial explosions occurring in space. Nevertheless, stylistic lighting has its own aesthetic concerns, which are discussed later in this book.
What is the location more specifically? Is the location inside? For example, the location may be in a bedroom, a car, or a cavern. Conversely, is the location outside? The specific location affects what lights are available and expected to be present by the viewer.
Does the location exist? Does your lighting need to replicate a real location? As such, do you possess reference in the form of photos or video? If your location does not exist, does it need to match a similar location?
What is the time of day? Is it sunrise, noon, afternoon, sunset, evening, or nighttime? If you combine this information with the location, you can determine what light sources would be generally available at the location in the real world. For example, if the location is a city street, the main source of light during the day is the sun (Figure 3.2). At night, the source of light may be the moon, street lamps, car headlights, electric signs, and so on.
What is the time period? Does the lighting scenario take place in the present? If not, what is the historical time period in which the lighting occurs? The year in which the scene takes place affects the list of available lights. For example, a pre-20th century setting prevents the existence of incandescent light bulbs and requires more extensive use of fire-based lights, such as candles, oil lamps, torches, and so on. Because each type of light has distinct properties, this affects the way in which you light within the 3D program. If the time period has no historical basis or is derived from an imaginary alternative history, you can mix and match light types. For example, you may insert artificial light sources into a location where they would otherwise be historically unavailable. By the same token, you can invent light sources if historical accuracy is not needed. For example, light may arrive from the magic staff of a wizard.
What are the properties of the light sources? After you’ve determined what light sources exist, you can define the lights’ specific properties. For example, the lights may be natural or artificial. Natural light may include the sun, sunlight reflected off the moon, a fire, or a candle. Artificial lights come in many forms, such as incandescent light bulbs, fluorescent light fixtures, neon signs, LED light arrays, and amplified bulbs contained in flashlights or headlamps. Each of these light types possesses a real-world color (courtesy of a specific wavelength), intensity (brightness), focus (parallel, oblique, or random light rays), and shadow type (hard-edged, soft-edged, or hard-to-soft over distance).
To practice the technique of determining light sources, you can study existing photographs, film and video stills, or paintings. For example, Figure 3.3 represents an example of simple, real-world 2-point lighting. Questions and answers are included.
What is the location? An office.
What is the location more specifically? The center of a window-lit room.
Does the location exist? Yes.
What is the time of day? Daytime. The time may be mid-morning or mid-afternoon based on the longish shadows cast by the people in the background. A time closer to noon would create shorter shadows. A time closer to sunrise or sunset would create longer shadows with dimmer, more saturated light.
What is the time period? We can assume this is present day as there are no obvious anachronisms.
What are the properties of the light sources? The light sources are as follows:
A) The sun as a key light. This light arrives from the bank of large windows at the left of frame. The sunlight arrives at a roughly 45 degree angle, creating longish shadows. The sunlight is also angled toward the man so that the light reaches the front of his face, forming split lighting. The light, by the time it reaches the man, is a soft light that does not cast a strong shadow on the man himself. The softness may be a result of the sunlight scattering through curtains or multiple, semi-obscured windows.
B) Fill light. The weaker, secondary light source arrives from the lower-right side of the frame. This is sunlight that has bounced off the floor and surrounding walls. There are no other visible light sources, so we can safely assume that the bounced sunlight is the sole secondary source.
As a second example, we’ll look at a complex, fantastical, multi-point lighting scenario (Figure 3.4).
What is the location? Outdoors, near the outskirts of a city.
What is the location more specifically? Near the ruins of a church.
Does the location exist and what is the time period? Possibly. This may represent the Flemish homeland of the painter as it may have typically looked in the late Medieval time period (minus the monsters and conflagrations).
What is the time of day? Daytime, as indicated by the sky at the upper-right of the painting. However, the raging fire in the background has created so much smoke that part of the landscape is thrust into virtual night.
What are the properties of the light sources? The light sources are as follows:
A) The sun as a key light. A broad, generic light arrives from the center-left of the painting. This is indicated by shadows underneath and around the central characters as well as the general shading of the humans and monsters (where the upper-left side of many characters receives the most light).
B) Fill light. The weaker, secondary light source arrives from the lower-right side of the frame. Lacking any other identifiable source, this is sunlight that has bounced off the ground and other nearby surfaces. You can see this light on the characters that are painted with greater contrast.
C) Background fire as a utility light. Although the fire does not affect the lighting of the foreground, it illuminates (and silhouettes) background buildings.
D) A light beam as a utility light. The narrow beam appears in the interior of a church. Although this could be the focused light of the sun, it seems out of place in this scene and is therefore stylistic.
E) Interior lights as utility lights. The open doors and windows of the background buildings appear illuminated from within. Keeping the time period in mind, you can assume the light is generated by fireplace fires, torches, or oil lamps. Alternatively, the raging fire that plagues the city may have reached the building interiors.
After you’ve determined how many lights you need, where the lights should be located, and what basic properties the lights possess, you can add them to your 3D scene. Regardless of which 3D program you are using and the exact method by which to create and manipulate the 3D lights, I suggest following these basic steps:
Keep in mind that this is a general guideline and may not work in every lighting situation. Here are a few caveats:
An important part of the 3D animation process is rendering. When it comes to 3D lighting, you must render to create a final version of the frame or frames. You also have to render to test and adjust your lighting set-up. Although each 3D program offers its own set of features for rendering, here are a few things to keep in mind as you work:
In the realm of digital imaging, color calibration is an important consideration. Digital imaging includes digital video, photography, art, graphic design, and 3D animation. All of these digital art forms are reliant on an RGB color model that combines different intensities of red, green, and blue to represent the full range of colors. When a digital image is created, it’s created within a color space, which is a range of colors the image can potentially store. When the image is displayed, however, it may encounter a device that utilizes a different color space. To complicate matters, different devices use different color spaces, which may lead to the image looking different on each device (devices will have a limited color range for one or more of the color channels within the color model). For example, a rendered 3D animation may look one way on a computer monitor and another way on a broadcast television. Color calibration attempts to neutralize this problem by adjusting the various devices to make the image appear consistent.
Color calibration can occur on the device level. For example, you can color calibrate your computer monitor using the operating system or specialized calibration software. It’s also possible to apply color calibration within the software creating the digital images. For example, some 3D programs allow you to activate color calibration so that your work is suitable for output to a particular device (such as a television screen or theater projector). There are generally two different ways to apply calibration in a 3D program:
View Transform Allows you to activate a color space transform in a render viewport, window, or buffer. Activating a view transform does not affect the render’s inherent RGB values. Instead, the transform temporarily converts the native color space to a different color space. For example, most computer systems operate within an sRGB color space. The view transform can convert sRGB to Rec. 709, which is the color space of HDTV (High-Definition Television). Note that some view transforms include a gamma adjustment. Gamma, in the realm of digital imaging, is a power function that adjusts displayed images so that they appear correct for human vision. Gamma-adjusted images appear to have greater contrast and a greater range of values.
Renderer Transform As opposed to temporarily applying a view transform to the render viewport, window, or buffer, you can choose to apply a color space transform at the point of render so that the rendered images are created within the new color space. This option is suitable if you are rendering for a particular set of devices, such as theater projectors.
When discussing color calibration, it’s also important to consider linear color space. Linear color space is a color space where gamma adjustment is not applied. Working in a linear color space allows for a more accurate representation of image values. Although not mandatory for successful lighting, linear color space may be required for accurate results when working with PBR (Physically Based Rendering) systems. Many 3D programs support a linear color space work environment. Switching to such a work environment generally affects the way in which texture bitmaps are interpreted and the way in which renders are displayed and exported. That said, the linear color space provided by a 3D program does not affect the color space used by the display device. Hence, using linear color space demands careful set-up and consideration. Using linear color space incorrectly may lead to inaccurately lit renders that will require additional manipulation outside the 3D program.
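The practical difference between working in linear and gamma-encoded space can be sketched in a few lines of Python. The transfer functions below follow the standard sRGB specification; the demonstration shows that averaging two pixel values while they are still gamma-encoded gives a different (and physically incorrect) result than averaging them in linear space:

```python
# sRGB transfer functions (per the sRGB specification). Averaging two
# pixel values while gamma-encoded differs from averaging them in
# linear space; the linear result matches how real light mixes.
def decode(v):
    """Convert an sRGB-encoded value (0.0-1.0) to linear light."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def encode(v):
    """Convert a linear-light value (0.0-1.0) to sRGB encoding."""
    return 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

black, white = 0.0, 1.0
naive = (black + white) / 2                            # averaged while encoded
correct = encode((decode(black) + decode(white)) / 2)  # averaged in linear space
print(round(naive, 3), round(correct, 3))  # 0.5 0.735
```

Operations such as light accumulation, blending, and anti-aliasing suffer from the same discrepancy, which is why PBR systems expect a linear working space.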
Much like their real-world counterparts, you can position and aim 3D lights in a 3D program that supports lighting. In addition, you can adjust basic properties such as intensity, color, and shadow quality. Although each 3D program represents its 3D lights in a slightly different way (Figure 3.7), lights within these programs share many common traits, which are discussed in this chapter. Hence, you can apply this knowledge to a broad array of programs, including Autodesk Maya, Autodesk 3ds Max, Autodesk Softimage, MAXON Cinema 4D, and Blender. This also applies to 3D programs that are designed for specialty tasks such as sculptural modeling, effects simulation, industrial design, architectural visualization, and video game design. In addition, the common traits are carried over into 2D compositing programs that offer a 3D environment, such as The Foundry Nuke, Blackmagic Fusion, and Adobe After Effects.
3D programs generally include a set of common lights that include the following:
Spot light This type of light is named after the real-world counterpart used in film, video, and stage lighting. Its light emanates from a point in space but quickly diverges over distance. The light rays are oblique (neither parallel nor perpendicular), creating a light cone. The cone indicates the outermost vectors of the light rays. At the cone edge, the light intensity drops to 0. When the light hits a surface, it forms a circular or oval spot of light (Figure 3.8).
You can adjust the cone width to increase or decrease the lit area. Additional spot light properties, such as penumbra angle, control the rapidity with which the light transitions from a non-0 to 0 intensity. For example, a large penumbra value causes the spot of the light to have a soft edge. A 0 penumbra value causes the spot to have a hard (non-soft) edge. The cone of a spot light is usually included as part of the 3D light icon (Figure 3.8). The position and rotation of a spot light icon affects the quality of the light. (Note that position is generally referred to as translation within 3D programs).
Point light The light of this type emanates from a point in space. The light is omni-directional; that is, the light rays fan out in all directions (Figure 3.9). You can use this type to emulate position-specific artificial light sources like light bulbs or position-specific natural light sources like candles. The translation of the point light icon affects the light quality but the light’s rotation has no impact. Point lights are sometimes called omni lights. Note that some renderers treat point lights like emissive spheres, where they have a distinct size in XYZ space; this variation is more realistic and more accurately matches similar real-world light sources.
Directional light This type creates parallel rays of light as if arriving from an infinitely distant light source. You can use this light to emulate the sun or moon. The rotation of a directional light affects its light quality but its translation has no impact. The light icon indicates the direction the light rays are traveling through the use of arrows or directional lines (Figure 3.10). Directional lights are sometimes referred to as sun, infinite, or direct lights.
Ambient light This light type produces an intensity that is equal at all points in the scene (Figure 3.11). Hence, the light’s translation does not affect the light quality (in fact, some programs do not bother to create an ambient light icon). Ambient lights are sometimes used as weak fill lights. Ambient lights are sometimes called radial or flat lights. In general, ambient lights have technical limitations, such as the inability to cast shadows or interact with specific shader functions, such as bump mapping.
Area light This light type is defined by a rectangular shape and acts like an array of diverging directional lights or a cluster of point lights generating rays in a 180-degree hemisphere. Due to the overlapping of light rays, area lights don’t create hard-edged rectangular spots or light pools (Figure 3.12). That said, the larger the area light, the softer the edge of the light pool. You can use area lights to emulate broad light sources that are nevertheless confined by a shape. For example, you might use an area light to recreate the light arriving through a window, light arriving from a theater marquee, or light generated by a computer monitor. This is one of the few light types that is affected by scale changes. For example, you can use an area light to emulate a neon tube by making the light icon long and narrow. (In general, only X and Z scale changes alter the light.) Area lights are also referred to as light boxes. Note that some renderers support area lights with different shapes including discs and spheres.
Thus far, descriptions of lights assume that there is no light decay. Light decay represents the rapidity with which the light transitions from maximum intensity to 0 intensity over distance. This is also known as light falloff or light attenuation. These terms are somewhat confusing as real-world light does not lose energy or disappear as it travels through a vacuum. Instead, light and other electromagnetic radiation diverges from its source and thus, over distance, causes there to be less radiation at any given point in space as the radiation is spread out over a greater and greater area.
If light decay is not activated for a 3D light, the light’s intensity fails to change over distance and is as strong at 10 units as it is at 1,000,000 units. If light decay is activated, there is generally a way to control the decay rapidity. For example, you might switch to a mathematical decay formula such as quadratic. Quadratic decay uses an inverse square law, which describes light and other electromagnetic radiation in the real world, whereby the radiation intensity is inversely proportional to the square of the distance from the radiation source (left side of Figure 3.13). The law uses the formula light intensity = 1 / distance². In contrast, linear decay, where intensity = 1 / distance, may be considered stylized (right side of Figure 3.13). More aggressive decay may use a cubic formula, where intensity = 1 / distance³.
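These decay formulas are easy to sketch in Python. The helper function below is an illustrative simplification (real renderers expose decay through light properties rather than a function like this):

```python
def decayed_intensity(base, distance, decay="quadratic"):
    """Return light intensity after decay over distance.
    Quadratic follows the real-world inverse square law;
    the other modes are common renderer alternatives."""
    exponent = {"none": 0, "linear": 1, "quadratic": 2, "cubic": 3}[decay]
    if distance <= 0:
        return base
    return base / (distance ** exponent)

# At twice the distance, quadratic decay yields a quarter of the intensity:
print(decayed_intensity(1.0, 2, "quadratic"))  # 0.25
print(decayed_intensity(1.0, 2, "linear"))     # 0.5
print(decayed_intensity(1.0, 2, "none"))       # 1.0
```

Note how quickly quadratic and cubic decay dim a light; in practice, quadratic decay often requires much higher base intensities than no decay.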
Although the properties that control 3D lights in various 3D programs may have different names, their basic functionality remains the same (Figure 3.14).
The most useful properties follow:
Transforms Lights generally carry transforms that include translation (position), rotation, and scale. Depending on the light type, some of these transforms have no impact on the light quality. For example, translation affects a point light while rotation affects a directional light. See the previous section for additional examples.
Intensity / Energy Lights include a property to control the light’s brightness. If this property is set to 0, the light is essentially turned off. Some programs, such as Autodesk Maya and Blender, support negative light values that reduce the strength of other, overlapping lights.
Color Lights include a property to change the light color, which allows the light to mimic real-world light sources with different wavelengths. For example, you might change the light color of a key to orange to mimic a sunset. Light color is generally multiplied by the light intensity, so that both properties affect the overall light strength. For example, if the light intensity is 1.0 but the light color is set to 0.5 gray, the end strength is 0.5 (1.0 × 0.5). Light color properties are usually defined in RGB, where there is a value for red, green, and blue. Some lights accept color temperature values measured in kelvin.
Shadows Many lights can cast shadows and offer a means to activate the shadows. There are several different methods of generating shadows in 3D programs—these are discussed in the next section.
Decay Rate / Decay Type / Attenuation Some lights support light decay and different methods to control the rate of decay (see the previous sidebar). In addition, a Falloff Distance / Distance property may be included so you can set the distance at which the decay begins.
Emit Specular / Emit Diffuse Some lights allow you to turn on or off the lighting functions for diffuse lighting calculations and specular lighting calculations. In this situation, diffuse refers to diffuse reflectivity, where light reflects from the surface, thus making the surface color visible. Specular refers to specular reflections that create specular highlights, which are cohesive reflections of bright light sources. Specular calculations are necessary to create reflections when ray tracing.
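The interaction of the Intensity and Color properties described above can be sketched in a few lines of Python (the function name is hypothetical):

```python
def effective_light_strength(intensity, rgb):
    """Final per-channel light strength: intensity multiplied by
    the light color. RGB values are normalized 0.0-1.0."""
    return tuple(intensity * c for c in rgb)

# An intensity of 1.0 with a 0.5 gray color yields an end strength of 0.5:
print(effective_light_strength(1.0, (0.5, 0.5, 0.5)))  # (0.5, 0.5, 0.5)
# An orange key light at double intensity:
print(effective_light_strength(2.0, (1.0, 0.6, 0.2)))  # (2.0, 1.2, 0.4)
```

Because color scales intensity per channel, a strongly saturated color effectively dims some channels even when the intensity value is high.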
Different light types may carry different sets of properties. For example, spot lights include additional properties to control their light cones:
Cone Angle / Hotspot This sets the width of the cone.
Penumbra Angle / Cone Feather This sets the width of the transition from maximum intensity to 0 intensity at the cone edge, as seen on surfaces the light strikes. In general, if this property is positive the softness extends outwards; if this property is negative, the softness extends inwards.
Cone Falloff / Dropoff Some spot lights include properties to control the light decay from the light center to the cone edge. This functions independently of the penumbra angle.
Barn Doors / Square Some spot lights carry the option to turn the circular cone into a rectangular one. This is apparent when the light creates a rectangular light pool on surfaces it strikes.
Note that some terms may have different connotations in different programs. For example, in 3ds Max, Falloff sets the size of the penumbra angle while in After Effects, Radius sets the penumbra angle and Falloff controls the light decay.
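The behavior of the cone angle and penumbra angle properties can be sketched in Python. This is a minimal illustration that assumes a linear ramp across the penumbra band; actual falloff curves vary by program:

```python
def spot_intensity(angle_deg, cone_angle_deg, penumbra_deg):
    """Intensity multiplier for a surface point at angle_deg from the
    spot light's axis. A positive penumbra softens outward from the
    cone edge; a negative penumbra softens inward."""
    half = cone_angle_deg / 2.0
    inner = half if penumbra_deg >= 0 else half + penumbra_deg
    outer = half + penumbra_deg if penumbra_deg >= 0 else half
    if angle_deg <= inner:
        return 1.0
    if angle_deg >= outer:
        return 0.0
    return (outer - angle_deg) / (outer - inner)  # linear ramp across the penumbra

print(spot_intensity(10, 40, 0))  # 1.0 -- inside a hard-edged cone
print(spot_intensity(22, 40, 5))  # 0.6 -- inside the outward penumbra band
print(spot_intensity(25, 40, 0))  # 0.0 -- outside the cone
```

A penumbra of 0 reproduces the hard-edged spot described earlier, because the ramp between full and zero intensity has no width.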
Shadows are a critical part of lighting. Every real-world light source produces shadows when encountering opaque or semi-opaque objects. As discussed, a shadow is an area that receives no light. In fact, the shadowed area is a three-dimensional volume behind the object opposite the direction the light is traveling. That said, we usually encounter two-dimensional shadows on surfaces as the atmosphere is transparent (left side of Figure 3.15). However, if fog, smoke, haze, or similar participating media (suspended particles that reflect light) is present in the air, you can see the three-dimensional shadow form (middle and right side of Figure 3.15).
The quality of a shadow varies with the type of shadowing light. This is true whether the light is in the real world or in a 3D program. For example, the real-world sun and 3D directional lights produce parallel shadows (Figure 3.16). Note that the parallel quality may be difficult to detect in the real world due to perspective (as is the case with the middle photo in Figure 3.15). Technically speaking, the sun is an omni-directional light source, sending light rays in all directions out into space; however, only a narrow band of rays reach Earth, making them, for practical purposes, parallel.
Real-world spot lights and unfocused light bulbs produce oblique (non-parallel and non-perpendicular) shadows, as do their 3D spot light and point light counterparts (Figure 3.17). Note that the resulting 3D shadows may be hard-edged unless special steps are taken. The hard edge quality is a result of the spot or point light producing light rays from a single point in space. This prevents the overlapping of rays at the shadow edge. To create shadows with soft edges or edges that degrade over distance, you must use a broad light. If you are using ray trace shadows, you can emulate this by increasing the light’s width or radius property.
Alternatively, to create shadows that soften over distance, you can use an area light. Area lights, due to their array-like structure with myriad light rays that overlap, produce soft-edged shadows by default. The edge quality changes based on the size of the area light and its distance from the point being shadowed (Figure 3.18).
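The relationship between the light's size, the occluder's position, and the shadow's edge softness follows from similar triangles, which a short Python sketch can illustrate (the function and its units are hypothetical simplifications, not a renderer formula):

```python
def soft_edge_width(light_size, light_to_occluder, occluder_to_receiver):
    """Approximate width of a shadow's soft edge (penumbra) cast by a
    broad light, via similar triangles: the larger the light, and the
    farther the receiving surface sits behind the occluder, the softer
    the shadow edge. Units are arbitrary scene units."""
    return light_size * occluder_to_receiver / light_to_occluder

# Doubling the light's size doubles the width of the soft edge:
print(soft_edge_width(1.0, 10.0, 5.0))  # 0.5
print(soft_edge_width(2.0, 10.0, 5.0))  # 1.0
# Moving the receiving surface farther behind the occluder also softens it:
print(soft_edge_width(1.0, 10.0, 10.0))  # 1.0
```

This is why a shadow can appear sharp near the contact point with an object yet grow progressively softer farther away, as seen in Figure 3.18.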
When it comes to generating shadows, there are several methods employed by 3D programs:
Depth Map This shadow type renders a depth map from the point of view of the shadow-producing light. The distances of objects from the light are encoded as scalar values and appear in grayscale (Figure 3.19). The depth map is used by the renderer to determine what lies in shadow and what does not. Depth map shadows have the advantage of being efficient; however, they have quality limitations as they are dependent on a rendered map with a fixed resolution. They also allow for less control of the shadow edge, where the shadow edge is equally hard or soft along its entire length. Most light types are able to produce depth map shadows; in fact, depth maps are perhaps the most common shadowing method provided by 3D programs. Depth maps are also referred to as depth buffers or Z-buffers. Depth map shadows generally have two main properties: resolution, which sets the depth map bitmap size, and a second property to filter the shadow edge and control its hardness or softness. The name of the second property has many variations—for example, filter size, sample range, and softness.
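The core depth map comparison can be sketched in Python. This is a minimal illustration, assuming a tiny hypothetical depth map and a depth bias to avoid self-shadowing artifacts:

```python
def shadowed_by_depth_map(depth_map, texel_xy, point_depth, bias=0.005):
    """Depth map shadow test: compare a surface point's distance from
    the light to the depth stored at the corresponding depth-map texel.
    A small bias avoids self-shadowing artifacts ("shadow acne")."""
    x, y = texel_xy
    nearest_occluder = depth_map[y][x]  # closest surface the light "sees"
    return point_depth > nearest_occluder + bias

# A 2x2 depth map in which one texel records an occluder at depth 0.4:
depth_map = [[1.0, 1.0],
             [0.4, 1.0]]
print(shadowed_by_depth_map(depth_map, (0, 1), 0.9))  # True -- point lies behind the occluder
print(shadowed_by_depth_map(depth_map, (1, 1), 0.9))  # False -- nothing closer to the light
```

The fixed texel grid is the source of the quality limitations mentioned above: every surface point maps to one of a finite number of stored depths.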
Ray Trace This shadow type is supported by ray tracing renderers. Ray tracing traces the path of rays through a camera image plane until they intersect 3D surfaces. The rays reflect off the surfaces, transmit through surfaces, or are absorbed by the surfaces (the ray is killed off), based on the properties of the shaders assigned to the surfaces. Hence, ray tracing is able to create accurate reflections and refractions. Ray trace shadows are generated by shooting shadow rays from the point of a surface intersection to a shadow-casting light. If the shadow ray encounters a surface on the way to the light, then the original surface point is known to be within a shadow (Figure 3.20). Ray trace shadows are more computationally expensive than depth map shadows, but are generally more accurate. In addition, they are not dependent on an arbitrary pixel resolution. Note that ray trace renderers work in a direction opposite that of the real world. In the real world, photons are generated by a light, reflect off surfaces, and eventually reach the camera sensor or viewer’s eye. Nevertheless, the backwards ray tracing method is mathematically efficient as it does not waste energy on light rays that reflect off surfaces and never reach the camera due to their unique vectors. For an example of 3D ray trace shadows, see Figures 3.16 and 3.17 in the previous section. Ray trace shadow functions generally include properties to control the virtual width or radius of the shadow-casting light (sometimes called radius, angle, spread, or soft size) and overall quality (sometimes called shadow rays, samples, or quality).
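A minimal shadow-ray test can be sketched in Python, assuming spherical occluders and standard ray-sphere intersection math; real ray tracers are far more elaborate:

```python
import math

def ray_hits_sphere(origin, direction, center, radius, max_t):
    """Return True if a ray (origin + t * direction, 0 < t < max_t)
    intersects a sphere. direction must be normalized."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return False
    t = (-b - math.sqrt(disc)) / 2.0
    return 1e-6 < t < max_t  # epsilon avoids self-intersection at the origin

def in_shadow(point, light_pos, occluders):
    """Shoot a shadow ray from a surface point toward the light. If any
    occluder blocks the ray before the light, the point is in shadow."""
    to_light = [l - p for l, p in zip(light_pos, point)]
    dist = math.sqrt(sum(v * v for v in to_light))
    direction = [v / dist for v in to_light]
    return any(ray_hits_sphere(point, direction, c, r, dist) for c, r in occluders)

light = (0.0, 10.0, 0.0)
occluder = ((0.0, 5.0, 0.0), 1.0)  # a sphere between the light and the origin
print(in_shadow((0.0, 0.0, 0.0), light, [occluder]))  # True
print(in_shadow((5.0, 0.0, 0.0), light, [occluder]))  # False
```

Note that the shadow ray is limited to the distance between the surface point and the light (max_t); a surface beyond the light cannot cast a shadow on the point.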
PBR (Physically Based Rendering) I’ll use this category to describe advanced lighting and rendering systems that take into account light bounce and color bleed. These systems produce shadows automatically as light rays and/or virtual photons are traced through a scene, accounting for reflections, transmissions, and absorptions when encountering surfaces. As such, specific shadow-casting lights are not required (although the systems also support depth map and ray trace shadows). However, if photon-tracing is necessary, 3D lights that can generate photons are required. These rendering systems are discussed in more detail in Chapter 5.
Renderer Variations 3D renderers often offer specific variations of common shadow types with additional controls to add more flexibility, efficiency, or render quality. For example, Solid Angle Arnold provides extra light controls that define shadow quality. The host 3D programs include documentation for using these variations.
“Shadow owes its birth to light.”—John Gay
In addition to spot, point, ambient, directional, and area lights, other specialized 3D lights may be provided by a 3D program. A few of these are discussed here:
Mesh Some 3D renderers are able to convert user-defined geometry into a light source. This may be useful for emulating a frosted light bulb, a translucent lamp sconce, a fluorescent bulb formed into a ring, or a complex neon tube (Figure 3.21). In this situation, light rays are shot out from the geometry surface at perpendicular angles (along the surface normals).
Cylindrical This light type takes on the form of its namesake. It’s related to a mesh light in that it acts like an array of directional lights aligned to the cylinder surface normals. You might use this light to emulate a straight fluorescent bulb.
Environment This light type is presented as a 180-degree dome or a 360-degree sphere. The sphere acts as an array of directional lights that point inward. You can use the light to emulate a daytime sky. The light may be considered to exist infinitely far away or, if the option is present, can be used with specific scale and translation within the 3D scene. If you are employing an advanced lighting or rendering system, you can base the light intensity and color on a bitmap texture that is mapped to the sphere. Environment lights may be included with the standard light set or may be provided by the renderer as a special shader (Figure 3.22). 180 degree environment lights are also known as hemi, dome, skydome, or skylight lights. This light type is demonstrated in Chapter 5.
Photometric This light type is designed to be physically accurate by using data from real-world lights (Figure 3.23). Photometric lights reproduce light intensity over distance and light spread, which is the light pattern a specific light bulb makes when housed in a particular enclosure (e.g. wall sconce, flashlight housing, lamp housing, and so on). Photometric lights receive their data from imported IES (Illuminating Engineering Society) profile text files. The files are made available to the public by light bulb and light housing manufacturers. Photometric lights are useful when recreating real-world locations or as part of an architectural design process.
Volume These lights are similar to environment lights in that they illuminate any surface within their volume shape (Figure 3.24). The most common volume shape is a sphere. Volume lights offer an alternative to using light decay to control where the light’s illumination reaches within a scene. Note that volume lights and volumetric lighting are not strictly the same. Volumetric lighting allows light to be seen within a 3D volume. You can create volumetric lighting by activating light fog, environmental fog, or environmental light scattering, which simulates participating media. These options may be available through a particular light type, such as a spot light, with a specialized shader, or through a renderer.
Renderer Variations Some renderers offer their own sets of lights. Although the lights often take the form of common light types (area lights, spot lights, and so on), they are designed to create more accurate lighting results. Some of these light types support PBR techniques. PBR is discussed in detail in Chapter 5.
3D lighting is dependent on 3D surfaces. The surfaces are provided by 3D geometry that has been assigned to shaders. Shaders mathematically define surface qualities, which include the surface’s color and whether the surface is matte-like, glossy, reflective, refractive, smooth, and/or bumpy (Figure 3.25). When lighting, it’s important to understand how the assigned shaders are functioning. In fact, it may be necessary to adjust shader properties to produce the best lighting results.
Any given 3D program provides a number of different shaders, each with different strengths, weaknesses, and different sets of properties. Brief descriptions of common shaders follow. Note that shaders are sometimes referred to as materials.
Lambert This shader type is simple, offering a diffuse color property (left side of Figure 3.26). Lambert shaders are designed to mimic matte surfaces, where light is scattered diffusely—that is, in a random fashion due to tiny surface imperfections. Real-world Lambertian surfaces include paper and cardboard. Note that Lambert shaders are unable to produce reflections due to their lack of specularity. Lambert shaders are named after Johann Heinrich Lambert (1728–1777), who studied light diffusion. Oren-Nayar shaders are also designed for diffuse surfaces but increase the resulting accuracy and are ideal for creating powdery surfaces.
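The Lambert model reduces to a few lines of arithmetic. The following Python sketch (an illustration, not any renderer’s actual code) computes the diffuse contribution of one light at one surface point; brightness depends only on the angle between the surface normal and the light direction, which is why a Lambertian surface looks equally matte from every viewpoint.

```python
def normalize(v):
    """Scale a vector to unit length."""
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def dot(a, b):
    """Dot product of two vectors."""
    return sum(x * y for x, y in zip(a, b))

def lambert(normal, light_dir, diffuse_color, light_color):
    """Return the diffusely reflected color at one surface point."""
    n = normalize(normal)
    l = normalize(light_dir)       # direction from the surface toward the light
    n_dot_l = max(0.0, dot(n, l))  # clamp: surfaces facing away receive no light
    return tuple(dc * lc * n_dot_l
                 for dc, lc in zip(diffuse_color, light_color))

# A white light shining straight down on an upward-facing surface:
print(lambert((0, 1, 0), (0, 1, 0), (0.8, 0.2, 0.2), (1, 1, 1)))  # (0.8, 0.2, 0.2)
```

Note that the view direction never appears in the function, which is exactly why Lambert shaders cannot produce specular highlights.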
Phong / Blinn Phong shaders combine several basic surface properties, including diffuse color and specularity (middle of Figure 3.26). The shader creates specular highlights, which appear as bright “hot spots” that mimic the intense reflections of bright light sources. Ray tracing is not required to create the specular highlights, as they are only an approximation of specular reflectivity. Blinn (or Blinn-Phong) shaders build upon the Phong model with increased mathematical efficiency and their own variation of specular highlight controls (right side of Figure 3.26). Phong and Blinn shaders can emulate a wide range of real-world materials; however, they are not physically accurate. Phong shaders are named after Bùi Tường Phong and Blinn shaders are named after Jim Blinn, both 3D researchers and programmers.
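The difference between the two models is small in code. In this Python sketch (illustrative only, not drawn from any particular renderer), Phong raises the cosine of the angle between the view direction and the mirror reflection of the light to a shininess power, while Blinn-Phong substitutes the cheaper half vector:

```python
def normalize(v):
    m = sum(c * c for c in v) ** 0.5
    return tuple(c / m for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_specular(n, l, v, shininess):
    """Phong: reflect the light direction about the normal, compare to the view."""
    n, l, v = normalize(n), normalize(l), normalize(v)
    ndl = dot(n, l)
    r = tuple(2.0 * ndl * nc - lc for nc, lc in zip(n, l))  # mirror reflection of l
    return max(0.0, dot(r, v)) ** shininess

def blinn_specular(n, l, v, shininess):
    """Blinn-Phong: use the half vector between light and view directions."""
    n, l, v = normalize(n), normalize(l), normalize(v)
    h = normalize(tuple(lc + vc for lc, vc in zip(l, v)))
    return max(0.0, dot(n, h)) ** shininess

# Both peak at 1.0 when the view lines up with the mirror reflection:
print(phong_specular((0, 1, 0), (0, 1, 0), (0, 1, 0), 32))  # 1.0
print(blinn_specular((0, 1, 0), (0, 1, 0), (0, 1, 0), 32))  # 1.0
```

Because the two models measure different angles, the same shininess exponent produces a somewhat broader highlight with Blinn-Phong than with Phong, which is why the two shaders expose their own highlight controls.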
Cook-Torrance This shader type increases the realism of specular highlights by adding a Roughness property that represents microfacets on a surface. Microfacets are tiny surface imperfections. If microfacets are aligned, a surface appears shiny. If the microfacets are randomly oriented, the surface appears diffuse and matte-like. The shader type is named after Robert L. Cook and Kenneth E. Torrance.
Fresnel This shader type employs more accurate reflections by taking into account the Fresnel effect. The effect occurs in the real world, whereby the strength of a reflection is dependent on the viewing angle (Figure 3.27). The effect is particularly evident with transparent surfaces. If the viewing angle is large, the reflection is strong. You can see this phenomenon when looking across a lake (Figure 3.28). If the viewing angle is small, the reflection is weak. You can see this phenomenon when looking down at the edge of the same lake. Where the reflection is weak, you can see the lake bottom due to light transmission and refraction through the water (see the next section for information on refractive indexes). Note that a reflected light ray makes the same angle with the surface normal as the incoming, incident light ray (Figure 3.29). Hence, when reflections occur on smooth surfaces such as mirrors or calm water, the reflections are not distorted and do not change in scale. See Figure 3.34 in the next section for an example of a 3D Fresnel render.
A shader that supports Fresnel reflections is not necessarily named Fresnel. However, a shader that supports such reflections will carry several properties that determine whether Fresnel reflections are on or off and what the reflectance strength is when the viewing angle is perpendicular to the surface normal (90 degrees) and/or parallel to the surface normal (0 degrees). Fresnel shaders are useful for emulating transparent surfaces such as glass or water. This shader type is named after physicist Augustin-Jean Fresnel (1788–1827), whose mathematical work on light has become known as the Fresnel equations.
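In practice, many renderers replace the full Fresnel equations with Schlick’s approximation, which needs only the refractive indices on either side of the interface and the viewing angle. A minimal Python sketch (illustrative, not any shader’s actual code):

```python
def schlick_reflectance(cos_theta, n1=1.0, n2=1.33):
    """Schlick's approximation of Fresnel reflectance.
    cos_theta is the cosine of the angle between the view ray and the
    surface normal; n1/n2 default to an air-to-water interface."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2   # reflectance at a head-on (0 degree) view
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

# Looking straight down at water, only about 2 percent of light reflects:
print(round(schlick_reflectance(1.0), 2))  # 0.02
# At a grazing view (90 degrees from the normal), reflectance climbs to 1.0:
print(round(schlick_reflectance(0.0), 2))  # 1.0
```

This is the behavior described above for the lake: weak reflection (and a visible lake bottom) when looking straight down, strong reflection when looking across the water.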
Ward / BRDF A Ward shader uses BRDF (Bidirectional Reflectance Distribution Function) to determine how much light energy is reflected off a surface. In the real world, the amount of light energy reflected from a surface is equal to or less than the energy carried by the incident light ray (this is known as the conservation of energy). A Ward shader generally includes a Slope property that emulates microfacets and determines if the surface appears matte-like or shiny. The Slope property allows for extremely small and sharp-edged specular highlights. Note that BRDF creates its own form of view-dependent reflectivity, much like a Fresnel shader. BRDF and Fresnel properties may be included within the same shader, although they are generally not used together. The Ward shader is named after Gregory J. Ward. See Figure 3.34 in the next section for an example of a BRDF 3D render.
Anisotropic When the Ward shading method is included in a complex shader, it’s often the isotropic variation. As such, the rotation of the surface in reference to the camera does not change the shape of the specular highlight. However, when the Ward anisotropic variation is employed, the surface rotation changes the highlight shape. This shader type adds X and Y properties to control the orientation of the highlight. Anisotropic shaders can produce elongated highlights and are suitable for surfaces such as brushed metal, grooved plastic, hair, feathers, and rippled water (Figure 3.30).
SSS (Sub-Surface Scattering) SSS shaders create translucence by diffusely scattering light through a surface. In contrast, transparency allows light to pass through a surface without scattering (although there is a change in light speed, which is discussed in the next section). You can see translucence when you place a light behind skin, soap, wax, plastic, and paper (Figure 3.31). SSS shaders replace the diffuse color component of a render with averaged surface values from a light map. A light map is a special bitmap texture that stores surface brightness information.
Monolithic and Layered Monolithic shaders combine a set of basic shader algorithms within a single shader structure. For example, the Arnold Ai Standard Surface shader utilizes Cook-Torrance, Ward anisotropic, and SSS shader functions (among others). The goal of a monolithic shader is to provide the maximum amount of control to produce physically accurate shading results. A layered shader, such as the NVIDIA mental ray MILA, is monolithic but allows the user to pick and choose which shader functions are available; for example, the user can choose to ignore refractivity or select between direct and indirect (bounced) diffuse lighting contributions. Note that some 3D programs allow you to choose different shaders to define the diffuse color and specular highlight. For example, in Blender you can choose an Oren-Nayar, Fresnel, or Lambert shader to define the diffuse color and a Blinn, Phong, Cook-Torrance, or Ward isotropic shader to define the specular highlight.
Despite the wide array of available shaders, many shaders share common properties. These are described here.
Diffuse Color Sets the base color of the surface. The base color is the color of the surface without reflections under white light. You can map this property with a bitmap texture to create color variations across the surface. The property controlling diffuse color is often named Color. However, some shaders carry a separate Diffuse property that controls the degree of diffuse light scattering, where some scattered light does not return to the camera or viewer. The lower the Diffuse value, the more light is randomly reflected away from the camera and the darker the surface appears. The higher the Diffuse value, the less light is randomly reflected away from the camera and the brighter the surface appears. Diffuse color is referred to as albedo with PBR systems; albedo does not include lighting information.
Ambient Color Determines the color of the surface when lit by ambient light but no direct light. In other words, this is the color of the surface that is in shadow. This property is usually black by default, which equates to ambient light with 0 intensity. However, you can change the color to a non-black color to emulate ambient light arriving from all points in a scene.
Specularity / Glossiness / Metal Shaders that support specularity provide a set of specular properties that control the size, intensity, and color of the specular highlights or specular reflections. Alternatively, the shader may provide a glossiness set of properties. Glossiness defines the blurriness or sharpness of reflections based on the smoothness or roughness of the reflecting surface (Figure 3.32). Glossiness properties provide more realism than specular properties but are dependent on ray tracing. In addition, some shaders provide a Metal or Metalness property that uses the diffuse color or a special metal color map to tint reflections—this simulates reflective qualities common to metallic surfaces.
Transparency Controls the opacity of the surface. In general, a value of 1.0 or a white color is 100 percent transparent and a value of 0 or a black color is 100 percent opaque. Transparency is necessary for transmission and refractivity but does not activate those functions.
Reflectivity This property sets the degree to which the surface reflects light. A high reflectivity value creates a bright reflection. For this property to function, the scene must be ray traced by the renderer. Note that PBR systems may use the term diffuse reflectivity to refer to scattered, refracted light, some of which re-emerges from the surface. As such, specular reflectivity refers to all reflections occurring when light reflects off a surface but does not penetrate it, whether the reflection is glossy or non-glossy.
Refractivity Controls light transmission for transparent and semi-transparent surfaces. The property, which is sometimes called refractions, is usually combined with a Refractive Index or IOR (Index Of Refraction) property. As discussed in Chapter 1, a refractive index is a numerical value that represents the change in speed of a light ray as the ray crosses an interface (boundary) between two materials (Figure 3.33). The materials may be air and water, air and glass, and so on. The change in light speed creates the illusion that the objects behind the transparent surface are distorted (Figures 3.34 and 3.35). The refractive index of a vacuum is 1.0, which indicates no change in speed. Air is close to 1.0 and is often rounded down to 1.0. Water has an index of 1.33 and common glass has an index that varies between roughly 1.4 and 1.7. Ray tracing is required for refractivity. For both refractivity and reflectivity, there are renderer properties to control the number of times that light rays are permitted to reflect off and/or transmit/refract through surfaces. Note that a refractive index property is equally useful for opaque surfaces that use Fresnel reflections. In this case, the refractive index value affects the percentage of light energy reflected back toward the camera or viewer.
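Snell’s law, which relates the two refractive indices to the bending of the ray, is simple to evaluate directly. The Python sketch below (an illustration with hypothetical helper names, not renderer code) returns the transmitted angle for a ray crossing an interface:

```python
import math

def refraction_angle(incident_deg, n1=1.0, n2=1.33):
    """Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
    Angles are measured from the surface normal, in degrees.
    Returns None when total internal reflection occurs (only possible
    when the ray travels from a denser to a less dense material)."""
    s = (n1 / n2) * math.sin(math.radians(incident_deg))
    if abs(s) > 1.0:
        return None  # no transmitted ray; all light reflects
    return math.degrees(math.asin(s))

# Air to water: a 45-degree ray bends toward the normal:
print(round(refraction_angle(45.0), 1))  # 32.1
# Water to air at a steep angle: total internal reflection:
print(refraction_angle(80.0, n1=1.33, n2=1.0))  # None
```

The greater the mismatch between the two indices, the stronger the bending, which is why raising a shader’s IOR value exaggerates the distortion of objects seen through the surface.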
Incandescence / Emissive Treats the surface as a light source. The illumination may be used to light other geometry if the function is supported by the renderer. For example, Iray, Chaos Group V-Ray, and Arnold renderers can use an emissive shader to light a scene (see Chapter 6).
Translucence Approximates sub-surface scattering but is not as computationally expensive as an SSS shader. In general, there are several properties that control the strength, focus, and virtual depth of the scattered light. A separate but related property is Backlighting, which assumes light has traveled through a surface from a side unseen by the camera to the side facing the camera. Backlighting creates the appearance of translucence but does not include additional controls to adjust the result.
Bump Mapping / Normal Mapping Create the illusion that a surface is rough or bumpy by perturbing surface normals at the point of render (Figure 3.36). Bump mapping uses a scalar (grayscale) bitmap texture to determine the locations of peaks and valleys of the bump. Normal mapping uses a special normal map texture to do the same. (Normal maps are often generated by comparing a high-resolution model and low-resolution model and encoding the difference of surface vertex positions as vector values in RGB.) For more information on texturing, see the sidebar at the end of this chapter.
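As a small illustration of the RGB encoding (a sketch, not tied to any particular program), each channel of a tangent-space normal map stores one component of the normal vector, remapped from the [-1, 1] vector range into the [0, 255] color range. Decoding a texel reverses the remap:

```python
def decode_normal(rgb):
    """Convert an 8-bit normal-map texel back to a unit surface normal.
    Each channel maps [0, 255] to [-1, 1]; the result is re-normalized
    to correct for quantization error."""
    v = tuple(c / 255.0 * 2.0 - 1.0 for c in rgb)
    m = sum(c * c for c in v) ** 0.5
    return tuple(c / m for c in v)

# The typical pale blue of a flat tangent-space normal map decodes to
# a normal pointing almost straight out of the surface:
print(decode_normal((128, 128, 255)))
```

This encoding explains the characteristic pale-blue cast of tangent-space normal maps: an undisturbed normal of (0, 0, 1) stores as roughly (128, 128, 255).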
Displacement Mapping / Parallax Mapping The goal of displacement mapping is similar to bump or normal mapping. However, displacement mapping distorts the surface at the time of render (Figure 3.37). Whereas a bump or normal map cannot affect the silhouette edge of an object, displacement mapping can change the shape of an object to a great degree. Displacement mapping uses a grayscale displacement or height map to determine how far to push or pull surface vertices. Parallax mapping, also known as offset mapping or virtual displacement mapping, is a more advanced form of bump and normal mapping that approximates the effect of displacement mapping without the cost of distorting geometry.
Note that some properties of shaders are intended to create special render passes to be used in 2D compositing programs. Render passes are discussed in Chapter 5.
Textures are an important part of the 3D process. Textures, which are procedurally generated by the 3D software or imported as a bitmap, add color variation and detail to otherwise solid-color shaders. In addition, you can link textures to a wide variety of shader properties to lend even more complexity to a surface. Texturing, as a phrase, refers to the process of assigning shaders to geometry and adding texture maps to the shaders. Bitmap and map refer to a 2D digital image. Mapping refers to the process of linking maps to shader properties.
Many of the 3D models illustrated in this book have texture maps assigned to the color properties of the assigned shaders. With the exception of bump maps, no other properties have been mapped. Although this limits the realism of the renders, it makes it easier to share the exercise files across a wide variety of 3D programs.
“Look beneath the surface; let not the several quality of a thing nor its worth escape thee.”—Marcus Aurelius Antoninus