7   

DIGITAL ELEMENT CREATION

DIGITAL HAIR AND FUR

Armin Bruderlin, Francois Chardavoine

Digital humans, animals, and imaginary creatures are increasingly being incorporated into motion pictures. To make them believable, many of these characters need persuasive hair or fur. Because digital characters can appear in both live-action and computer-animated movies, a wide variety of looks are possible, ranging from photorealistic to cartoony. In this section, some insight is given into what a digital hair/fur system should accomplish, how it is used in production, and what some of the issues and challenges are in modeling, animating, and rendering hair and fur.

Generating convincing digital hair/fur requires dedicated solutions to a number of problems. First, it is not feasible to independently model and animate all of the huge number of hair fibers. Real humans have between 100,000 and 150,000 hair strands, and a full fur coat on an animal can have millions of individual strands, often consisting of a dense undercoat and a layer of longer hair called guard hair. The solution in computer graphics is to only model and animate a relatively small number of control hairs (often also called guide or key hairs), and interpolate the final hair strands from these control hairs. The actual number of control hairs varies slightly depending on the system used but typically amounts to hundreds for a human head or thousands for a fully furred creature. Figure e7.1 shows the hair of two characters from The Polar Express (2004), a girl with approximately 120,000 hair strands and a wolf with more than 2.1 million hairs.

A second problem is that real hair interacts with light in many intricate ways. Special hair shaders or rendering methods utilizing opacity maps, deep shadow maps, or even ray tracing are therefore necessary to account for effects such as reflection, opacity, self-shadowing, or radiosity. Visual aliasing can also be an issue due to the thin structure of the hair.1

image

Figure e7.1 Examples of digital animal fur and human hair. (Image © 2004 Sony Pictures Imageworks Inc. All rights reserved.)
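
To give a rough sense of why dedicated hair shaders are needed, a widely cited strand shading model by Kajiya and Kay (1989; see the References) treats each hair as a thin cylinder and derives diffuse and specular terms from the hair tangent rather than from a surface normal. The following is a minimal sketch of that idea in Python; the vector inputs, colors, and specular exponent are illustrative assumptions, not parameters of any particular production shader, and real shaders add effects such as self-shadowing and multiple scattering.

```python
import numpy as np

def kajiya_kay_shading(tangent, to_light, to_eye, diffuse_col, spec_col, spec_exp=40.0):
    """Minimal Kajiya-Kay style strand shading (one light, no shadowing or scattering)."""
    t = tangent / np.linalg.norm(tangent)
    l = to_light / np.linalg.norm(to_light)
    e = to_eye / np.linalg.norm(to_eye)

    t_dot_l = np.clip(np.dot(t, l), -1.0, 1.0)
    t_dot_e = np.clip(np.dot(t, e), -1.0, 1.0)
    sin_tl = np.sqrt(1.0 - t_dot_l * t_dot_l)  # diffuse falls off with the angle to the fiber
    sin_te = np.sqrt(1.0 - t_dot_e * t_dot_e)

    diffuse = diffuse_col * sin_tl
    # Specular peaks when the light and eye directions mirror about the fiber axis.
    spec = spec_col * max(t_dot_l * t_dot_e + sin_tl * sin_te, 0.0) ** spec_exp
    return diffuse + spec

# Example: a strand along x, lit from above, viewed head-on (all values illustrative).
print(kajiya_kay_shading(np.array([1.0, 0.0, 0.0]),
                         np.array([0.0, 1.0, 0.0]),
                         np.array([0.0, 0.0, 1.0]),
                         diffuse_col=np.array([0.35, 0.25, 0.15]),
                         spec_col=np.array([0.2, 0.2, 0.2])))
```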

Another problem that can arise is the intersection of hairs with other objects, most commonly between the hairs and the underlying surface, such as clothes or skin. For practical purposes, intersections between neighboring hairs (hair/hair intersections) are usually ignored, because they do not result in noticeable visual artifacts.

In almost all cases other than for very short animal fur, hair is not static during a shot but moves and breaks up as a result of the motion of the underlying skin and muscles, as well as due to external influences, such as wind and water. Often, dynamic simulation techniques are applied to the control hairs in order to obtain a realistic and natural motion of the hair. In the case of fully furred creatures with more than 1000 control hairs, this approach can be computationally expensive, especially if collisions are also handled as an integral part of the simulation.

Finally, for short hair or fur, some special, purely rendering-based techniques have been introduced that create the illusion of hair while sidestepping the definition and calculation of explicit control and final hair geometry. These techniques address the anisotropic2 surface characteristics of a fur coat and can produce convincing short hair or fur in medium to distant shots, but they lack close-up hair detail and do not provide a means for animating individual hairs.3

The next subsection introduces a generic hair/fur pipeline and explains how a specific hairstyle for a character is typically achieved during look development and then applied during shots.

Hair Generation Process

Figure e7.2 shows a diagram of a basic hair generation pipeline. The input is the geometry of a static or animated character, and the output is the rendered hair for that character. The hair or fur is generated in a series of stages, from combing (modeling) and animating of the control hairs to interpolating (generating) and rendering of the final hairs or fur coat.

image

Figure e7.2 Pipeline diagram. (Image © 2009 Sony Pictures Imageworks Inc. All rights reserved.)

In a production environment, this process is often divided into look development and shot work. During look development, the hair is styled, and geometric and rendering parameters are dialed in to match the appearance the client has in mind as closely as possible. During shots, the main focus is hair animation.

Look Development

The hair team starts out with the static geometry of a character in a reference pose provided by the modeling department. One obvious potential problem to resolve immediately is to make sure that the hair system used can handle the format of the geometric model, which is most likely a polygonal mesh, a subdivision surface, or a set of connected NURBS4 patches.

With the character geometry available, placing and combing control hairs are the next steps. Control hairs are usually modeled as parametric curves such as NURBS curves. Some systems require a control hair on every vertex of the mesh defining the character, whereas others allow the user to freely place them anywhere on the character. The latter can reduce the overall number required since it makes the number of control hairs independent of the model: Larger numbers of hairs need only be placed in areas of the body where detailed control and shaping are needed (such as the face). Keeping the number of control hairs low makes the whole hair generation process more efficient and interactive, because fewer hairs need to be combed and used in the calculations of the final hairs. Figure e7.3 illustrates a set of uncombed and combed control hairs from Stuart Little (1999). One such control hair with four control points is enlarged in the lower right corners.

The number of control points defining a control hair dictates how much detail is possible in the shape of the hair. The minimum number of control points is four for a cubic curve, and that is sometimes sufficient for short animal fur. Longer or wavy hair often requires more control points, and for complex shapes, such as braids, it can be as high as 50. However, similar to the number of control hairs, the number of control points per hair also has an effect on performance.

image

Figure e7.3 Left: Uncombed/combed control hairs. Right: Rendered final hairs. (Image © 1999 Sony Pictures Imageworks Inc. All Rights Reserved.)
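
To make the role of control points concrete, the sketch below evaluates a smooth strand from four control points using a uniform cubic B-spline segment, which is one common parametric representation (the text mentions NURBS curves); the specific basis and sampling here are illustrative assumptions rather than the behavior of any particular hair system.

```python
import numpy as np

def cubic_bspline_point(p0, p1, p2, p3, t):
    """Evaluate one uniform cubic B-spline segment at parameter t in [0, 1]."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6.0
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    b3 = t ** 3 / 6.0
    return b0 * p0 + b1 * p1 + b2 * p2 + b3 * p3

# Four control points (the minimum for a cubic strand), sampled densely for display.
cvs = [np.array(p, dtype=float) for p in [(0, 0, 0), (0, 1, 0.1), (0.1, 2, 0.3), (0.3, 2.8, 0.6)]]
strand = [cubic_bspline_point(*cvs, t) for t in np.linspace(0.0, 1.0, 20)]
```

More control points simply add more segments, which is why wavy hair or braids need far more than four.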

When control hairs are initially placed on a character, they point in a default direction, usually along the normal of the surface at the hair follicle locations. Combing is the process of shaping the control hairs to match a desired hairstyle. This can be one of the most time-consuming stages in the whole hair generation process. Having a set of powerful combing tools is essential here.5 For short hair or fur, painting tools are often utilized to interactively brush certain hair shape parameters, such as hair direction or bend over the character’s surface.

Interpolation is the process of generating the final hairs from the control hairs. One or more control hairs influence each final hair. The exact interpolation algorithm depends on the hair system used, but the most popular approaches are barycentric interpolation, where the shape of a final hair is determined by its surrounding control hairs (which requires either a triangulation of the control hairs or one control hair per vertex of the mesh), and weighted falloff based on the distance of a final hair to its closest control hairs. The latter is illustrated in Figure e7.4, where the final hairs are in orange, the circles show the falloff regions of each control hair, and the blue control hair is moved.
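
A minimal sketch of the distance-based variant might look like the following: each final hair blends the shapes of nearby control hairs, weighted by a smooth falloff of the distance between follicle positions. The falloff function, radius, and data layout are illustrative assumptions; production systems differ in the details.

```python
import numpy as np

def interpolate_final_hair(final_root, control_roots, control_shapes, radius):
    """Blend control-hair shapes into one final hair using distance-weighted falloff.

    final_root:     (3,) follicle position of the final hair on the skin.
    control_roots:  (N, 3) follicle positions of the control hairs.
    control_shapes: (N, P, 3) control points of each control hair, stored relative
                    to its own root so the shapes can be reused anywhere.
    radius:         falloff radius; control hairs farther away have no influence.
    """
    d = np.linalg.norm(control_roots - final_root, axis=1)
    w = np.clip(1.0 - d / radius, 0.0, 1.0) ** 2        # simple smooth falloff (assumed)
    if w.sum() == 0.0:                                  # nothing in range: use the nearest
        w[np.argmin(d)] = 1.0
    w /= w.sum()
    blended = np.tensordot(w, control_shapes, axes=1)   # (P, 3) blended offsets
    return final_root + blended                         # final hair points in world space

# Example: one final hair rooted between a straight control hair and a leaning one.
roots = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
shapes = np.array([[[0.0, 0.5, 0.0], [0.0, 1.0, 0.0]],   # straight up
                   [[0.3, 0.4, 0.0], [0.6, 0.8, 0.0]]])  # leaning over
print(interpolate_final_hair(np.array([0.4, 0.0, 0.0]), roots, shapes, radius=1.5))
```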

The interpolation step requires an overall density value to be specified that translates into the actual number of final hairs generated. Frequently, feature maps (as textures) are also painted over the body to more precisely specify what the hair will look like on various parts of the body. For example, a density map can provide fine control over how much hair is placed in different parts of the body. In the extreme case, where the density map is black, no final hairs will appear; where it is white, the full user-defined density value applies. Maps can be applied to any other hair parameter as well, such as length or width.
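
As a sketch of how a painted density map might drive this step, the function below scales a user-defined hairs-per-unit-area value by the map value sampled for a patch of skin: black (0) yields no hairs, white (1) the full density. The map representation, the per-patch sampling, and the probabilistic rounding are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def hairs_for_patch(area, base_density, density_map_value):
    """Hair count for one skin patch, modulated by a painted density map value in [0, 1]."""
    expected = area * base_density * density_map_value
    count = int(expected)
    if rng.random() < expected - count:   # probabilistically round up the fractional part
        count += 1
    return count

# Example: the same patch painted black, mid-gray, and white.
for map_value in (0.0, 0.5, 1.0):
    print(map_value, hairs_for_patch(area=2.0, base_density=300.0, density_map_value=map_value))
```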

It is also helpful to see the final hairs while combing the control hairs, and most systems provide a lower quality render of the final hairs at interactive rates. This is shown in Figure e7.5, an early comb of Doc Ock in Spider-Man 2 (2004), where the control hairs are in green and only 15% of the final hairs are drawn to maintain real-time combing.

The hair/fur is then rendered for client feedback or approval, usually on a turntable, with a traditional three-point lighting setup to bring out realistic shadows and depth. During this phase, a Lighter typically tweaks many of the parameters of a dedicated hair shader, such as color and opacity, until the hair looks right.

image

Figure e7.4 Final hairs interpolated from control hairs. (Image © 2007 Sony Pictures Imageworks Inc. All rights reserved.)

image

Figure e7.5 Control and final hairs during combing. (Image © 2004 Sony Pictures Imageworks Inc. All rights reserved.)

Look development is an iterative process in which the hairstyle and appearance are refined step by step until they are satisfactory and match what the client wants. Once approved, shot work can start and a whole new set of challenges arises with respect to hair when the character is animated.

Shot Work

Animators don’t usually interact with the hair directly. When they start working on a shot that includes a furry or haired character, the character does not have any hair applied yet. It is therefore helpful to give them some sort of visual reference of where the hair would be, so they can animate as if the hair were present. This can be a general outline of how far out the hair extends from the body surface, such as a transparent hair volume, or a fast preview of what the final hair will look like. This is important because it limits the number of corrections that need to be made once hair is applied. Figure e7.6 shows an example that illustrates a hair volume using Boog from Open Season (2006).

image

Figure e7.6 Boog with hair volume. (Image © 2006 Sony Pictures Animation Inc. All rights reserved.)

As described earlier, the hair appearance is decided in the look development phase. This includes where all the hairs are on the body and their shape and orientation. When an animator creates a performance for a character in a shot, the hair follows along with the body and remains stuck to the body surface in the same positions it had in the reference pose. This provides what is known as static hair; even though the hair follows along with the body, the hair itself is not animated. If all that is needed is motionless hair on an animated body, then hair work for the shot is done.
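
One way to picture static hair is that each control hair is stored in the local frame of its follicle in the reference pose and is simply re-expressed in that follicle’s animated frame on every frame, with no simulation involved. The sketch below assumes each follicle provides a 4 × 4 local-to-world transform per frame; that interface is an illustrative assumption.

```python
import numpy as np

def transform_static_hair(cv_local, follicle_matrix):
    """Re-express a control hair, stored in its follicle's reference-pose frame,
    in the follicle's current animated frame (rigid 'static' hair, no dynamics).

    cv_local:        (P, 3) control points in follicle-local space.
    follicle_matrix: (4, 4) follicle local-to-world transform for the current frame.
    """
    homogeneous = np.hstack([cv_local, np.ones((cv_local.shape[0], 1))])  # (P, 4)
    return (homogeneous @ follicle_matrix.T)[:, :3]                       # world-space points

# Example: a short upright hair on a follicle that has rotated 90 degrees about Z and moved.
cvs = np.array([[0.0, 0.0, 0.0], [0.0, 0.5, 0.0], [0.0, 1.0, 0.0]])
frame = np.array([[0.0, -1.0, 0.0, 2.0],
                  [1.0,  0.0, 0.0, 0.0],
                  [0.0,  0.0, 1.0, 0.0],
                  [0.0,  0.0, 0.0, 1.0]])
print(transform_static_hair(cvs, frame))
```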

Usually, hair motion and dynamics are an important part of making a shot look realistic and can contribute to the character’s performance. Hair motion can be:

•   Caused by the character’s animation: If the character folds an arm, the hair should squash together instead of going through the arm.

•   Caused by interaction with other objects: If the character is holding something, walking on the ground, or hitting an object, in all cases the hair should be properly pushed away by the object instead of going through it.

•   Full dynamics simulation: Short hair may have a bouncy motion (and take a few frames to settle), whereas long hair will flow with the movement and trail behind.

These situations are usually handled by applying some sort of dynamics solver to the control hairs, so that they react in a physically realistic way, while moving through space or colliding with objects. The final rendered hairs then inherit all of the motion applied to the control hairs.

Third-party hair software packages provide dynamics systems, but some facilities also have proprietary solvers. There are many published methods for simulating hair dynamics (superhelix, articulated rigid-body chains), but the most widely used are mass-spring systems, much like in cloth simulation. Whereas these methods provide solutions for physically correct motion, animating hair for motion pictures also has a creative or artistic component: The hair needs to move according to the vision of the movie director. Therefore, the results of dynamic hair simulations are often blended with other solutions (hand-animated hair, hair rigs, static hair) to achieve the final motion.
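
A heavily simplified mass-spring control hair is sketched below: each control point is a particle, consecutive particles are connected by stretch springs, the root is pinned to the animated skin, and the state is advanced with a semi-implicit Euler step. Production solvers add bending and torsion resistance, better damping models, collisions, and more robust integration; all constants here are illustrative.

```python
import numpy as np

GRAVITY = np.array([0.0, -9.8, 0.0])

def step_hair(positions, velocities, rest_lengths, root,
              stiffness=200.0, damping=2.0, mass=0.01, dt=1.0 / 192.0):
    """Advance one control hair by one substep with a simple mass-spring model.

    positions, velocities: (P, 3) particle state; positions[0] is pinned to `root`.
    rest_lengths:          (P-1,) rest length of each segment spring.
    """
    forces = np.tile(mass * GRAVITY, (len(positions), 1)) - damping * velocities
    for i in range(len(positions) - 1):
        seg = positions[i + 1] - positions[i]
        length = np.linalg.norm(seg)
        if length > 1e-9:
            f = stiffness * (length - rest_lengths[i]) * (seg / length)
            forces[i] += f          # the spring pulls both ends toward its rest length
            forces[i + 1] -= f
    velocities = velocities + (forces / mass) * dt
    positions = positions + velocities * dt
    positions[0] = root             # re-pin the follicle end to the animated skin
    velocities[0] = 0.0
    return positions, velocities

# Example: let a three-point hair sag under gravity for a quarter of a second.
pos = np.array([[0.0, 0.0, 0.0], [0.0, 0.5, 0.0], [0.0, 1.0, 0.0]])
vel = np.zeros_like(pos)
rest = np.linalg.norm(np.diff(pos, axis=0), axis=1)
for _ in range(48):
    pos, vel = step_hair(pos, vel, rest, root=np.array([0.0, 0.0, 0.0]))
print(pos)
```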

General Issues and Solutions

Combing

As mentioned earlier, combing can be an elaborate task during look development, especially for long human hair and complex hairstyles. It is therefore crucial for a hair artist to have access to a set of flexible combing tools. Even for a commercially available system like Maya Hair, additional custom tools can be developed to facilitate the shaping of control hairs. For example, to make it easy to create curls or to form clumps between neighboring hairs, as shown in Figure e7.7, left, a clumping tool lets the user select a group of control hairs and then precisely shape the desired clump profile to specify how those hairs should bunch together.

image

Figure e7.7 Left: Clumping tool with a user-customizable profile. Right: Combing braids (from left to right: three braided cylinders, control hairs generated by tool, final rendered hairs). (Image © 2007 Sony Pictures Imageworks Inc. All rights reserved.)
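
Conceptually, a clumping operation of this kind can be thought of as pulling each selected hair toward a shared clump-center curve, with the pull strength varying along the length of the hair according to the user-drawn profile. The sketch below is a minimal expression of that idea; the profile representation and parameters are illustrative and do not describe the actual tool.

```python
import numpy as np

def clump_hairs(hairs, clump_center, profile):
    """Pull a group of hair curves toward a shared clump-center curve.

    hairs:        (N, P, 3) control points of the selected hairs.
    clump_center: (P, 3) curve the hairs bunch toward (e.g., their average).
    profile:      (P,) blend weight per point along the hair; 0 leaves the point
                  untouched, 1 places it on the clump center (the user-shaped profile).
    """
    w = profile[np.newaxis, :, np.newaxis]              # broadcast over hairs and xyz
    return (1.0 - w) * hairs + w * clump_center[np.newaxis]

# Example: tips bunch together while the roots stay put.
hairs = np.array([[[0.0, 0, 0], [0.0, 1, 0], [0.0, 2, 0]],
                  [[1.0, 0, 0], [1.0, 1, 0], [1.0, 2, 0]]])
center = hairs.mean(axis=0)
profile = np.array([0.0, 0.4, 1.0])                     # stronger clumping toward the tip
print(clump_hairs(hairs, center, profile))
```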

A fill volume tool can quickly fill an enclosed surface with randomly placed control hairs. The hair volumes created by modelers to describe the rough hair look of a character can thus be used to actually generate the control hairs needed by the hair artist. An example is shown in Figure e7.7, right, where braids were simply modeled as cylindrical surfaces by a modeler and then filled with hair. Figure e7.8 illustrates the intricate human hairstyles of two characters from Beowulf (2007).

Render Times and Optimizations

One difference between short hair/animal fur and long humanlike hair is the combing needs; another is render times and memory requirements. Whereas a frame of high-quality human head hair can be rendered in anywhere from a few minutes to under an hour, depending on hair length and image resolution, rendering the millions of individual hair strands of a fully furred creature can take several hours per frame and use several gigabytes of computer memory, especially with motion blur and when close up to the camera.

image

Figure e7.8 Final comb examples, illustrating the intricate human hairstyles of two digital characters from Beowulf (2007). (Image courtesy of Sony Pictures Imageworks Inc. and Paramount Pictures. BEOWULF © Shangri-La Entertainment, LLC, and Paramount Pictures Corporation. Licensed by Warner Bros. Entertainment Inc. All rights reserved.)

Optimizations applied at render time can therefore be very effective. One example is view frustum culling, in which hair outside the current view is neither generated nor rendered, which is very useful if most of the character is off screen. If the hair follicle location at the base of the hair on the skin is used to decide whether or not to cull the hair, a safety margin needs to be added, because (long) hairs with follicles outside the view frustum may still have their tips visible in the frame. The same holds true for backface culling methods, which do not render hair on parts of the character facing away from the camera.
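
A simplified version of follicle-based frustum culling with such a margin is sketched below: a follicle is kept if it lies inside the camera frustum planes pushed outward by the maximum hair length, so long strands rooted just off screen are still generated. The plane representation and the margin rule are assumptions made for illustration.

```python
import numpy as np

def keep_follicle(follicle, frustum_planes, max_hair_length):
    """Decide whether to generate hair at a follicle, with a hair-length safety margin.

    frustum_planes: list of (normal, distance) pairs with inward-pointing normals,
                    so a point p is inside a plane when dot(normal, p) + distance >= 0.
    Each plane is effectively pushed outward by the longest possible hair, so strands
    rooted just outside the frame can still reach into it.
    """
    for normal, distance in frustum_planes:
        if np.dot(normal, follicle) + distance < -max_hair_length:
            return False
    return True

# Example: a single clipping plane x >= 0 and hairs up to 0.3 units long.
planes = [(np.array([1.0, 0.0, 0.0]), 0.0)]
print(keep_follicle(np.array([-0.2, 0.0, 1.0]), planes, max_hair_length=0.3))  # kept
print(keep_follicle(np.array([-0.5, 0.0, 1.0]), planes, max_hair_length=0.3))  # culled
```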

Another optimization technique is level of detail (LOD) applied to the hair, where different preset densities of hairs are generated and rendered for a character depending on the distance from camera. Varying hair opacity to fade out hair strands as they move between levels helps to avoid flickering. Some studios have also developed more continuous techniques, which smoothly cull individual hair strands based on their size and speed. An example is given in Figure e7.9 from Surf’s Up (2007), where Cody moves quickly from back to front and left to right with motion blur during a shot. The normal hair count for Cody was 1.5 million hairs, which took about 21 minutes to render in the top frame and 48 minutes in the bottom frame. With optimization turned on, the same two frames took only 2.5 and 7.5 minutes to render, respectively, at half the memory (about 500 MB).
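
The sketch below illustrates one way such a continuous, distance-based LOD could be expressed: each strand gets a stable random rank, the fraction of strands kept falls off with camera distance, and strands close to the cutoff are faded by lowering their opacity instead of popping off. The breakpoints and fade rule are illustrative assumptions, not any studio’s actual scheme.

```python
import numpy as np

def strand_lod(strand_rank, distance, near=5.0, far=50.0, min_fraction=0.1, fade_band=0.05):
    """Return (keep, opacity) for one strand under a distance-based LOD scheme.

    strand_rank: stable random value in [0, 1) assigned to the strand once; strands
                 whose rank exceeds the current keep-fraction are culled.
    Strands just below the threshold are faded out by reducing opacity, which hides
    the popping as the camera distance (and thus the threshold) changes.
    """
    t = np.clip((distance - near) / (far - near), 0.0, 1.0)
    keep_fraction = 1.0 - t * (1.0 - min_fraction)
    if strand_rank >= keep_fraction:
        return False, 0.0
    fade = np.clip((keep_fraction - strand_rank) / fade_band, 0.0, 1.0)
    return True, float(fade)

# Example: the same strand at increasing camera distances.
for d in (2.0, 20.0, 45.0, 80.0):
    print(d, strand_lod(strand_rank=0.68, distance=d))
```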

A feature of many hair systems is the ability to have multiple hair layers. Examples are an undercoat of dense, short hair and an overcoat of longer, sparser hair for an animal fur coat, or more general base, stray, and fuzz layers. Each layer can have its own set of control hairs or share them but apply different stylistic parameters. Using layers can ease and speed up the development of a complex hair or fur style by breaking it up into simpler components, which are then combined at render time.
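
In practice such layers often amount to separate parameter sets evaluated over the same character; a hypothetical description of an animal coat might look like the following, where every name and value is invented purely for illustration.

```python
# Hypothetical layer definitions for a fur coat; all names and values are illustrative.
fur_layers = [
    {"name": "undercoat",  "density": 4000, "length": 0.4, "width": 0.02,
     "clump": 0.1, "share_control_hairs": True},
    {"name": "guard_hair", "density": 600,  "length": 1.2, "width": 0.04,
     "clump": 0.3, "share_control_hairs": True},
    {"name": "fuzz",       "density": 300,  "length": 0.2, "width": 0.01,
     "clump": 0.0, "share_control_hairs": False},   # uses its own sparse control hairs
]
```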

image

Figure e7.9 Two frames rendered with optimization from SURF’s UP (2007). (Image © 2007 Sony Pictures Animation Inc. All rights reserved.)

Procedural Hair Effects

After or during the interpolation step (in Figure e7.2), when the shapes of the final hairs are calculated from the control hairs, special procedural techniques can be applied so that each final hair looks unique in one way or another. Examples are a wave effect to apply possibly random waviness to the strands, a wind effect to blow wind through the hairs, or clumping, where neighboring final hairs are grouped together in clusters. Feature maps can also be painted on the model to vary these effects over the body. The power of these effects lies in the fact that they are independent of the control hairs and provide a finer granularity than the control hairs ever could. An example is shown in Figure e7.10 for a wave and wind effect, which changes the final hairs without changing the control hairs (shown in green).

image

Figure e7.10 Top: No effect. Middle: Wave. Bottom: Wind. (Image © 2007 Sony Pictures Imageworks Inc. All rights reserved.)
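
The sketch below shows one way a per-hair wave effect of this kind could be layered onto the interpolated shape: a sinusoidal offset, with a random phase per hair and an amplitude that grows from root to tip, is added to each final hair after interpolation, leaving the control hairs untouched. The function and its parameters are illustrative assumptions.

```python
import numpy as np

def apply_wave(final_hair, hair_id, amplitude=0.05, frequency=3.0):
    """Add a procedural wave to one interpolated final hair.

    final_hair: (P, 3) points from root (index 0) to tip.
    hair_id:    integer used to seed a per-hair random phase, so every final hair
                waves slightly differently without touching the control hairs.
    """
    rng = np.random.default_rng(hair_id)
    phase = rng.uniform(0.0, 2.0 * np.pi)
    s = np.linspace(0.0, 1.0, final_hair.shape[0])      # 0 at the root, 1 at the tip
    offset = amplitude * s * np.sin(frequency * 2.0 * np.pi * s + phase)
    waved = final_hair.copy()
    waved[:, 0] += offset                               # wave along one axis for simplicity
    return waved

# Example: the same straight hair waved with two different per-hair phases.
straight = np.column_stack([np.zeros(8), np.linspace(0.0, 1.0, 8), np.zeros(8)])
print(apply_wave(straight, hair_id=1)[:, 0])
print(apply_wave(straight, hair_id=2)[:, 0])
```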

Time and Cost: What the Moviemaker Needs to Know

Generating digital hair is both complex and time consuming. It is important to understand what aspects of hair combing and animation can potentially drive up the cost of a character or a shot. The level of quality needed for a project will greatly influence the cost: It is quite easy to get 85% of the way to the desired look, but the last 15% is likely to take exponentially longer.

What Makes a Hairstyle Expensive?

Depending on the level of detail and refinement that is needed for a particular character, hair look development can take anywhere from a few days to a few months. This includes not only making sure the hair appearance satisfies the client but also anticipating what the interactions and the movement of the hair will be in a shot. The most obvious cost is how much time is spent by the artist getting the hair to look as expected, but external factors often come into play. Here are a few things to keep in mind:

•   Decide what the character should look like before starting to comb CG hair. This should be done through reference photography or artwork and be approved by the client. Hair photos for live-action movies, detailed art sketches, or maquettes with “hair engravings” for CG features are all good candidates.

image

Figure e7.11 Stepwise refinement of Boog before hair look development from OPEN SEASON (2006). (Image © 2006 Sony Pictures Animation Inc. All rights reserved.)

•   Some iterations will be needed once the digital hair of a character is seen for the first time, but these should only be for corrective tweaks, not major artistic direction. Going back and forth over sketches or photos beforehand will be orders of magnitude cheaper and faster than combing and rendering fully furred characters, only to see that work thrown away. This is why creating photoreal digital doubles of actors is often very fast, regardless of the complexity of the hairstyle, because a strict reference must be matched and very little flexibility given to supervisors or clients to stray and experiment during the combing stage.

•   Make sure that as little as possible is changing beneath the hair artist’s feet. This includes the model that the hair is grown on: Modifying it may cause parts of the hair to be recombed or feature textures to be redone.

image

Figure e7.12 Left: Reference photography. Right: Digital double. (Image © 2007 Sony Pictures Imageworks Inc. All rights reserved.)

•   Regarding the model, it is critical to properly account for hair volume when modeling fully furred creatures. Early in production on Stuart Little (1999) the artists had little experience with how the fur coat would change the appearance of the character. It took several cycles back and forth between the hair team and the modeling department and eventually resulted in a considerably skinnier model, which had about the same volumetric appearance with the fur as the initial model without fur. If the extra volume a fur layer adds can be anticipated during modeling, money is saved, because every time the model changes, the hair is potentially affected to the point at which some or most of the work has to be redone. The only consistently reliable solution is to always create an anatomically correct furless model. This may look strange but will produce the desired result when fur is added and will behave much more realistically when in motion.

•   Be mindful of aspects that will increase the render times, such as the number of hairs and their transparency: The longer the render time, the less time an artist has to address comments in between renders. Try to keep hair counts as low as possible.

What Interactions/Shot Situations Are Problematic (and Drive Up the Cost)?

A hairstyle often consists of millions of final hair strands that an artist can’t control directly because they are all generated procedurally at interpolation time. This means any detailed interaction with the final hairs in a shot can spell trouble. If anything looks incorrect, the artist can’t go in and manually adjust the final hairs to look right. What may appear to be minor adjustments to a shot can mean days of extra work in the hair department. This can include characters interacting with the hair (a hand touching or stroking fur) or thin objects that move through the hair (like backpack straps or accessories that slide on a furry character’s body, as shown in Figure e7.13).

image

Figure e7.13 G-FORCE (2009) characters equipped with accessories. (Image Courtesy of Sony Pictures Imageworks, Inc. © 2009 Disney Enterprises, Inc. All Rights Reserved)

Transitioning between hairstyles within the same shot can also be an issue. Depending on the hair system used, it may not always be easy or possible to blend from one hair look to another if they are very different. A typical example is going from dry to wet hair, or the reverse, which can happen when a creature gets splashed, falls into water, and then shakes itself to dry off. Figure e7.14 shows a digital beaver from The Chronicles of Narnia: The Lion, the Witch and the Wardrobe (2005).

In some of these problematic cases, one can get away with imperfections simply because the character is moving fast and motion blur will hide things. It is critical to be aware of these situations from the start, during the look development phase: If one knows ahead of time which situations a character will encounter, a hair artist can better strategize how to comb the character. For instance, more control hairs might be placed in areas that require detailed interactions. Control hairs can always be added later for a specific shot, but they may change the overall look of the character approved by the client, requiring extra time to be spent just getting back to the original look.

image

Figure e7.14 Wet beaver transitioning to dry. (Image from THE CHRONICLES OF NARNIA: THE LION, THE WITCH AND THE WARDROBE. © 2005 Disney Enterprises, Inc., and Walden Media, LLC. All rights reserved.)

Summary

This section has explained the process and pipeline for generating digital hair for a motion picture, along with some of the issues that can arise and possible solutions to them.

Several commercial hair software packages are available, including Softimage XSI Hair and Maya Hair6 as well as Shave and a Haircut.7 Many effects and animation studios also have their own proprietary hair pipelines (see Bruderlin (2004) for an example), which makes it possible for these companies to quickly change or add to the functionality of their systems when a new movie requires it.

For a general technical survey of the various approaches and problems related to digital hair modeling, animation, and rendering, see the article by Ward et al. (2007).

For more technical detail on issues and solutions for shading hair to make it look like real hair, see the articles by Kajiya and Kay (1989) and Marschner et al. (2003). Examples of generating hair without explicit geometry include Jim Kajiya’s volumetric texture maps called texels (Kajiya and Kay, 1989), Dan Goldman’s fake fur probabilistic lighting model (Goldman, 1997), and Ken Perlin’s procedural texture approach referred to as hypertextures (Perlin, 1989).

References

Bruderlin, A. (2004). Production hair/fur pipeline at Imageworks, in Photorealistic hair modeling, animation, and rendering, ACM SIGGRAPH Course No. 9. Computer Graphics.

Goldman, D. B. (1997). Fake fur rendering, ACM SIGGRAPH Proceedings. Computer Graphics, 127–134.

Kajiya, J. T., & Kay, T. L. (1989). Rendering fur with three dimensional textures, ACM SIGGRAPH Proceedings. Computer Graphics, 23(3), 271–280.

Marschner, S. R., Jensen, H. W., Cammarano, M., Worley, S., & Hanrahan, P. (2003). Light scattering from human hair fibers. ACM Transactions on Graphics (TOG), 22(3), 780–791.

Perlin, K. (1989). Hypertextures (ACM SIGGRAPH Proceedings). Computer Graphics, 23(3), 253–262.

Ward, K., Bertails, F., Kim, T., Marschner, S. R., Cani, M., & Lin, M. C. (2007). A survey on hair modeling: Styling, simulation, and rendering. IEEE Transactions on Visualization and Computer Graphics, 13(2), 213–234.

1 This is especially true when applying regular shadow maps instead of deep shadow maps and when using ray-trace renderers.

2 An anisotropic surface changes in appearance as it is rotated about its geometric normal. An example is brushed aluminum.

3 See Goldman (1997), Kajiya and Kay (1989), and Perlin (1989) in the References at the end of this section for more detail.

4 Non-Uniform Rational B-Spline.

5 For examples and further discussion, see the General Issues and Solutions subsection below.

6 www.autodesk.com.

7 www.joealter.com.
