CHAPTER

11   Graphics, Animation, and Special Effects

•  What are the aesthetic choices of graphics?

•  What are the principles of graphics?

•  How is animation defined?

•  What are the types of animation?

•  What are graphic applications in digital productions?

•  How do digital effects differ from optical or physical effects?

Introduction

Graphic functions can be divided into two categories: digitally created and physically created. In the studio, this includes the creation of scenery, props, and backgrounds. Graphics focus on the arrangement of letters, symbols, and visuals within the frame and on the creation of complete backgrounds or virtual settings. All aspects of graphic design must be coordinated with one another to effect a consistent and unified approach to every element that appears within the frame.

Graphic design should establish the time, place, and mood; reflect character; and reinforce specific themes. A historical time period and setting must be easily identifiable. Titles may denote a specific time and place at the same time that they reflect a specific style or mood. The mood or atmosphere results primarily from the abstract, emotional aspects of design elements and principles. Specific colors and shapes create an emotional mood that can reveal character and reinforce themes. The idea that you can tell a great deal about people from where they live and what they wear can be applied to scenic design. Cold, formal graphics reveal a great deal about a character, as do warm, relaxed ones. The opening titles cue the audience to the mood, genre, and often the time and location of the production.

Animation and special effects generate visual interest and can be used to create imaginative worlds that defy the physical laws of space and time. Animation simulates movement, allowing objects and characters to inhabit a unique world and to perform or record unbelievable actions that would be impossible in real life. Special effects generate interest and excitement, often allowing futuristic or historical worlds to come to life, dangerous actions and events to be simulated, and live-action characters to accomplish superhuman feats. Digital animation techniques now replace many physical special effects to create realistic-appearing scenes in film and video productions that could not be accomplished in any other manner. The same techniques allow corrections to be made in postproduction to save time and the extra expenses of having to reshoot mistakes made in original shots or to remove unneeded objects. Animation on the World Wide Web (WWW) has grown exponentially in the past decade.

AESTHETICS OF GRAPHICS AND ANIMATION

Realist Graphics

A realist graphic gives the appearance of an existing object, location, character, or background. Some three-dimensional animation used in commercials and settings cannot be distinguished from the real object by a casual viewing on whatever media it is recorded. The illusion of reality, not actual reality, is the critical point of a realist graphic. Virtual characters, known as avatars, duplicate as closely as possible the shape, color, and movement of a human. If the avatar suddenly becomes something other than its original shape, that design might then become a modernist or postmodernist graphic because its depiction is far removed from any form of a realistic figure or object. Classifying graphics and any media art form within tight definitions of realism, modernism, or postmodernism may be difficult simply because an art form may easily transform or change its shape and depiction as part of its character or purpose within the plot. Both forms are classifiable: the original before it changes shape, and the final once the change is complete. Realist graphics are rarely defined by their supposed fidelity to nature or reality alone. Almost every realist graphic has an emotional impact and some degree of subjective stylization. A realistic setting, title, or animation should convey a psychological impression that reinforces the central theme of a drama or the central message of an informational program. It can reflect warmth or coldness, tension or relaxation, simply by virtue of the colors, lines, and shapes it presents. It is even possible for a realistic setting to reveal a specific character’s emotional state through the feelings that the design conveys.

Modernist Graphics

Modernist graphics are much more abstract than realist designs. The subjective feelings they arouse and the subjective impressions they convey are rarely tied to actual experience or production efficiency alone. Modernist artists usually have much freer rein to explore specific design elements or subjective impressions for their own sake. A designer may decide to call attention to textures, shapes, lines, and colors themselves. Visual innovations often stem from such formative experiments, which can be incorporated into more conventional narrative, documentary, or instructional programs. Experimental productions by many computer and animation artists have shown how a formative or modernist approach to graphics can destroy any sense of reality by ignoring spatial perspective and using highly artificial, stylized designs and formats.

Postmodernist Graphics

Postmodernist designs leave much of the visual perception to the imagination of the viewer. Graphics, color, and movement can be juxtaposed in a series of apparently unrelated images. Postmodernist designs often mix a variety of design styles drawn from different genres and historical periods. For example, the settings in the film Who Framed Roger Rabbit (1988) suggested 1940s Los Angeles in a semirealist way until the timeless, garish cartoon world of Toontown collided with the live-action world. The production design in Chinatown (1974) limited the color blue to appearances of the main theme, water, in 1930s Los Angeles, whereas pastel colors and art nouveau designs were reminiscent of the 1930s in the 1980s urban setting of the television series Miami Vice. Postmodernist designs sometimes appeal to the emotions and often are difficult to analyze or categorize, just as postmodernist paintings and writings are difficult to place in traditional categories. The distorted shapes, textures, and colors of the objects and characters in Tim Burton’s Corpse Bride illustrate how mixing unusual colors, shapes, and type styles, and distorting figures, can combine many different artistic styles and serve as the basis for postmodernist designs.

PRINCIPLES OF GRAPHICS

A graphic artist or animator works with three basic principles of design: graphic elements, color, and composition. The ways in which these elements are selected and combined determine the nature and success of the design. The selection of design elements must support the themes, plots, and characterizations of a drama or the central message of a nonfiction production. These principles are the same as those of any designer using artistic tools, such as set designers and lighting designers, as described in Chapter 7, Lighting and Design.

Design Elements

The elements a graphic artist uses are line, shape, texture, and perceived movement.

Line

Line defines the form of a graphic design. An independent line can be straight, curved, or spiral. Edges are lines formed by shapes or objects that overlap each other, such as a foreground door and background wall. Lines can be repeated to create parallel lines or concentric circles. They create a path or direction of movement for the eye. Converging parallel lines create an illusion of depth or spatial perspective, for example. Straight lines are more dynamic than curved lines and circles. They create a strong sense of directional movement. Smooth curves and circles communicate a smoother, softer feeling of more gradual movement. Norman McLaren’s drawing directly on film in Hen Hop, as well as his drawing directly on the soundtrack with straight lines for Blinkity Blank, illustrates an extreme use of line, both in graphic form and in the artistic use of line in a soundtrack.

Shape

A combination of lines creates a shape. An infinite number of different shapes reflect specific objects, but some common, recurring shapes that graphic artists and others use are circles, squares, rectangles, triangles, ellipses, trapezoids, octagons, and hexagons. Shapes can carry symbolic meaning. They can be repeatedly used in conjunction with specific people or settings to evoke specific themes. In the film industry, almost all classic animators used basic shapes to simplify their work. Three-dimensional computer animation also relies on basic shapes to create all of the characters and objects. Simple shapes, as used in the award-winning Pixar films Luxo Jr. and Tin Toy, are classic examples of using basic design elements to simplify and yet reinforce specific themes.

Texture

Texture provides a tactile impression of form; tactile refers to the sense of texture, how something feels by touch. Texture can be real or represented. Real textures are revealed by directional light, which creates shadows and modeling on a non-smooth surface. Artists can represent textures, such as granite, marble, or wood grains, on a flat two-dimensional surface by creating a tactile impression. The drawn texture of a surface can create a perception of depth. A rough texture with heavy shadows provides a greater sensation of depth than a smooth, flat surface. A graphic of heavily draped or folded material creates a richness that relates to a feeling of opulence, splendor, or decadence. Texture, like shape, can create a sense of space that affects our emotions and relates symbolically to the major themes of a story. A weathercaster stands in front of a green flat, but the graphic artist, with the help of computers, creates the appearance of countryside in depth, with the textures of mountains, water, and cities behind the forecaster showing the weather pattern and the movement of rain and snow, even though the visual is two-dimensional.

Perceived Movement

Movement can be real or imaginary. The movement of performers on a set indicates real movement, whereas the illusion of movement stimulated by a series of still drawings or stationary backgrounds appears imaginary. In design, imaginary movement is just as important as real movement. The illusion of movement can be enhanced by the use of forced perspective lines drawn on the floor of the studio, for example. Transference can take place between real or imagined movement and otherwise stationary objects. A simple figure placed against a pulsating background will itself appear to dance or vibrate. A moving background can transfer the illusion of movement to a stationary figure placed in front of it. Movement throughout a stationary image is carefully controlled through changes in color, shape, space, and direction that guide the eye through a design. Movement also may be created by placing objects or characters higher or lower, or closer or farther away from the camera, in the frame, which creates a perceived Z-axis in a two-dimensional frame. Hitchcock was exceptionally adept at framing subjects in the frame to create movement without actual movement.

Color

The three aspects of color that are of primary importance to a designer are color harmony, color contrast, and the emotional or symbolic (cultural) effect of color.

Color Harmony

Various relationships between color pigments on a two-dimensional color wheel in large part determine the degree to which specific colors will harmonize with each other. A two-dimensional color wheel is a series of different colored chips or samples arranged in a circle from colors that are cool (short wavelengths of light), such as violet, blue, and green, to colors that are warm (long wavelengths of light), such as yellow, red, and orange. Traditional color judgments indicate that colors distant from each other on the wheel harmonize better than close colors, which tend to clash with each other. Several harmonious colors for sets and costumes can be selected by laying an equilateral triangle or square on top of a color wheel using the colors at the points. As the triangle or square is rotated, the group of harmonious colors changes.
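The triangle-on-the-wheel procedure described above amounts to choosing hues spaced 120 degrees apart; rotating the triangle shifts the whole harmonious group together. As an illustrative sketch only (the text does not prescribe any software), Python's standard colorsys module can generate such a triadic group:

```python
import colorsys

def triadic_scheme(base_hue_deg, saturation=0.8, value=0.9):
    """Rotate an equilateral triangle on the color wheel:
    three hues spaced 120 degrees apart."""
    rgb_colors = []
    for offset in (0, 120, 240):
        hue = ((base_hue_deg + offset) % 360) / 360.0
        r, g, b = colorsys.hsv_to_rgb(hue, saturation, value)
        rgb_colors.append((round(r * 255), round(g * 255), round(b * 255)))
    return rgb_colors

# "Rotating the triangle" = changing the base hue; the whole
# harmonious group shifts together.
print(triadic_scheme(0))    # red-based triad
print(triadic_scheme(30))   # orange-based triad
```

A square laid on the wheel works the same way with offsets of 0, 90, 180, and 270 degrees.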

Color Contrast

Different colors help separate objects in a scene through their mutual contrast. If two objects or shapes did not contrast with one another, they would appear as one object or shape. Contrast can help us perceive spatial depth. If specific colors of foreground and background are different, we will perceive their separation and hence spatial depth. Adjacent colors tend to interact. If you place a gray object against different colored backgrounds, it will appear darker or lighter depending on the color and brightness of the background. A particular hue takes on a completely different feeling depending on the hues that are adjacent to it. Complementary colors of the same intensity should not be placed next to each other, unless the intense contrast is intentional.

Maintaining brightness and contrast between different lines, shapes, and masses is extremely important when designing graphic images for television and film. A television graphic designer cannot rely exclusively on color contrast, because a television program may be received in either color or black and white. Adjacent colors should have a gray-value brightness contrast of at least 30 percent; that is, each object or shape should be 30 percent brighter or darker than the one next to it. Brightness contrast between different shapes and objects can be determined by using a gray scale (Figure 11.1). A gray scale consists of a sequential series of gray tones from white to mid-gray to black. Pure white has virtually 100 percent reflectance and video white approximately 90 percent, whereas pure black has 0 percent and video black approximately 3 percent. The midpoint on the gray scale is about 18 percent reflectance—that is, about 18 percent of the light falling on this shade of gray is actually reflected back to the eye. To maintain adequate brightness contrast, dark letters and shapes should be placed on light backgrounds, and vice versa (Figure 11.1).

FIGURE 11.1 (A) Camera test charts contain a series of different lines, circles, and wedges of different thicknesses and positions in the frame. Such images enable the technician to both adjust the camera for maximum effectiveness and check the output of the camera. (Courtesy of DSL Labs.) (B) A color test pattern will show a variety of different colors as well as a gray scale chart containing a variety of specifically designed chips ranging from TV-white to TV-black on two strips. The center chip is pure black surrounded by pure white. This chart provides a standard against which a technician may adjust a video camera for maximum quality under the lighting conditions present and a means of comparing camera color output between cameras. (Courtesy of DSL Labs.)

image
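The 30 percent gray-value rule above can be checked numerically. The sketch below approximates a color's gray value with the standard Rec. 601 luma weights (a common conversion for standard-definition television, used here as an assumption) and applies the chapter's rule of thumb:

```python
def gray_value(rgb):
    """Approximate the gray-scale brightness of a color (0-255)
    using the Rec. 601 luma weights."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def has_adequate_contrast(foreground, background, minimum=0.30):
    """Apply the chapter's rule of thumb: adjacent tones should
    differ in gray value by at least 30 percent of full scale."""
    difference = abs(gray_value(foreground) - gray_value(background))
    return difference / 255.0 >= minimum

# White titles on a mid-gray background pass the rule;
# two similar reds contrast in hue but not in gray value.
print(has_adequate_contrast((255, 255, 255), (128, 128, 128)))  # True
print(has_adequate_contrast((200, 40, 40), (170, 60, 60)))      # False
```

The second example shows why color contrast alone is not enough: the two reds look different in color but would merge on a black-and-white receiver.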

Emotional Response to Color

Most designers believe that a general distinction can be made between warm colors and cool colors in terms of their emotional effect on an audience. Colors such as reds, oranges, and yellows create a sense of warmth in a scene. A romantic scene lit by firelight and surrounded by red, orange, and yellow objects on the set uses these warm colors to enhance a romantic mood. Caution should be used with reds and yellows in video recording, because video noise can occur in these colors on repeated generations of an analog videotape. Colors such as blue and green, on the other hand, are often considered to be cool colors. They are sometimes used to enhance a sense of loneliness or aloofness in a character, or a general mood that is related to a lack of human as well as physical warmth.

Cool colors tend to recede, whereas warm colors tend to advance. For example, pure hues of greenish yellow, yellow, yellowish orange, orange, and orangish red tend to advance and call attention to themselves, whereas pure hues of violet, red violet, blue, and blue-green tend to recede. Warm colors can convey a mood of passion or action, whereas cool colors tend to reinforce a sense of passivity and tranquility. The colors of sets, costumes, and graphic images must be selected with an eye toward their visual prominence, whether they recede or advance, as well as the degree to which they contrast with other colors. Colors that are repeatedly associated with specific objects, people, and settings can take on symbolic or thematic meaning. The red dress of a character in a drama can be used to signify sensuous passion. This color might contrast with the cool green or blue colors associated with a competitor for the affections of a male character.

Cultural Response to Color

Color symbolism also varies with different cultures. For example, the color white (or lack of color) to Japanese viewers may signify mourning, but for viewers from Western cultures, white often signifies purity and hope. The same color may have different connotations depending on its use in a specific film or television program. The color yellow may mean cowardice, sinfulness, or decay, yet it also can carry the meaning of spring, youthfulness, and happiness. Both blue and green also carry various meanings depending on people’s cultural backgrounds. The use of color must be carefully considered based on the expected or targeted audience and their cultural background and traditions.

Composition

A graphic artist organizes basic design elements by using principles of composition within the limitations of the visual frame. These principles can be applied to any visual design problem, including computer graphics and the arrangement and selection of on-set, off-set, or digitally designed graphics. They are concepts employed by designers in many other fields as well (see Chapter 7, Lighting and Design).

Balance

A design is balanced when there is an equal distribution of visual weight on each side of an imaginary centerline bisecting the image. Balance or equilibrium enhances unity and order. There are at least four different types of balance: symmetrical, asymmetrical, radial, and occult. Symmetrical balance consists of a mirror image of one half of a design in the other half. Identical but reversed elements are arranged on either side of the axis line, which seems to cut the design in two. Asymmetrical balance does not have completely identical elements or mirror reflections on either side of the axis line, but the weight or size of the elements on both sides is nonetheless equivalent. Asymmetrical balance permits a higher degree of variation and viewer interest than symmetrical balance. In radial balance, two or more similar elements are placed like the spokes of a wheel about a central point. This creates a strong sense of motion or movement around this point, while preserving balance. Occult balance is a sense of equilibrium achieved through the placement of unlike elements. Balance is intuited without reliance on conventions or rules. There is usually a strong sense of movement and a dynamic quality to the design (Figure 11.2).

FIGURE 11.2 Graphic designs may be laid out in a variety of patterns. They may be symmetrical (exactly balanced on each side of the frame), asymmetrical (balanced visually on each side of the frame, but not exactly matched), radial (a pattern balanced around a central figure), or occult (without any obvious balance or symmetry).

image

Perspective

Perspective refers to the arrangement of various elements to draw attention to the most important aspect of the image, which is called the focal center. A common focal center is the main visual element, but more abstract aspects of a frame can also function as focal centers. Designers rely on a number of basic principles of perspective, such as proximity, similarity, figure/ground, equilibrium, closure, and emphasis. All of these principles are based on the common ways in which our eyes and minds attempt to organize visual images (Figure 11.3).

FIGURE 11.3 Forcing perspective in a graphic design develops a feeling of depth along the Z-axis (leading in and out of the frame—toward and away from the viewer), which does not actually exist in a two-dimensional medium. A design appears to have three dimensions by making some objects look larger while others appear smaller, or by appearing to converge toward the background.

image

Proximity

Objects placed near each other form common groupings. Conventional wisdom has it that graphic information should be grouped into common topics within the frame for greater intelligibility. It is unwise to try to pack too much information into a single graphic image. A second graphic frame is usually required when another topic is introduced or there is a great deal of information to convey about a single topic (Figure 11.4).

FIGURE 11.4 One way of graphically indicating that a group of objects belongs together is to group them close together in an obvious pattern. The cylinders appear to belong together, and the boxes do not.

image

Similarity

The perception of similarity between shapes and objects in a frame provides another means by which graphic images can be organized. Objects with similar shapes, sizes, colors, and directions of movement are united into common groups. Any deviation from this similarity, such as a runner moving in the opposite direction from the pack or a red object in the midst of green objects, draws immediate attention on the basis of its lack of similarity (Figure 11.5).

FIGURE 11.5 By grouping similar objects together in an obvious pattern, any other object, even if similar, that is not in the pattern will appear to move away from or at least not belong to the similar grouping.

image

Figure/Ground

Figure/ground refers to the relationship between backgrounds and foregrounds. Our eyes try to organize visual images into background fields and foreground objects. Some visual illusions are ambiguous, and we can alternate the foreground and background to create different shapes and objects from the same picture. Corporate logos or graphic marks that consist of letters and words, such as those of Eaton Corporation or PlayMakers Repertory Company, combine white and black letters that reverse figure and ground. The reversal in the PlayMakers logo suggests a rising curtain that is consistent with its theatrical subject matter. Symbols and signs that use figure/ground relationships can be effective means of gaining audience attention and communicating ideas (Figure 11.6).

FIGURE 11.6 A figure/ground illustration. The design conceals which part is the background and which is the foreground by alternating black and white within the type and the background. (Courtesy of PlayMakers Repertory Company.)

image

Equilibrium

Another way in which our eyes try to organize graphic images is through a principle of equilibrium. An image in equilibrium is logically balanced and ordered. Equilibrium can be based on natural scientific laws, such as gravity or magnetic attraction, as well as a balancing of object weights and sizes on either side of a centerline in a frame. This organizing principle reflects a well-ordered, logical universe. When images defy a sense of balance or accepted physical or scientific laws, they are in disequilibrium, which can arouse interest but also cause distracting confusion (Figure 11.7).

FIGURE 11.7 A triangle or other object with a broad base indicates a graphic arrangement of equilibrium, giving the arrangement a stable, firm graphic appearance. An arrangement with a smaller bottom or an object leaning without any visible support gives the audience a feeling of being off-balance or very unstable.

image

Closure

Viewers have a natural tendency to try to complete an unfinished form, a principle that is called closure. An open form is ambiguous and leaves some questions unanswered. A partially hidden form can still be identified because we expect good continuation of a form off-screen or behind another object, but this is a projection of our need for closure onto the image. A designer can frustrate or fulfill our desire for closure by completing graphic forms or leaving them partially incomplete. The former seem stable and resolved, whereas the latter seem unstable, although they sometimes stimulate creative and imaginative impressions (Figure 11.8).

FIGURE 11.8 The psychological condition of closure actually arises from an ambiguous or incomplete graphic that is designed to allow the viewer to fill in the rest of the picture through prior or common knowledge. A single house by itself will appear to be just that, a single house; but a row of houses may be depicted by only two houses, one on each side of the frame only partly visible. The viewer will fill in the rest of the houses and assume that there are more houses out-of-sight on each side of the frame.

image

Emphasis

Brightness and contrast, size and placement, and directionality are devices that help create emphasis. Generally, a bright object attracts attention more readily than a dark object. Our eyes are drawn immediately to the brightest part of a design. However, if the image is almost completely white, emphasis can be achieved by using contrasting darkness for an object. Objects in contrasting colors can create emphasis. Because warm colors advance and cool colors recede, emphasis can be created by using contrasting reds, oranges, or yellows for important objects.

The size or dimension of an object and its placement within the frame can also create emphasis. In general, large objects attract more attention than small objects. However, if most of the objects in an image are large, then a single small object is emphasized by virtue of its deviation from the norm. The placement of objects in a frame can also create emphasis. Closer objects are usually more prominent than distant objects. If several objects are grouped together, the one that is set apart acquires emphasis through variation and contrast. An isolated, individual object can be singled out from a group and thus be emphasized. If the single object outside the group is also different in size, brightness, or color from the members of the group, that emphasis is reinforced. One of the most common forms of directional emphasis is created by the use of converging parallel lines that direct the eye to a specific object. These lines enhance the illusion of perspective and depth at the same time that they add emphasis. The lines can be formed by natural objects, such as a row of trees or a road leading to a house, for example. Many different lines and shapes can direct the eye to various parts of the image, focusing attention in the desired direction.

X-Y-Z Axis

The three-dimensionality of reality must be represented in a video or film frame by a two-dimensional reproduction. To give the impression that the picture represents the 3-D world, an understanding of how the three dimensions relate to the frame is necessary. Movement or composition along a line running from left to right, or vice versa, follows the X-axis. Movement or composition running from bottom to top, or vice versa, follows the Y-axis. The Z-axis does not actually exist in a two-dimensional medium, but it can be depicted or created through the use of compositional arrangements within the frame. If objects are arranged at an angle, instead of straight across the frame, or a series of objects diminishes in size as it rises in the frame, a Z-axis is created. To avoid boring or static pictures, efforts should always be made to create a Z-axis in each sequence.
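The way diminishing size produces a perceived Z-axis follows from the simple pinhole-camera relation: on-screen size falls off inversely with distance from the camera. The numbers below (a 50 mm lens, 2-meter-tall figures) are illustrative assumptions only:

```python
def projected_height(real_height, distance, focal_length=50.0):
    """Pinhole-camera relation: projected size on the image plane
    is proportional to real size and inversely proportional to
    distance, which is what creates the perceived Z-axis."""
    return focal_length * real_height / distance

# A row of identical 2 m figures receding along the Z-axis:
# each doubling of distance halves the on-screen size.
for d in (5.0, 10.0, 20.0):
    print(projected_height(2.0, d))  # 20.0, 10.0, 5.0
```

Arranging identical objects so that they shrink in this proportion as they rise in the frame is what convinces the eye that a third dimension exists.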

Readability

The size and amount of detail in an image affects readability, which refers to the ease of deciphering and comprehending graphic images. The size of a typeface or style of lettering, for example, is an important determinant of how easy it is to read a graphic image. Type of extremely small point size is usually avoided in video production because small titles are difficult to read on a film or television screen. Point size refers to the height of letters; the higher the point size, the larger the letter. (The text you are reading is in 10-point type.) Lettering sizes smaller than 1/15 of the full picture height should be avoided in television graphics (Figure 11.9).

FIGURE 11.9 In television production, partly because of the relatively low resolution of a home television receiver and partly because some of the audience may be watching on a small-screen receiver, graphic material in type form should not be smaller than 1/15th of the scanned height of the graphic.

image
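The 1/15 rule can be turned into a quick calculation for digital rasters. The frame heights used below (480 and 1080 visible lines) are illustrative assumptions; the rule itself comes from the text:

```python
def minimum_title_height(frame_height_pixels, fraction=1 / 15):
    """Smallest acceptable lettering height, in pixels, under the
    rule that type be no smaller than 1/15 of picture height."""
    return round(frame_height_pixels * fraction)

print(minimum_title_height(480))   # 32 pixels on a 480-line SD raster
print(minimum_title_height(1080))  # 72 pixels on a 1080-line HD raster
```

Note that the rule is stated relative to picture height, so it scales automatically with the resolution of the delivery format.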

Graphic artists also avoid finely drawn lettering and serifs, which are delicate decorative lines that are often difficult to reproduce. Because of the limited size, resolution, and sharpness of television images, boldface type is recommended for titles and subtitles. Plain backgrounds give prominence to foreground titles and lettering. A highly detailed or multitoned background is distracting. Good contrast between foreground and background tones and colors is essential for legibility. When titles are keyed over live-action images, bright lettering should be used, preferably with some kind of border, drop shadow, or edge outline, which gives greater legibility and three-dimensionality (Figure 11.10).

FIGURE 11.10 No font or important graphic should be framed in front of a busy background or a background with many small elements. If such a background must be used, the graphic can be framed within a plain box by defocusing the background or increasing the lighting contrast so that the important graphic will stand out and be clearly visible.

image

Image Area

An important determinant of composition in visual graphics is the aspect ratio or frame dimensions of the recorded and displayed image. As noted earlier, frame dimensions vary in television and film. The aspect ratio, or proportion of width to height, of standard television images is 4:3, or 1.33:1. The aspect ratio specification for high-definition television (HDTV) is 16:9, or 1.78:1; projected film images vary somewhat in terms of their aspect ratios, from 1.33:1 to 2:1 (Figure 11.11).

FIGURE 11.11 Motion pictures shot in the Academy Standard 4:3 ratio will be reproduced on standard television full frame, with no surrounding black bars. But 4:3 films or standard-definition (SD) TV pictures on an HD widescreen will either show black bars on each side of the frame or have the top and bottom of the frame trimmed to fill the 16:9 aspect ratio. If a 16:9 or widescreen motion picture is broadcast on an SD monitor, the sides of the frame will be trimmed, or the picture will be reduced and black bands will appear at the top and bottom of the monitor (known as the letterbox format). An HDTV 16:9 widescreen movie will fit on an HDTV widescreen monitor without modifying the picture.

image

When HDTV images are viewed on a 4:3 standard television receiver or monitor, the viewer will either not see a portion of the image on both sides of the frame or the signal will need to be broadcast in letterbox format. Letterbox framing shows a widescreen image in its full width, with narrow bands of black across the top and bottom of the frame filling the screen areas the widescreen image does not cover. At one time letterbox was considered an unacceptable method of showing widescreen productions, but with the advent of HDTV it is not only acceptable but has become fashionable, with commercials being produced intentionally in letterbox format (Figure 11.11).
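The letterbox and pillarbox geometry described above is straightforward arithmetic: fit the source frame inside the screen while preserving its aspect ratio, then split the leftover screen area into two equal black bands. A sketch:

```python
def fit_frame(source_ratio, screen_ratio, screen_width, screen_height):
    """Fit a source frame inside a screen of a different aspect ratio,
    returning (image_w, image_h, bar) where bar is the black band on
    the top/bottom (letterbox) or on each side (pillarbox)."""
    if source_ratio > screen_ratio:
        # Wider source: use full width, bands across top and bottom.
        image_w = screen_width
        image_h = round(screen_width / source_ratio)
        bar = (screen_height - image_h) // 2
    else:
        # Narrower source: use full height, bands on each side.
        image_h = screen_height
        image_w = round(screen_height * source_ratio)
        bar = (screen_width - image_w) // 2
    return image_w, image_h, bar

# A 16:9 movie letterboxed on a 640x480 (4:3) SD monitor:
print(fit_frame(16 / 9, 4 / 3, 640, 480))     # (640, 360, 60)
# A 4:3 program pillarboxed on a 1920x1080 (16:9) HD screen:
print(fit_frame(4 / 3, 16 / 9, 1920, 1080))   # (1440, 1080, 240)
```

The pixel dimensions here are illustrative; the same proportions hold for any screen size.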

Scanning or Full-Aperture Area

The scanning area is the full field of view picked up by the camera sensor. The full-aperture area is the equivalent in film of this area. It refers to the entire field of view recorded on an individual frame of film.

If a graphic illustration or title card is shot live in the studio rather than created as a computer-generated graphic, then it must be framed in the camera so that it is slightly larger than the actual scanning or full-aperture area; this ensures that the edges of the card do not appear in the frame. The scanning area should be about 1½ inches inside the outer edge of a 14 × 11-inch or 11 × 8-inch illustration or title card.

Essential Area

The essential area of the frame is the safe recording portion of the frame. Graphic information is placed within the essential area so that it will not be cut off by TV receivers that overscan the image or by film projector apertures. (Home TVs usually reproduce less than the full camera frame because the horizontal scanning is expanded.) The essential area of the video camera should allow at least a 10 percent border within the scanning area so that there is no possibility of eliminating essential information. If a graphic image falls within the essential area, an artist can be confident that all key information will be safely recorded and projected.
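The 10 percent border rule reduces to a quick calculation. As a minimal sketch (the function name and the pixel dimensions in the example are assumptions for illustration only), the essential area can be computed from the scanning area like this:

```python
def essential_area(width, height, border=0.10):
    """Return (x, y, w, h) of the safe (essential) area, leaving the
    given border fraction inside each edge of the scanning area."""
    x = round(width * border)
    y = round(height * border)
    return x, y, width - 2 * x, height - 2 * y

# For a 1920x1080 scanning area with a 10 percent border on every edge:
print(essential_area(1920, 1080))  # (192, 108, 1536, 864)
```

Any title or key graphic element placed inside the returned rectangle should survive overscan on consumer receivers.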

GRAPHIC DESIGN

Graphic design, like scenic design, is concerned with structuring pictorial content. Graphic images should be closely tied to overall scenic design, including sets and costumes. For example, the red titles and sepia-toned still photographs (black-and-white pictures with an overall reddish-brown color tone) at the beginning of Bonnie and Clyde (1967) foreshadow the violence and bloodshed to come and establish the 1930s’ setting of the film through costuming and props in each photograph. A good graphic design organizes visual information so that it can be efficiently communicated to viewers. Graphic designs organize many different types of information, including lettering and illustrations. Titles are often the first images presented on a videotape or film, and they must set a context for what is to follow. Graphic titles and illustrations answer questions about who, what, when, where, why, and how or how much. Graphic images often convey information more directly than speech and live-action images. They can boil down complex ideas into simple concepts, which are represented by shapes, words, or numbers. Titles and illustrations can clarify the ideas inherent in more complex, live-action images and speech.

Principles of Graphic Design

The best titles often are very simple. Trying to convey too much information at one time produces ineffective or unintelligible messages. Each image should convey one general thought or idea. Everything presented within that image must contribute to a central theme. A complex array of statistics can often be boiled down to a simple graph or chart. Titles and subtitles that clarify visual information or give credit to contributors must be clear and concise. Good titles and subtitles do not crowd the image, yet title size is often used to convey their relative importance. Simple images are generally more intelligible images. They eliminate confusion and frequently have great aesthetic and emotional impact.

Types of Graphics

Graphics can be divided into two categories on the basis of their placement and use during production: on-set graphics and off-set graphics, the latter including computer-generated graphics.

Off-Set Graphics

Off-set graphics are generated somewhere other than by a live camera in the studio where the production is being shot. They may come from a title card, a computer-generated image, videotape, or a digital image fed directly into the switcher from a computer.

Computer Graphics

One of the most promising applications of computer technology to film and television production is computer graphics. The advantage of computer graphics systems, like that of the character generator, is that the images do not have to be recorded by a camera and individual frames can be digitally stored. Computer-generated images (CGIs) and graphics applications offer a wide range of fonts, font sizes, colors, and backgrounds. Movement, such as crawls, rolls, or digital transitions, is limited only by the particular model of computer. In addition, because the files are stored digitally, they can be called up and used immediately. A large number of files can be entered in advance of a production for insertion at the proper time and place (Figure 11.12).

FIGURE 11.12 A character generator (CG) is a relatively simple computer graphics machine designed to create lines of type, including numbers, and simple color graphic backgrounds, to be keyed over other video frames. (Courtesy of Scitex Digital Video.)

image

A variety of hardware and software systems are currently available for use in video and film production. Most systems allow the operator or artist to control all of the elements of graphic design, including line, shape, and color. Images can be created directly on the screen using a light pen, stylus, mouse, or keyboard. The artist can select and control various lines and shapes, as well as image size, color, and placement on the screen. It is also possible to use a stylus to trace a hand-drawn sketch or outline so that it can be manipulated by computer and stored on disk. Some computers have frame-grabbers, which allow the computer to manipulate a single video frame from a camera or VCR. Some computer software allows graphics programs to be integrated with animation programs to create apparent motion. Images can be placed in disk storage and accessed at any time. The SD or HD video signal output of a graphics computer can be fed directly to a VCR or a switcher.

Partially because of convergence, a graphic designer must diversify to learn and use many skills and techniques handled in the past by individual operators. Now a designer must be familiar with all aspects of the field, including basic artistic principles (the same in either analog or digital), typography, photography, motion pictures, video, audio, animation and visual effects (VFX), and web technology. Basic concepts of design are unaffected by digital technology. The designer still must conceive and develop ideas and understand graphic principles and solutions, as well as composition, despite the wide scope of tools for exploring graphic concepts faster, cheaper, and over a wider range of solutions.

Graphic Applications

There are two basic types of graphic applications: bitmap and vector formats. Bitmap applications include Photoshop, Painter, and Corel Photo-Paint. Bitmap images are made up of a fine grid of individual pixels. Each pixel may be a different color. The combination of pixel color intensities in red, green, and blue gives the desired color, just as the screen of a video monitor produces many different colors from the combination of different intensities of the red, green, and blue guns in the cathode ray tube (CRT). To produce a graphic in bitmap form, the application provides a series of tools to modify or edit the arrangement of pixels. Among these tools are the paintbrush, pencil, airbrush, cloning, color adjustment, masks, filters, and the ability to build layers with a different object in each layer. Bitmap graphics may be as simple as a single frame of type or as complex as the removal of wires and an unwanted building in a science fiction motion picture like the Matrix series (1999–2003).
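A bitmap really is nothing more than a grid of color triples. The toy sketch below (all names and values are hypothetical, for illustration only) shows how a paint tool ultimately just edits individual pixels in that grid:

```python
# A tiny bitmap: each pixel is an (R, G, B) triple, with each
# channel intensity ranging from 0 to 255.
width, height = 4, 2
white, red = (255, 255, 255), (255, 0, 0)
bitmap = [[white for _ in range(width)] for _ in range(height)]

# "Painting" is simply a direct edit of the pixel grid, which is
# what a paintbrush or pencil tool does under the hood.
bitmap[0][1] = red
bitmap[1][2] = red
print(bitmap[0][1])  # (255, 0, 0)
```

Every bitmap operation in this family of applications, from airbrushing to cloning, is some transformation over a grid like this one, which is why bitmap file sizes grow with pixel dimensions rather than with image complexity.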

Bitmap Formats

image

Vector applications include Illustrator and Freehand. A vector graphic is defined by lines connecting strategically placed points. Moving a line at its control points changes the mathematical formula that determines the shape of the figure. Lines may be connected to form a shape that can be filled with color, textures, or gradients, among other attributes. Each object is an individual item in the frame. The curves created by bending the lines into forms are called Bézier curves. Text can be converted to vector files and shapes, and objects may be grouped and locked together. Layers and filters are available for building special forms and images. Vector files tend to be smaller than bitmap files.
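A Bézier curve is evaluated by repeated linear interpolation between its points (De Casteljau's algorithm), which is why moving a control point reshapes the whole curve. The sketch below uses illustrative point values, not output from any drawing application:

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t (0..1); p0 and p3
    are the anchor points, p1 and p2 the control handles."""
    def lerp(a, b, t):
        # Linear interpolation between two 2-D points.
        return tuple(a[i] + t * (b[i] - a[i]) for i in range(2))
    # De Casteljau's algorithm: collapse the four points to one
    # by three rounds of linear interpolation.
    a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    d, e = lerp(a, b, t), lerp(b, c, t)
    return lerp(d, e, t)

# Midpoint of a curve from (0, 0) to (30, 0) with handles lifting it:
print(cubic_bezier((0, 0), (10, 20), (20, 20), (30, 0), 0.5))
# (15.0, 15.0)
```

Because the curve is stored as four points rather than as pixels, the shape can be scaled to any size without loss, which is the practical reason vector files tend to be smaller than bitmaps.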

Typography

The type fonts in computer graphics are based on historical terms and spacing conventions used in the print industry. The measurements are points (pt), picas, and inches: 12 points equal 1 pica; 6 picas equal 1 inch. So 12-pt type in the print world is ⅙ of an inch, and 72-pt type is 1 inch. But a computer screen is not necessarily the same size as a graphic printed from that file. Experience teaches designers the point sizes of their fonts, but the sizes are relative to everything else in the frame. Computer graphic artists use points for type size, picas for column width, and inches for resolution in dots per inch (dpi). The space between lines is called leading, and the adjustment of space between letters is called kerning.
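The point/pica/inch relationships above can be captured as simple conversions. This is only a sketch of the arithmetic; the function names are hypothetical:

```python
POINTS_PER_PICA = 12
PICAS_PER_INCH = 6
POINTS_PER_INCH = POINTS_PER_PICA * PICAS_PER_INCH  # 72 points per inch

def points_to_inches(pt):
    """Convert a type size in points to inches of printed height."""
    return pt / POINTS_PER_INCH

def picas_to_points(picas):
    """Convert a column width in picas to points."""
    return picas * POINTS_PER_PICA

print(points_to_inches(12))  # 12-pt type is 1/6 of an inch
print(points_to_inches(72))  # 72-pt type is exactly 1 inch -> 1.0
print(picas_to_points(6))    # 6 picas = 1 inch = 72 points
```

As the text notes, these conversions describe the printed page; on screen the apparent size of type depends on the display, so the numbers are relative rather than absolute.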

image

SEARCHING THE INTERNET

Browsers are applications that provide a means of searching or “surfing” the Internet. A browser finds a page by looking for that page’s unique uniform resource locator (URL). Once found, it displays the page on the screen using the hypertext markup language (HTML) instructions contained on that page. The page will appear almost instantaneously unless complex or moving graphics are included. Additional software such as QuickTime or RealPlayer may be needed to play video, or Flash for animations. The two major browsers are Safari and Internet Explorer.

Hypertext Markup Language (HTML)

HTML code is invisible to the viewer of a web page, but the code is buried within the page file, providing the instructions for the background color, font size, and positioning of objects in the frame. The code as written appears confusing but is very logical. Each line is preceded and followed by a “tag” that tells the browser how the line should be displayed. HTML may be written directly in a text editor or with a WYSIWYG (“what you see is what you get”) program like Dreamweaver or GoLive. Graphics for the web are prepared in an image editor like Photoshop or a layout program like QuarkXPress or Freehand. Flash and Shockwave programs create animation and sound files to be embedded in a web page. The speed at which a viewer can download a web page depends on the size of the files and the method the viewer uses to connect to the Internet. A 56K modem will be very slow, but any of the broadband systems will download files much more quickly.

Interactivity

Interactivity is a relationship between the computer and the operator. Everything we do on a computer is a form of interactivity, because the operator tells (or asks) the computer to respond with some kind of action visible on the screen. Web sites offer extensive interactivity, giving the operator the opportunity to explore, modify searches, and gain access to files. The web designer needs to know how to create hyperlinks that connect one page, file, or source with another. The process of creating hyperlinks varies with the application used to create the link text. In essence, the application is told to “link” and given a URL as the next item in the link. Java and JavaScript are two leading languages designed to add interactivity to web pages. Special items such as rollover buttons, image maps, games, and animated text may be programmed with a Java application. Other applications such as QuickTime, MP3, Windows Media Player, and RealAudio are used to compress sound and video files to be streamed for web distribution. Streaming allows a continuous flow of information from the web to be played as it arrives on the receiver’s computer, rather than waiting for an entire file to download.

Multimedia

Multimedia is the creation of combined audio, video, and graphic programs distributed on a permanent medium rather than over the Internet. The technology used to create multimedia is similar to that of web pages. The major difference lies between the restricted bandwidth of web pages and the much less restricted bandwidth of CD-ROM and DVD systems. Multimedia programs produced for distribution on CDs are limited to the playback capabilities of individual computers. DVD multimedia programs deliver full color, motion, and 5.1 surround sound with far less compression than web delivery allows. Multimedia programs are produced using video and audio editing applications like Final Cut Pro and Director, with animation and graphics added through After Effects and other graphic applications. The capabilities of editing programs for multimedia are constantly expanding, and at the same time the programs are becoming simpler to operate and cheaper to own.

The primary advantages of computer graphics systems are savings in time and convenience. In preproduction use, storyboards can be quickly and efficiently generated, and then modified immediately before actual production to mirror changes. Hard copies can then be printed for camera operators and other members of the production team. During production, illustrations such as charts, graphs, and drawings can be generated quickly and used immediately. Computer-generated graphics provide the background information for weather forecasts. Titles and lettering can be corrected immediately and then added to images to clarify the information they contain. All of this information can then be conveniently stored and accessed during production without using a camera. Images can also be modified efficiently and easily during production. Although sophisticated computer graphics systems are still very expensive, low-cost systems, such as those available with many home computer systems, can be inexpensively purchased and integrated into a television production facility.

On-Set Graphics

Set furnishings, props, costumes, and performer makeup are not completely independent elements in the production process. Elements of graphic design interact with each other and with many other areas of production to create an overall visual impression. The most important interactions are those between graphic design and lighting, performer movement, and other visual elements. The most commonly used types of on-set graphics are handheld cards, photographic blowups, and three-dimensional graphic set pieces. Handheld cards are images that a performer holds up to the camera during a scene. The talent controls the timing and placement of this type of graphic illustration.

Still photographs can be blown up or enlarged so that they provide a convenient background or backdrop on the set. Such photographs should have a matte rather than a shiny or glossy surface so that they do not reflect a great deal of light, and they should be positioned so that no glare or reflection is directed toward the camera lens. Three-dimensional structures placed on the set for illustration purposes are called graphic set pieces. A graphic set piece could be an item to be demonstrated, such as a piece of machinery, or an art object. Most on-set graphics can be scanned and digitized ahead of time so that the framing can be precise and the camera is not tied up with a static shot unless it is necessary for the talent to handle the graphic or be part of the action involving the graphic.

Camera cards are usually placed on an easel, which is an adjustable display platform or graphics stand. The lights on the easel, which illuminate the card, are normally placed at a 45-degree angle from the card’s surface to minimize light reflection in the camera lens. When the cards are attached to the easel by rings, they can easily be flipped, while maintaining perfect registration for the camera.

It is also possible to zoom in to different elements on a card or photograph. This adds dynamic movement to static images. Dissolving from one card illustration to another is another common technique. The camera should record a card or illustration directly head-on to avoid keystone distortion. Keystone distortion exaggerates the size of the top, bottom, left, or right side of a card when the camera positioning is slightly off dead center (Figure 11.13).

FIGURE 11.13 Keystoning is the effect created by shooting a graphic at an angle rather than straight on. The closer edge of the graphic will appear to be larger, and the farther edge will appear smaller when in reality they are the same size.

image

A section of the studio wall that is painted blue or green as the background for a weather report is commonly used as a graphic set piece in television news production. The weather board allows various weather maps and figures to be chroma-keyed behind the weather reporter. A whiteboard on which the talent can write or draw is also a graphic set piece. During elections, tally boards are entirely digital in operation.

Lettering and Titles

Graphic images can be divided into two additional categories, titles and illustrations, on the basis of the nature of the images themselves. Titles are various forms of lettering that either accompany illustrations and live-action images or are presented as written text. They are created electronically on devices called character generators or graphics generators. Illustrations are visual images, such as charts, graphs, and pictures. They can be hand-drawn, photographed, or produced with the aid of computer graphics equipment.

Lettering and titles are used to introduce the name of a film or television program and to list the credits or names of people who have contributed in some way to the production. The opening group of titles for a program is called a title or credit sequence (Figure 11.14).

FIGURE 11.14 To identify key performers, the name is typed on the character generator and then keyed over the medium close-up of the performer below his or her face but high enough in the frame to be visible to all viewers.

image

Another common use of titles is to clarify live-action images. Subtitles or name keys are titles keyed in the bottom third of the video or film frame, indicating the person or place being shown. Finally, lettering and titles can be presented as pure text, that is, without any other visual accompaniment.

Textual materials are used to convey written information in the form of electronic newspapers, advertising, or financial statements.

Credit or Titles

Credit and title sequences present an opportunity for creative, abstract expression on the part of a graphic artist. They are carefully designed to communicate the central message and feelings of a film or television program. The opening credit or title sequence offers the audience an introduction to the basic subject matter of the program and must arouse the audience’s interest, excitement, and curiosity. Titles should integrate well with the overall scenic design, and graphic design and lettering styles should be appropriate to the subject matter.

ANIMATION

Animation develops imaginative worlds by using single-frame recording techniques to make static images and objects appear to move; whether the medium is digital files, film, or video, the philosophy and basic techniques are the same. By breaking the motion of an object down into its component parts, an animator can control the movements of otherwise lifeless figures and images. Single-frame recordings of static images create apparent motion when small changes in the positioning of objects occur between successive frames. Thus, animation creates apparent changes in position. As few as 24 (in film) or 30 (in video) different images may be required for each second of the final sequence, although choosing between single-frame and double-frame animation is always a trade-off between smoothness and cost.

The animator’s job is to create the desired illusion of movement. Animation is based on an animator’s knowledge of time and motion. An animator must be able to break down motion into its component parts so that it can be artificially constructed out of static images. One of the best means of analyzing motion is to examine the individual frames of a live-action film. A live-action motion picture camera, for example, records 24 frames (25 in Europe) every second at standard speed. Each frame represents 1/24th (1/30th in video) of the change in the subject’s spatial positioning during one second. By looking at the amount of change that occurs between the successive frames of a live-action sequence, an animator can begin to determine how much change there should be in the position and movement of objects between successive animated frames.

It is not always necessary to record a different image for each film frame, however. A smooth illusion of continuous motion can often be obtained by recording two identical frames of each image or drawing position. Thus, only 12 different images will be required for each second’s duration of the final sequence, rather than 24 images.
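The saving from holding each drawing for two frames (shooting "on twos") is easy to quantify. As a sketch of that arithmetic, with hypothetical function names:

```python
FILM_FPS = 24  # standard film frame rate

def drawings_needed(seconds, frames_per_drawing=2):
    """How many distinct drawings a sequence needs when each drawing
    is held for the given number of film frames ("on twos" = 2)."""
    total_frames = seconds * FILM_FPS
    # Round up: a partial hold at the end still needs one more drawing.
    return -(-total_frames // frames_per_drawing)

print(drawings_needed(1))      # on twos: 12 drawings per second
print(drawings_needed(1, 1))   # on ones: 24 drawings per second
print(drawings_needed(10))     # a 10-second scene on twos: 120 drawings
```

Halving the number of drawings halves the artwork cost at the price of slightly less fluid motion, which is the trade-off the text describes.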

Storyboards and Animation Preproduction

An animated sequence often begins with the construction of a storyboard. A storyboard is a series of sequential sketches that depict the composition and content of each shot or key action in an animated sequence. A storyboard is similar to a newspaper comic strip. It helps a graphic artist or animator to visualize the entire sequence on paper before preparing the final images. The storyboard can be used to communicate the animator’s basic idea and strategy to a producer. It can also serve as a blueprint or guide to the actual creation and recording of images.

Many animators design their storyboards in conjunction with prerecorded music, sound effects, or voice tracks. Because timing or synchronization between sound and images is often crucial to the success of an animated sequence, music and sound are initially recorded and analyzed. All of the detail is entered into a log sheet (often called an exposure sheet or dope sheet) (Figure 11.15).

FIGURE 11.15 An animation exposure sheet, sometimes called a dope sheet, includes all of the information the camera operator needs to make the exposures and movements of the cels and to determine which cels to stack in each layer.

image

Types of Animation

Many different types of images and objects can be animated, including hand-drawn illustrations, paper cutouts, puppets, clay figures, still photographs of live action, and computer graphic images. All of these forms of animation are based on single-frame recording techniques. It is often helpful to distinguish between flat and plastic animation, as well as between film and digital animation. Flat animation is two-dimensional (2-D) and includes such techniques as cel animation, in which individual illustrations are drawn for almost every frame of a picture. Plastic animation, also called stop-action or single-frame animation, encompasses the use of three-dimensional figures, such as puppets or clay figures. Single-frame recording of people and three-dimensional objects is sometimes called pixillation. In a sense, all of these techniques or types of animation elevate the animator to the status of director, editor, and scenic designer.

Flat animation refers to the recording of two-dimensional images using single-frame recording techniques. One of the most common forms of flat animation is cel animation. Cels are individual sheets of clear acetate on which images can be drawn or painted, usually with ink and opaque watercolors. Cels are preperforated with holes at one end so they can be inserted over the pegs of a movable table, called an animation rostrum, for precise registration and framing. An animation stand consists of a rostrum, lights, and a movable camera platform (Figure 11.16).

FIGURE 11.16 Layers of individual cels allow the animator to move some parts of the character or background, but not all at once. Layers of cels also may create a third dimension and sense of depth to the frame. The top illustration shows three individual cels from the left to right: the two space characters, then the moving pattern on the monitors behind the crew, and last the space ship interior as a background. The bottom illustration shows the three cels locked together for the complete frame ready to be recorded.

image

Cel animation gives the animator or graphic artist complete control over the design of the image. However, drawing each frame individually on a cel can be quite time-consuming and expensive, so many shortcuts are used to conserve time. Because cels are transparent, they can be sandwiched together to combine images drawn on different cels. A background cel can be used over and over again while changes are made in the placement of foreground objects, eliminating the need to redraw the background for each frame. Individual movements of characters’ feet, hands, and mouths can be repeated or recycled with different bodies and backgrounds.

Another commonly used technique for cutting costs and increasing cel-animation efficiency is called rotoscoping. In rotoscoping, a sequence is first filmed in live action; the individual frames of the motion picture are then projected onto a cel, and an outline of the objects in each frame is traced by hand. Subjects are normally photographed against a contrasting background so that outlines are clearly visible. The drawn outlines are then colored like standard hand-drawn animation cels. Although rotoscoping makes the production of cels more efficient, it often produces images that are less aesthetically pleasing than hand-drawn animation. Motion capture (MoCap) takes rotoscoping one step further (see explanation under “Motion Capture” later in this chapter).

Hand-drawn illustrations are not the only flat images that can be animated. Paper or fabric cutouts and still photographs can also be set into motion. A paper cutout of a person or animal can be constructed so that it has moving body parts. It can then be placed over a variety of backgrounds so that it seems to come alive and move on the screen. A flicker effect can also be achieved by recording frames of colored paper in between frames of specific photographs or illustrations. The change in photographs can be timed to the beat of music. In this way what might otherwise be a boring presentation of static images acquires kinetic energy. Still photographs and printed illustrations, such as magazine images, can be animated through single-frame techniques, such as those used by Frank Mouris in his famous Frank Film (1973). Mouris’s film is as much a feat of optical printing, discussed later in this chapter, as of animation (Figure 11.17).

FIGURE 11.17 Frank Mouris specialized in producing films shot single-frame, often using collages of unrelated images shot in sequences as short as two to three frames for a rapid, eye-teasing format.

image

Plastic animation refers to the animation of many different types of 3-D figures and objects using single-frame recording techniques. Puppets, clay figures, miniature vehicles, and even still frames of live action can be animated.

Although hand puppets and marionettes are usually recorded in live action so that the mouth and body movements can be synchronized to speech or music, it is possible to animate more rigid puppets and clay figures by moving them slightly between frames.

Unlike the animator of flat, two-dimensional characters, however, the plastic animator must create a miniature three-dimensional world of sets and props within which puppets and figures will move. Careful attention must be paid to minute details. Backgrounds must be painted to scale, and everything must be proportional to the size of the figures. The camera is usually placed in a horizontal position with respect to the scene rather than above it, as with an animation stand. Miniature vehicles, such as cars and trucks, can also be animated through single-frame techniques. Sometimes these animated miniatures are used as a substitute for more costly and dangerous stunts and special effects in live-action films.

An animated three-dimensional figure sequence is shot much like a live-action scene, except that the pictures are recorded frame-by-frame. More than one camera is frequently used so that action does not have to be repeated for different shots, as in single-camera live-action recording.

Human figures can also be animated by a technique known as pixillation. Images of human beings can be pixillated by recording one frame, moving the image, and then recording another frame. Pixillation has been used in many films to animate images of human beings so that they seem to perform extraordinary feats. In Norman McLaren’s famous film Neighbors (1952), two neighbors fight over their adjoining territory. This clever film offers a symbolic treatment of war by presenting a unique abstract image of human behavior and actions. In one scene, the human figures hover across the ground with no apparent movement of their limbs. McLaren achieved this image by photographing single frames of his subjects leaping into the air. Only the apex of each jump was recorded, making the people seem to hover over the ground.

Computer Animation

Computer animation programs are used for video and film productions. Virtually all commercials, all newscasts, and most television/cable and film programs use some form of computer animation or computer graphics. Some packages fully integrate graphics and animation programs so that still-frame graphic images can be used to create apparent motion. Graphic images can be designed directly on a computer monitor using devices such as a light pen, a stylus on an electronic tablet, or a mouse. They can then be colored and manipulated by computer.

Live-action frames can also be grabbed or digitized by some computers for further graphic manipulation or combined with computer graphic images. Single-frame graphic images can be stored on disk. These images can be expanded in size for detailed work and then shrunk to a smaller size for actual presentation. The animator can manipulate the colors, lines, shapes, and size of the image. Motion is created by cycling different movements and using the computer to interpolate intermediate frames of motion between two static frames.

Computer animation programs allow for interpolation, another form of animation that uses paths and involves the drawing of lines through 3-D space. Using interpolation, the animator composes the first and last frames of a sequence, referred to as the key-frames, and the computer software then creates, or interpolates, the in-between frames. Computer animation allows an almost infinite number of repetitions of the same image. Image cycling is facilitated by simply drawing the first and last frames of a sequence, interpolating the rest, and then recycling the sequence wherever it is needed.
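The in-betweening the software performs can be illustrated with simple linear interpolation between two key-frame positions. Real animation packages use more sophisticated easing and spline paths; the names and values below are hypothetical, for illustration only:

```python
def interpolate(start, end, num_inbetweens):
    """Linearly interpolate the in-between (x, y) positions the
    computer generates between two key-frames."""
    frames = []
    steps = num_inbetweens + 1
    for i in range(1, steps):
        t = i / steps  # fraction of the way from start to end
        x = start[0] + t * (end[0] - start[0])
        y = start[1] + t * (end[1] - start[1])
        frames.append((round(x, 2), round(y, 2)))
    return frames

# Key-frames at (0, 0) and (100, 50) with three in-betweens:
print(interpolate((0, 0), (100, 50), 3))
# [(25.0, 12.5), (50.0, 25.0), (75.0, 37.5)]
```

The animator supplies only the two key-frames; the software fills in every intermediate frame, which is exactly what makes image cycling so economical.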

Rendering is the final step in 2-D and 3-D animation. It is often the most time-consuming and memory-intensive stage of computer animation. The end product of rendering is the creation of a graphics file that can be combined with other graphics files to collectively produce the completed animation sequence. The time and memory required for rendering is often extensive, but it can be reduced by using shortcuts for color, procedure (mathematical approximations of “natural” patterns, such as marble or clouds), and texture (a graphic drawn on an object, such as a soft drink label drawn on a can) maps applied to images during the rendering process.

The greatest advantages of computer animation are speed and accuracy. Results are immediately viewable. An animator need not wait a day or a week for the film animation to be processed and printed at a laboratory. Images and frames can be quickly designed and accurately copied. They can be stored on disk for long periods of time and used again or redesigned for another animation sequence.

A completely computer-controlled illusion of three-dimensionality can also be obtained in films that combine live-action characters with computer-generated objects and backgrounds, such as Tron (1982) and Who Framed Roger Rabbit (1988).

The live-action subject is usually recorded against a blue screen or a monochromatic background so that it can be keyed or matted into a computer-generated scene. The availability of these combined animation and special effects techniques has allowed graphic artists to save time and experiment creatively with abstract visual images for film and video (Figure 11.18).

FIGURE 11.18 The Touchstone production of Who Framed Roger Rabbit (1988) combined live-action, cel-drawn animation and computer animation sequences in a startling yet realistic manner. (Courtesy of Touchstone Pictures and Amblin Entertainment, Inc.)

image

3-D Computer Animation

The differences between 2-D and 3-D computer animation involve the complexities of creating figures with a Z-axis dimension. The standard method is to follow the storyboard stage of 2-D drawing with the design and creation of a wireframe model of the figure. The wireframe is made up of a series of polygons that approximate the three-dimensional shape of the object. The wireframe model is then smoothed and rounded to a more realistic shape by creating the “skin” or outer surface of the object. Textures, shading, and lighting are added to enhance the 3-D effect. Once the figure is complete, the digital file that represents that figure must then be rendered, just as 2-D animation figures are rendered to a final form that may be output to film or video for combination with background, other figures, and added movement; the final form may also be distributed.

Motion Capture

Motion capture (MoCap) is a logical computerized extension of film rotoscoping philosophy. Subjects are wired with sensors located at critical points on the body. The sensors either emit a signal to a remote receiver or are wired directly to a computer with a special program that maps the position of each of the body parts onto an animated character. As the subject moves arms, legs, head, or other body parts, the animated character moves the same amount and in the same direction. The movements are accurately recorded in the computer program, allowing the animation to progress in real time as the actor(s) move. The process is not only a rapid means of animating movements but also produces body movements that are accurate and realistic. The technology is still under development, and some animation purists do not totally accept MoCap as a legitimate form of animation.

Animation on the Web

Because of the limitations of the delivery systems on the web, web graphics and animation must be carefully designed so that they do not exceed the capacity of the delivery channels. Full-color, full-sized, moving images do not reproduce well on systems driven by slower modems. As systems designed to move data at a higher rate become more widely used, the quality of animation on the web will improve. Today web art borrows from all forms and modes of animation and graphics. Two- and, in limited cases, three-dimensional graphics or animation can be downloaded with patience, plenty of memory, and a speedy delivery system. In comparison to other visual media, web art will remain somewhat primitive but full of opportunities for experimenting and plowing new artistic ground.

Flash

Flash is a multimedia, web-based graphics application using a frame-based timeline that allows keyframing and tweening. Keyframing is an animation technique used to simplify and speed up the animation process. A motion sequence consists of a starting frame in a selected position and an ending frame in another, usually different, position. The two keyframes are drawn and noted in the program, a direction of movement and a change in position or size, if required, are indicated, and the program then "tweens" the rest of the frames in between the two keyframes by doing the drawing and positioning. A file is created, saved, stored, and published in a browser much like any other web file, but with few HTML codes. Flash requires minimal programming skill, some knowledge of HTML, and some knowledge of basic computer animation, but it is nearly WYSIWYG ("what you see is what you get"). The program is much faster than other animation programs and allows interactivity and buttons for the insertion of other material. The application supports bidirectional streaming interactivity through ActionScript, a scripting language that is similar to, but distinct from, JavaScript, which is commonly used on the web.

Drawing images in Flash uses vector graphics, as Illustrator does, which gives a clean image that can easily be manipulated, including reducing or enlarging forms and shapes without losing clarity or quality. Illustrator may be used with one of two levels of Flash: Lite, or CS3 Professional for more complex projects. Coloring is easily accomplished, and shapes can be changed much as they are with Illustrator controls. Sound may be added and edited within the program or created externally and imported into the Flash file as MP3 files.

Film Animation

Film animation requires the use of a camera that records single frames of motion-picture film. The camera is normally suspended above the artwork by mounting it on an animation stand. An animation stand consists of a camera platform attached to vertical poles or columns, so that it can be raised and lowered over the artwork. The camera platform is suspended above a large horizontal table, called a rostrum, which can be moved east, west, north, and south. The artwork is secured to this horizontal table by placing the hole perforations in proper registration over peg bars on the table.

A film animation camera is normally equipped with special controls for specific animation effects. A variable shutter, for example, can be used to fade out from or fade in to a specific piece of artwork. By rewinding the film to the beginning of a fade-out as indicated by the frame counter on the camera and then fading in on another piece of artwork, a dissolve can be created. Superimpositions are made by backwinding and double-exposing individual frames.

Images are electronically animated by placing a video camera on an animation stand and recording single video frames of the artwork on the table using the same techniques as used for film animation. An important difference between video and film animation is that the video camera records 30 frames per second (fps) instead of 24 fps. Single-frame video animation requires the use of a slo-mo (slow motion) recorder or a disk frame-storage unit, rather than a conventional VCR. A slo-mo recorder or video animator can be used to record individual frames for a 30-second animation sequence. The sequence can then be transferred to conventional videotape. A disk frame-storage or memory unit, such as that used to store pages of text and titles composed on a character generator, can also be used to record individual animation frames. Some memory units have a limited storage capacity, however, and others are capable of storing and immediately accessing hundreds of figures or pages.

One advantage of recording animation electronically is that the results can be viewed immediately. Film animators frequently have to wait several days or longer to see the results of their work. Video animation and film animation can be combined by recording a pencil test with a video camera using disk storage of single frames and instantly viewing the results on a monitor so that problems can immediately be uncovered and corrections made. The final cels are recorded with a film camera for optimum quality and maximum storage capacity.

SPECIAL EFFECTS

Special effects work is in many ways a highly specialized area of media production. Producing most realistic effects was usually quite laborious and expensive in the past. Today, the widespread use of complex and convincing special effects in low-budget productions has been encouraged and simplified by the availability of relatively inexpensive digital image-processing programs that are built into many video cameras, as well as much digital nonlinear editing and special effects computer software. This section provides a broad survey of both traditional and contemporary special effects that are widely used in film, video, and multimedia production.

Special effects (visual effects [VFX] and sound effects [SFX]) can be divided into five basic categories: digital effects, camera effects, optical effects, models and miniatures, and physical effects. Camera effects include such features as fast and slow motion as well as single-frame (animation) recording. Optical and digital effects run the gamut of image-processing techniques, from matting and keying (where a portion of one image is replaced by another) to morphing (transforming one object into another, which is also called metamorphosis) and compositing (placing different layers of images on top of one another). Models and miniatures, when combined with single-frame animation as well as matting and keying effects, can be used to put an object, such as a spacecraft, in motion or to create the illusion of a city in a later century by placing futuristic buildings into an existing location. Makeup can transform an actor into an android, a zombie, or a werewolf, whereas physical effects, such as fog, rain, and explosions, can contribute to the emotional mood of a sequence and generate viewer interest and excitement.

Digital Effects

Digital image processing has greatly reduced the amount of generational loss in image quality that has traditionally accompanied the creation of conventional film and video special effects. Digital effects can be divided into five areas: transitions; filters; superimpositions, keys, and mattes; compositing; and morphing. Transitions are means of replacing one digital clip, which is usually a single image or shot, with another. Filters are means of altering a clip. Superimpositions, keys, and mattes are combinations of more than one clip that appear simultaneously within the same frame. Compositing involves combining different layers of visual information that can each be separately edited and animated using a digital nonlinear editing and an animation program, respectively. Morphing or morphogenesis refers to various techniques of transforming one shape or figure into another.

The types of transitions that can be created between one visual clip and another are virtually limitless. Most digital nonlinear editing and special effects programs offer a wide variety of transition devices as well as the capability of modifying the devices and creating custom transitions. Two of the most commonly used transitions are various dissolves and wipes.

During a wipe, one clip is entirely replaced by another clip, beginning in a specific area or several areas of the frame and gradually spreading throughout the frame. One clip can rise like a curtain, while the next is revealed behind it, or one clip can appear to push another off the screen. A wide variety of patterns, from pinwheels to clock hands, can be used to wipe from one clip to the next, and the movements of these wipe patterns or shapes throughout the frame can usually be adjusted.

A variety of digital filters allow a clip to be distorted, blurred, sharpened, smoothed, textured, and tinted or colored. Filters can also be used to pan, zoom, reverse motion, slow down, speed up, and flip a clip. The ability to alter the brightness, contrast, and color balance of individual clips allows an operator to function as a timer by smoothing out and eliminating subtle differences in color, brightness, and contrast between successive clips or shots or to function as a special effects artist by radically altering the original image and generating unusual and interesting effects. For example, all colors except the color red can be removed from a clip so that the entire frame is black and white except those portions of the image that are red in color. A clip can be blurred to simulate the point of view of a character whose vision has been altered. Various mosaic grids of squares or other shapes can be used to create interesting patterns, and an image can be posterized by limiting the color spectrum to just a few colors, or it can be solarized by blending negative and positive images to create a halo effect. By resizing clips, unwanted areas within the frame can often be eliminated, and zooms and pans can be created on stationary or moving images within a clip.
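The red-isolation effect described above can be sketched as a simple per-pixel rule. The dominance test and the luma weights used here are illustrative choices, not a standard drawn from any particular editing program.

```python
def isolate_red(pixels, dominance=1.5):
    """Keep red pixels; convert everything else to grayscale.

    pixels: list of (r, g, b) tuples with channels from 0 to 255.
    A pixel counts as 'red' when its red channel clearly dominates.
    """
    out = []
    for r, g, b in pixels:
        if r > dominance * max(g, b, 1):
            out.append((r, g, b))  # preserve the red object in color
        else:
            # Standard-definition luma weights give a natural-looking gray
            gray = round(0.299 * r + 0.587 * g + 0.114 * b)
            out.append((gray, gray, gray))
    return out

frame = [(200, 30, 40), (60, 120, 180)]  # a red pixel and a blue-gray pixel
filtered = isolate_red(frame)
```

The red pixel passes through unchanged while the rest of the frame collapses to black and white, which is the essence of the effect.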

Compositing refers to combining different layers of visual images. One layer might be a model of a rocket ship moving against a neutral background as though it is taking off. Another might consist of an actual launch site recorded at Cape Canaveral. A third layer might consist of digitally animated images of ice particles falling through space. Each of these layers can be separated, edited, and combined into a composite image using a special effects program. This type of special effect was used to create composite images that simulated an Apollo spacecraft taking off in Apollo 13 (1995).
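Layered compositing ultimately reduces to repeatedly combining two images with an "over" operation. Below is a minimal sketch using single (r, g, b, a) pixels with straight (non-premultiplied) alpha; the layer values are hypothetical stand-ins for the rocket example.

```python
from functools import reduce

def over(top, bottom):
    """Composite one (r, g, b, a) layer over another ('over' operator)."""
    tr, tg, tb, ta = top
    br, bg, bb, ba = bottom
    a = ta + ba * (1 - ta)  # combined coverage
    if a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    r = (tr * ta + br * ba * (1 - ta)) / a
    g = (tg * ta + bg * ba * (1 - ta)) / a
    b = (tb * ta + bb * ba * (1 - ta)) / a
    return (r, g, b, a)

# Stack three layers, topmost first: ice particles over rocket over plate
layers = [
    (1.0, 1.0, 1.0, 0.2),  # faint ice particles
    (0.8, 0.1, 0.1, 1.0),  # opaque rocket layer
    (0.2, 0.3, 0.5, 1.0),  # background launch-site plate
]
composite = reduce(over, layers)
```

Because each layer remains a separate input until the final reduction, any one of them can be edited or animated independently before the composite is produced, which is precisely the appeal of the technique.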

Using these same techniques, motion clips, such as moving or talking lips, can be inserted in place of stationary images, such as the stationary lips of a character in the background scene, in order to animate a live-action image. Similar types of digital effects have been used in Hollywood feature films, such as Forrest Gump (1994), to allow prerecorded documentary images to be combined with studio recordings and to have historical figures appear to interact with and talk to fictional characters.

Morphing or morphogenesis refers to various techniques of transforming one shape or figure into another. Morphing can be accomplished in film animation by drawing individual cels that gradually change from one shape or form into another. Digital image processing and animation programs have facilitated the transformation process by allowing the computer to generate the gradual transformation from one figure, such as an automobile, into another, such as a tiger. Morphing is an effective technique for creating transitions from one image to another or for altering shapes, forms, and figures and creating imaginative worlds through the use of special digital effects.
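If the two figures are described by corresponding control points, the computer's in-between shapes can be sketched as simple interpolation between those points. (Real morphing programs also warp and cross-dissolve the images themselves; the outlines here are hypothetical.)

```python
def morph_points(src, dst, t):
    """Morph between two shapes defined by corresponding control points.

    src, dst: lists of (x, y) points in one-to-one correspondence.
    t: 0.0 gives the source shape, 1.0 gives the target shape.
    """
    return [(sx + (dx - sx) * t, sy + (dy - sy) * t)
            for (sx, sy), (dx, dy) in zip(src, dst)]

car = [(0, 0), (4, 0), (4, 2), (0, 2)]      # crude source outline
tiger = [(1, -1), (5, 0), (3, 3), (-1, 2)]  # crude target outline
halfway = morph_points(car, tiger, 0.5)     # shape midway through the morph
```

Stepping t from 0.0 to 1.0 over successive frames produces the gradual transformation from one figure into the other.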

Camera Effects

A significant number and range of special effects can be created within a film camera during initial recording. Some film cameras, for example, allow the frames-per-second speed to be altered from the normal sound speed of 24 fps. Recording rates in excess of 24 fps, such as 32, 48, and 64 fps, create slow motion when the processed film is projected at the standard projection speed of 24 fps. Recording rates less than 24 fps, such as 18 and 12 fps, create fast motion. Attaching an intervalometer to a film camera allows the frames per second to be significantly reduced to one frame every 1, 2, or 20 seconds, or even every few minutes or hours, to create time-lapse recordings, such as images of clouds rolling overhead or flower petals opening and closing throughout the day. Single-frame control of a live-action film camera allows for pixillation effects, which are described earlier in this chapter. Most digital cameras can duplicate these same effects in-camera as well.
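The relationship between recording rate and perceived speed reduces to a simple ratio, sketched here as a hypothetical helper function rather than a feature of any camera's firmware:

```python
def playback_speed(shooting_fps, projection_fps=24):
    """Apparent on-screen speed of motion relative to real time.

    Shooting faster than the projection rate yields slow motion
    (a factor below 1.0); shooting slower yields fast motion.
    """
    return projection_fps / shooting_fps

playback_speed(48)  # overcranked: motion appears at half speed
playback_speed(12)  # undercranked: motion appears at double speed
```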

In-camera matte effects are created by blocking off a portion of the frame during the first exposure, rewinding the film, and then exposing the previously blocked portion of the frame. Matte boxes can have half of the frame filled with an opaque black filter, which is then reversed to cover the opposite half of the frame to create a split-screen image. First one side of the frame is exposed, and then the film is rewound and the opposite side of the film is exposed. In-camera mattes, which are finely cut out of metal, can also be inserted behind the lens closer to the focal plane. An actor can then play two different roles within the same camera frame. This is done by first filming the actor on one side of the screen, rewinding the film, and then filming the same actor on the opposite side of the screen. Painted skylines and other scenic additions can be made using the same in-camera matte process by dividing the frame horizontally rather than vertically. Filters can also be placed over a lens, such as a gauze or haze filter, to create a softer image. Cinematographers often carry a variety of transparent materials with them on location that can be used to diffuse the image during recording.

Video cameras can also provide built-in special effects controls. For example, fade-ins and fade-outs can often be created automatically at the beginning or end of a shot by depressing a fader control. Some cameras allow the speed of motion to be varied to create slow motion, fast motion, and time-lapse recordings. Other in-camera special effects include various forms of digital image processing, such as image patterning, blurring, solarization, and other visual image manipulations and distortions. Again, many of these in-camera effects can also be created during postproduction, such as digital nonlinear editing, although some experimental videographers prefer to create these effects during initial recording.

Optical Effects

One of the advantages of creating special effects during postproduction is that they can often be more carefully controlled at this stage than during the production stage. Mistakes made during production are often costly if a scene must be reconstructed and actors reengaged. Postproduction special effects are often “added on” to the initial recordings and rarely require the initial scene to be reshot.

A variety of optical film effects are still widely used today, including step printing, traveling mattes, and aerial-image printing. An optical printer is needed to create many special effects on film. A basic optical printer consists of a camera and a projector. The two machines face each other, and the lens of the camera is focused on the image from the projector. The camera and projector can be moved toward or away from each other to change the size of the image. An optical blowup can be created by using a larger-format camera and a smaller-format projector. Using a smaller-format camera and a larger-format projector creates an optical reduction. The camera and the projector must be precisely positioned so that the full frame of picture in the projector fills the full frame of picture in the camera.

Optical flips can be achieved by simply rotating elements within special optical printer lenses. Freeze frames are made by exposing many frames in the camera while holding the same frame in the projector. Stretch printing slows down or retards the perceived action by printing each frame more than once. Skip printing is often used to speed up a slow-moving sequence by recording every other frame of the original film.
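Skip printing and stretch printing are easy to express as operations on a frame sequence; the sketch below uses frame numbers in place of actual images and is purely illustrative.

```python
def skip_print(frames, skip=2):
    """Speed up the action by printing every nth frame of the original."""
    return frames[::skip]

def stretch_print(frames, repeat=2):
    """Slow down the action by printing each frame more than once."""
    return [frame for frame in frames for _ in range(repeat)]

original = [1, 2, 3, 4, 5, 6]
fast = skip_print(original)        # half as many frames: action speeds up
slow = stretch_print(original)     # twice as many frames: action slows down
```

A freeze frame is simply the limiting case of stretch printing: one frame repeated for the full duration of the shot.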

Wipes, split screens, and optical combinations of animation and live action involve the creation of special traveling mattes. Mattes consist of special high-contrast, black-and-white images that are made from normal film images or from artwork. For example, suppose a color title must be inserted into a background scene. The two images cannot simply be superimposed on one another, because the colors will bleed together rather than producing solid lettering. A black-and-white, high-contrast copy of the titles can be made, so that the black letters will block out the portion of the background image where the colored letters are to be inserted.

The optical printer must have three projectors to do this: one for the background scene, one for the matte (unless the matte and the background scene are bi-packed, or run physically in contact with each other in one projector), and one for the color titles. The combination of the three images is then recorded by the camera.

Wipes and split screens can be made from similar traveling mattes, which block out a portion of the screen into which a second image is then inserted. It is possible to combine live action and animation by using traveling mattes in this manner. One sequence can also be recorded against a blue or black screen so that another sequence can be inserted into the blue screen area. Many special effects in science fiction and horror films are achieved by using a blue screen process. Spaceships are often recorded as they move in front of a blue screen. This blue screen portion of the frame is then used to create a matte that blocks out the area of the frame where the spaceship should appear in a highly detailed background scene with stars in outer space. The spaceship is then inserted as a foreground object into this area.

Aerial image photography combines optical printing and animation by using a film projector with an animation stand. Live-action images can be projected from beneath predrawn cels, so that color titles or animated figures can be combined with live action. The opaque portions of the cel block out the background scene, which is projected underneath it so that the titles are superimposed over the background scene. The film camera suspended overhead records the combined image. Aerial image photography eliminates the need for special intermediate mattes, such as those that are used during film printing to block out or blacken areas of the frame into which titles and other images are to be inserted. However, aerial imaging requires bright projection illumination.

The choice between doing special effects on film, videotape, or through a digital medium is often a difficult one to make unless it has already been decided to use film or videotape for an entire production. The obvious advantage of digital video is the savings in overall production time. Effects can be set up and viewed immediately, without waiting for laboratory processing. Electronic effects facilities normally have sophisticated computerized editing and switching equipment, so that several images can be run simultaneously. Keys and mattes can be created instantaneously. Careful preplanning must go into the creation of a special electronic effect before entering the studio.

Optical film effects are time-consuming to produce, but a very high degree of control and precision can be achieved through multiple passes of the same artwork with film. It is also possible to make sophisticated special effects in films with very low cost equipment. A basic optical printer, consisting of a simple projector and camera on adjustable platforms, can be purchased for a modest sum, allowing freeze frames, step printing, superimpositions, dissolves, and many other optical effects to be created.

Models and Miniatures

Full-scale models and reduced-scale miniatures are used whenever a three-dimensional object or setting is needed that either does not exist or when it would be too expensive or dangerous to use an actual object. A miniature may be required for a historical setting that no longer exists, or a full-scale model may be needed for a spaceship that hasn’t yet been constructed.

In some situations, such as when the camera must move within the shot, which requires a three-dimensional set to maintain realistic perspective, a painted two-dimensional background cannot be used to create the illusion of a specific location. In this case, a three-dimensional miniature of that location can be constructed to allow for camera moves. Usually when actions occur with a miniature, the camera records the action in slow motion to adjust for the difference in time-scale relations between full-scale and miniature environments. Motion must be reduced proportionately to the reduced scale of the miniature. When complicated miniatures and movements are required, it is often prudent to shoot test recordings at a variety of speeds before recording final takes or disposing of the miniature (Figure 11.19).
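A commonly cited rule of thumb for adjusting the time scale is to overcrank the camera by the square root of the scale ratio, sketched here as a hypothetical calculation (actual productions still shoot tests at several speeds, as noted above):

```python
import math

def miniature_camera_fps(scale_ratio, base_fps=24):
    """Rule-of-thumb camera speed for shooting a miniature.

    scale_ratio: full size divided by miniature size
    (e.g. 16 for a 1/16-scale model). Overcranking by the square
    root of the scale ratio slows falling debris and water enough
    to look believable at the miniature's reduced scale.
    """
    return base_fps * math.sqrt(scale_ratio)

miniature_camera_fps(16)  # a 1/16-scale miniature is shot at 96 fps
```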

FIGURE 11.19 One of the methods of converting a solid object to a digital form is by tracing that form using a 3-D digitizer. The operator traces over all key surfaces of the object with a stylus until a full 3-D image has been transferred to a digital file. (Courtesy of Immersion Corporation.)

image

Effective use of miniatures and models requires careful preplanning, because these types of special effects are often extremely expensive. Coordination and planning similar to those demanded of an art director, director, producer, and cinematographer during full-scale, live-action production are required. Drawings are usually prepared and approved before miniatures and models are actually constructed.

Miniatures are difficult to record not only because of potential problems in perspective, scale, and speed of motion, but also because audience disbelief is often difficult to overcome. Larger-scale, highly detailed miniatures are usually required for longer shot durations where the audience will have an opportunity to carefully scrutinize them. When smaller, less-detailed miniatures are used, the editor often must keep the shot duration very short to reduce audience scrutiny and to maintain a willing suspension of disbelief.

Miniatures can take advantage of single-frame recording and matting or keying techniques to create apparent motion from stationary objects. A miniature spaceship or airplane, for example, can be recorded against a blue screen background while it is moved slightly along a suspended, invisible wire between each frame. Later a composite of the background scene and the moving aircraft can be made using photographic or electronic matting and keying techniques. Another advantage of miniatures and models is that they can be used to create inexpensive physical effects, such as various explosions, which would otherwise be too expensive to accomplish using actual objects and locations.

Physical Effects

Physical effects include wind, fog, smoke, rain, snow, fires, explosions, and gunshots. They require the guiding hand of a highly trained professional, especially when their use can endanger the safety of the cast and crew. Wind is usually generated by very large fans or aircraft engines and propellers whose speed and direction can be carefully controlled. Fog is often produced by combining smoke, such as from slow-burning naphtha or bitumen mixtures, and dry ice, which produces carbon dioxide. Most smoke-producing devices use either oil or water-based smoke fluid, which is heated above the boiling point to produce a gas that looks like smoke. Because all smoke is toxic to some degree, it should only be used in well-ventilated areas.

Rain, like wind and fog, is often used to accentuate a mood and atmosphere. Ground-level rain stands and overhead rain heads can be used to produce rain in limited areas, and the surrounding areas can be wetted down before shooting to sustain an illusion of a general rain shower. When rain effects are produced in a sound stage, it is extremely important to waterproof the floor, to have a means of drainage or water collection, and to avoid any contact between water and electrical equipment, such as lights, which could cause severe injury or even kill cast and crew members (Figure 11.20).

FIGURE 11.20 Most physical effects are the result of special effects crews using machines and chemicals to produce atmospheres and environments critical to the production, such as the Paramount crew on this Western set preparing to create wind. (Courtesy of Paramount Pictures.)

image

Snow can be created indoors or outdoors using a large, almost silent, wooden-bladed fan called a Ritter fan, which can also be used for rain effects. Plastic snow can be dropped in front of the fan, usually from the top or the side but never from behind (which would foul the blades and mechanism), by hand or with a snow delivery machine. Polystyrene granules can be added to create a blizzard effect. Outdoors, a variety of materials in addition to plastic flakes can be and have been used to create snow, including shaved ice, foam machines, gypsum, salt (on windows and windowsills), and aerosol shaving cream on slippery surfaces.

Although fire effects can add excitement and visual interest to a scene, they are also extremely dangerous and should only be created by skilled professionals who know how to contain them. It is extremely important to have fire extinguishers on hand that can control all three classes of fire: Class A fires, which burn solid combustibles such as cloth, rubber, wood, paper, and many plastics; Class B fires, which burn flammable liquids and gases; and Class C fires, which involve electrical or electronic equipment.

Explosions and pyrotechnics are the most difficult special effects to perform safely. They need to be set up and supervised by experts who are thoroughly familiar with the setting, detonation, and control of explosions and fires, because they are potentially dangerous to cast and crew members. The U.S. Bureau of Alcohol, Tobacco, and Firearms controls explosives and pyrotechnics. A federal license is needed to use explosives, and information concerning their use can be obtained from the bureau upon request. Most explosions and pyrotechnics require remote detonation.

Bullet hits are created by remote detonation of small explosive devices, called squibs, which are positioned over body armor or a hit plate that protects the actor or stuntman. Blood packs, consisting of plastic bags containing corn syrup and red food coloring to simulate blood, and squibs are glued onto the back of the actor's or stuntman's shirt. Wires running down the pant legs are attached to the squibs at one end and to the firing box for remote detonation at the other end. Sometimes the wires have breakaway connectors at the ankles so that the actor or stuntman can break free of the wires just after the bullet hits have been detonated.

SUMMARY

Graphic design can be approached from realist, modernist, and postmodernist perspectives. Realist sets and design formats depict an actual or general type of place or experience. However, a realist setting can provide an atmosphere that reflects the subjective state of mind or perceptions of a specific character. Modernist designs are relatively abstract and often reflect an abstract conception of space, a subjective feeling, or a state of mind. Postmodernist designs combine a variety of design styles and patterns and emphasize emotional responses and an intentional distortion of realistic visuals.

Graphic design involves three basic design principles: design elements, color, and composition. Design elements include lines, shapes, textures, and movement. Color and contrast are interrelated aspects of design, as are color and shape. Contrasting colors can be used to separate foregrounds and backgrounds and to create various shapes, and they can be used to define specific characters, settings, and themes.

Graphic artists design images that convey information. They use basic principles of design, such as simplicity, proximity, similarity, figure/ground, correspondence, equilibrium, and closure, to stimulate viewer interest. Graphic artists select lettering that is highly legible but also expressive. Titles and illustrations are designed and selected on the basis of their appropriateness for specific topics.

Animation and special effects generate visual interest and can be used to create imaginative worlds. Animation develops imaginative worlds by using single-frame recording techniques to make static images and objects appear to move. By breaking the motion of an object down into its component parts, an animator can control the movements of otherwise lifeless figures and images. Single-frame recordings of static images can create apparent motion when small changes in the positioning of objects occur between successive frames. Flat animation is accomplished with two-dimensional drawings and illustrations. One of the most common flat animation techniques is cel animation, in which an individual clear acetate cel is used for each frame. Plastic animation refers to the single-frame recording of three-dimensional figures and objects. Puppets, clay figures, miniature objects and vehicles, and even still frames of live action (a technique known as pixillation) can be animated. Three-dimensional figures are recorded using techniques that combine animation and live-action recording.

Computer animation generates images that can be recorded and stored as single frames on disk. Although 2-D computer animation continues to serve certain functions and even full-length features, 3-D computer animation has slowly become the creative leader in feature films and in television shorts. The greatest advantages of computer animation are speed and accuracy. Images can be immediately viewed as well as accurately recorded and rerecorded. Some computers interpolate the in-between frames if the animator simply composes the first and last frames of a sequence. A computer can also be used to interpolate the changes in two-dimensional or three-dimensional objects. Three-dimensional computer animation can be combined with live-action photography, opening up a whole new world of illusion and abstract art to film and television audiences.

Special effects allow futuristic or historical worlds to come to life and dangerous actions and events to be simulated. Special effects can be divided into five basic categories: camera effects, optical effects, digital effects, models and miniatures, and physical effects. Camera effects include such features as fast and slow motion as well as single-frame (animation) recording. Film recording rates in excess of 24 fps, such as 32, 48, and 64 fps, create slow motion when the processed film is projected at the standard projection speed of 24 fps, and film recording rates less than 24 fps, such as 18 and 12 fps, create fast motion. In addition to varying the speed of the images, some film cameras allow fade-outs, fade-ins, superimpositions, and reverse motion to be created during initial recording. Video cameras can also provide built-in special effects controls, some of which produce effects that are similar to in-camera film effects, such as fades and slow motion.
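The relationship between recording rate and apparent on-screen speed described above reduces to a simple ratio. A small sketch of that arithmetic, assuming the standard 24 fps projection speed (the helper name is illustrative):

```python
# Slow- and fast-motion arithmetic: recording faster than the playback
# rate stretches the action on screen (slow motion); recording slower
# compresses it (fast motion).

def motion_speed(recording_fps, playback_fps=24):
    """On-screen motion speed relative to real time:
    < 1.0 means slow motion, > 1.0 means fast motion."""
    return playback_fps / recording_fps

for fps in (12, 18, 24, 32, 48, 64):
    factor = motion_speed(fps)
    if factor < 1:
        label = "slow motion"
    elif factor > 1:
        label = "fast motion"
    else:
        label = "normal speed"
    print(f"recorded at {fps} fps, projected at 24 fps -> {factor:.2f}x ({label})")
```

For example, film exposed at 48 fps and projected at 24 fps plays at half speed, whereas film exposed at 12 fps plays at double speed.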

Optical film effects include step printing, traveling mattes, and aerial image printing. A basic optical printer consists of a camera and a projector. The two machines face each other, and the lens of the camera is focused on the image from the projector. Step printing is often used to speed up a slow-moving sequence by recording every other frame of the original film. Aerial image photography combines optical printing and animation, using a film projector with an animation stand. Live-action images are projected from beneath predrawn cels, so that color titles or animated figures can be combined with live action.
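Step printing's every-other-frame copying can be modeled with a simple sequence slice. In this sketch the film frames are represented by hypothetical labels:

```python
# Step printing on an optical printer: copying only every other frame of
# the original speeds up the action, because the same movement now spans
# half as many projected frames.

def step_print(frames, step=2):
    """Copy every `step`-th frame of a sequence (step=2 doubles the speed)."""
    return frames[::step]

original = [f"frame_{i}" for i in range(8)]
printed = step_print(original)
print(printed)  # ['frame_0', 'frame_2', 'frame_4', 'frame_6']
```

The same idea run in reverse (printing each frame twice) slows the action down, which is why step printing is a flexible speed-change tool.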

Physical effects include wind, fog, smoke, rain, snow, fires, explosions, and gunshots. They require the guiding hand of a highly trained professional because their use can endanger the safety of the cast and crew. Physical effects, like other kinds of special effects, can significantly contribute to the emotional mood of a sequence and generate viewer interest and excitement.

EXERCISES

1.  Design a credit or title sequence for a specific production project. Determine how you can best use abstract graphic images and titles to introduce a production, or select live-action images on which titles can be keyed. Select a letter style or font that is consistent with the overall theme, message, and style of your project, and that creates an impression that reinforces the central theme of a drama or the central message of an informational program. It can reflect warmth or coldness, tension or relaxation, simply by virtue of the colors, lines, and shapes it presents. Your project will eventually be shown on a video screen, so be sure to use type sizes that are large enough for titles to be clearly legible.

2.  Arrange six items of different sizes and shapes in a pattern within a single frame. Develop the maximum Z-depth effect, and follow the rules of composition. Record the arrangement from several different angles to see what creates the greatest depth and at the same time shows the objects to the best advantage.

3.  Construct a storyboard for an animation project. Create frames for each shot that will appear in the completed sequence. Either draw each frame by hand, or use a computer graphics program to compose each one. Make sure that all camera and figure movements are relatively simple to reproduce using an animation stand or a computer animation program. Determine how many individual frames or changes of figure position will be required: a series of single film or video frames, in which recorded objects or materials gradually change their spatial position within the frame, is recorded individually and sequentially, and when played back at normal speed (24 fps in film or 30 fps in video) it produces apparent motion.

4.  Animate cut-out paper figures by placing them on an animation stand and moving them slightly between recordings of individual film or video frames. Vary the speed of movement and evaluate the results.

5.  Shoot a series of in-camera special effects, such as slow motion, fast motion, reverse motion, pixillation, fade-outs, fade-ins, and split screens.

6.  Digitize a short video sequence, divide it into separate clips, and then image process each clip using a variety of transitions, filters, and superimpositions, keys, or mattes using a digital nonlinear editing or special effects program.
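The frame-count arithmetic that exercise 3 calls for can be sketched as follows; the 10-second duration and the shooting-on-twos choice are hypothetical examples:

```python
# Estimating how many individual frames an animated sequence requires.
# Animators often shoot "on twos," exposing each drawing or figure
# position for two consecutive frames to halve the number of drawings.

def frames_needed(seconds, fps=24, exposures_per_drawing=1):
    """Return (total frames to record, distinct drawings or positions
    needed) when each drawing is held for several exposures."""
    total_frames = int(seconds * fps)
    drawings = total_frames // exposures_per_drawing
    return total_frames, drawings

# A 10-second film sequence shot on twos:
print(frames_needed(10, fps=24, exposures_per_drawing=2))  # (240, 120)
```

The same calculation works for video by substituting 30 fps, which is why storyboard planning should begin with the target playback rate.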

Additional Readings

Arntson, Amy E., 2002. Graphic Design Basics, fourth ed. Wadsworth, Belmont, CA.

Bacher, Hans, 2008. Dream Worlds: Production Design for Animation, Focal Press, Boston.

Beauchamp, Robin, 2005. Designing Sound for Animation, Focal Press, Boston.

Beiman, Nancy, 2007. Prepare to Board: Creating Story and Characters for Animation Features and Shorts, Focal Press, Boston.

Birren, Faber, 2000. The Symbolism of Color, Citadel Press, Secaucus, NJ.

Bordwell, David, Thompson, Kristin, 2001. Film Art: An Introduction, sixth ed. McGraw-Hill, New York.

Corsaro, Sandro, 2002. The Flash Animator, New Riders, Indianapolis, IN.

Cotte, Olivier, 2007. Secrets of Oscar-Winning Animation: Behind the Scenes of 13 Classic Short Animations, Focal Press, Boston.

Fernandez, Ibis, 2002. Macromedia Flash Animation and Cartooning: A Creative Guide, McGraw-Hill, New York.

Fullerton, Tracy, 2008. Game Design Workshop: A Playcentric Approach to Creating Innovative Games, second ed. Focal Press, Boston.

Furniss, Maureen, 2008. The Animation Bible: A Practical Guide to the Art of Animating, from Flipbooks to Flash, Harry N. Abrams, New York.

Gahan, Andrew, 2008. 3ds Max Modeling for Games: Insider’s Guide to Game Character, Vehicle, and Environmental Modeling, Focal Press, Boston.

Gauthier, Jean-Marc, 2005. Building Interactive Worlds in 3D: Virtual Sets and Pre-visualization for Games, Film and the Web, Focal Press, Boston.

Gordon, Bob, Gordon, Maggie, 2002. The Complete Guide to Digital Graphic Design, Watson-Guptill, New York.

Graham, Lisa, 2001. Basics of Design: Layout and Typography for Beginners, Delmar, Albany, NY.

Griffin, Hedley, 2001. The Animator’s Guide to 2-D Animation, Focal Press, Boston.

Hart, John, 2008. The Art of the Storyboard: A Filmmaker's Introduction, second ed. Focal Press, Boston.

Hoffer, Thomas W., 1981. Animation: A Reference Guide, Greenwood Press, Westport, CT.

Horton, Steve, Yang, Jeung Mo, 2008. Professional Manga: Digital Storytelling with Manga Studio EX, Focal Press, Boston.

Kennel, Glenn, 2007. Color Mastering for Digital Cinema, Focal Press, Boston.

Kerlow, Isaac V., 2004. The Art of 3-D Computer Animation and Effects, John Wiley & Sons, Hoboken, NJ.

Kitagawa, Midori, Windsor, Brian, 2008. MoCap for Artists: Workflow and Techniques for Motion Capture, Focal Press, Boston.

Krasner, Jon, 2008. Motion Graphic Design: Applied History and Aesthetics, second ed. Focal Press, Boston.

Kuperberg, Marcia, 2002. A Guide to Computer Animation for TV, Games, Multimedia, and the Web, Focal Press, Boston.

Kuppers, Harald, 1990. Basic Law of Color Theory, second ed. Barrons, Hauppauge, NY.

Landa, Robin, 2000. Graphic Design Solutions, second ed. Onward Press, Albany, NY.

Lester, Paul Martin, 2000. Visual Communication: Images with Messages, Wadsworth, Belmont, CA.

Mack, Steve, Rayburn, Dan, 2005. Hands-On Guide to Webcasting: Internet Event and AV Production, Focal Press, Boston.

Mattesi, Mike, 2008. Force: Character Design from Life Drawing, third ed. Focal Press, Boston.

McCarthy, Robert E., 1992. Secrets of Hollywood Special Effects, Focal Press, Woburn, MA.

Meyer, Chris, Meyer, Trish, 2008. Creating Motion Graphics with After Effects: Essential and Advanced Techniques, Focal Press, Boston.

Michael, Alex, 2006. Animating in Flash 8: Creative Animation Techniques, Focal Press, Boston.

Miller, Dan, 1990. Cinema Secrets, Special Effects, Apple Press, London.

Mitchell, Mitch, 2004. Visual Effects for Film and Television, Focal Press, Boston.

NFGMan, 2006. Character Design for Mobile Devices, Focal Press, Boston.

Olson, Robert, 1998. Art Director: Film and Video, second ed. Focal Press, Boston.

Patnode, Jason, 2008. Character Modeling with Maya and ZBrush: Professional Modeling Techniques, Focal Press, Boston.

Pender, Ken, 1998. Digital Colour in Graphic Design, Focal Press, Boston.

Purves, Barry J.C., 2007. Stop Motion: Passion, Process and Performance, Focal Press, Boston.

Richter, Stefan, Ozer, Jan, 2008. Hands-on Guide to Flash Video: Web Video and Flash Media Server, Focal Press, Boston.

Rickitt, Richard, 2006. Designing Movie Creatures and Characters: Behind the Scenes with the Movie Masters, Focal Press, Boston.

Roberts, Steve, 2007. Character Animation: 2D Skills for Better 3D, Focal Press, Boston.

Sawicki, Mark, 2007. Filming the Fantastic: A Guide to Visual Effects Cinematography, Focal Press, Boston.

Shaw, Susannah, 2008. Stop Motion: Craft Skills for Model Animation, second ed. Focal Press, Boston.

Simons, Mark, 2000. Storyboards: Motion in Art, second ed. Focal Press, Boston.

Subotnick, Steven, 2003. Animation for the Home Digital Studio: Creation to Distribution, Focal Press, Boston.

Sullivan, Karen, et al., 2007. Ideas for Animated Shorts with DVD: Finding and Building Stories, Focal Press, Boston.

Whitaker, Harold, 2002. Timing for Animation, Focal Press, Boston.

White, Tony, 2006. Animation from Pencils to Pixels: Classical Techniques for the Digital Animator, Focal Press, Boston.

Wilkie, Bernard, 1996. Creating Special Effects for TV and Video, third ed. Focal Press, Boston.

Williams, Robin, 2003. The Non-Designer’s Design Book, second ed. Peachpit Press, Berkeley, CA.

Winder, Catherine, Dowlatabadi, Zara, 2001. Producing Animation, Focal Press, Boston.

Wright, Jean, 2005. Animation Writing and Development: From Script to Pitch, Focal Press, Boston.

Wright, Steve, 2008. Compositing Visual Effects: Essentials for the Aspiring Artist, Focal Press, Boston.
