CHAPTER

5   Directing: Aesthetic Principles and Production Coordination

•  What are directing aesthetics approaches?

•  How do shots vary?

•  What does composition mean to a director?

•  How are shots combined into sequences and scenes?

•  How do single- and multiple-camera directing differ?

Introduction

Video and film directors are artists who can take a completed script and imaginatively transform it into exciting sounds and images. Directors creatively organize many facets of production to produce works of art. They know how and when to use different types of camera shots and have mastered the use of composition, image qualities, transition devices, and relations of time and space. Directors know when and how to use different types of sound and how to control sound and image interaction. They understand how to work with people, especially actors and various creative staff and crew members. Above all, they know how to tell good stories. By using all of their creative powers, directors are able to produce films and video programs that have lasting value.

A director may concentrate on a sound production—such as a radio drama, commercial, or documentary—either as director of a soundtrack for a visual production or as director of a music recording. In those cases, the responsibilities are similar to those of a visual director. Of course, the director's energy is aimed at melding the voice, music, and sound effects into a coherent whole that matches the meaning of the script. Directing a sound production involves fewer people, less equipment, and, even with digital operations, far less complexity than a visual production. In some cases, the term "producer" is substituted for "director" in sound productions, as many of the responsibilities of both roles are combined under one person.

As with sound productions, directors of multimedia, interactive, and animation productions often accept multilevel responsibilities in addition to their primary functions as director. The multichannel nature of these productions requires a different mental set, one that demands concentration on more than one aspect of simultaneous action and movement. Multiple directors often share the multichannel responsibilities in such productions, as is often the case in feature-length animated films and complex games.

Directors prepare a shooting script by indicating specific types of images and sounds to be recorded within each scene. Armed with a final shooting script, a director is ready to organize production. To record the different scenes and sequences described in the script, the director must organize the activities of many different people who are involved in production. The role of the director is quite different in multiple-camera versus single-camera productions. The director must be able to communicate precisely and quickly with cast and crew. Each person must be able to understand and follow a specific communication system and language as it is directed at him or her. This chapter covers each of these facets of the director’s responsibility in the production process. The director usually selects and organizes images and sounds according to one of the three basic aesthetic approaches introduced in Chapter 2, The Production Process: Analog and Digital Technologies.

AESTHETIC APPROACHES

A convenient way to organize aesthetics, or approaches to the creative process, is to use three very general categories: realism, modernism, and postmodernism. Most artistic approaches reflect one or more of these three aesthetic tendencies, which differ in their emphases on function, form, and content. Function refers to why something is expressed: its goal or purpose. Form can be thought of as how something is expressed in a work of art. Content refers to what is expressed. Function, form, and content are closely connected aspects of any creative work.

Aesthetics

•  Function—Why

•  Form—How

•  Content—What

•  Realism—Content over form

•  Modernism—Form over function

•  Postmodernism—Audience’s involvement over artist’s form and content

Realism

Realism stresses content more than form. In realist works, artists use forms and techniques that do not call attention to themselves, or a so-called transparent style. Realist artists depict a world of common experience as naturally as possible. Smooth, continuous camera movements and actions, continuity of time and place, and the use of actual locations and real people (i.e., nonactors) help to sustain a sense of reality. Realist art relies on conventions that some artists and viewers believe will preserve an illusion of reality. Although realist techniques and conventions change, as in the shift from black-and-white to color images for added realism in photography, film, and television during the 1950s and 1960s, the mimetic tradition of art and literature imitating reality and the intent to preserve an illusion of reality in Western art has persisted over time. A realist artist is a selector and organizer of common experience, rather than a self-conscious manipulator of abstract forms, principles, and ideas.

Many prime-time network television dramas, such as CSI and Law and Order, and nonfiction programs, such as 60 Minutes, 48 Hours, and 20/20, select and organize common experience as naturally as possible. Continuity of space and time is evident even in the titles of some of these programs, such as 48 Hours. Forms and techniques rarely call attention to themselves. Instead a transparent but dramatic style helps to depict worlds of common experience and sustain an illusion of reality.

Modernism

Modernism stresses the idea that form is more important than function. Creators of avant-garde works of video and film art explore their medium beyond the usual restrictions and limitations of a realist approach without considering the illusion of reality. A modernist director’s works show less objectivity, tend to explore feelings of ambiguity, and may lack continuity in space and time. Many music video productions and some science fiction programs may be classified as modernist.

Some European feature film directors, such as Ingmar Bergman in Sweden and Luis Bunuel in Spain and France, have used modernist aesthetics to guide their approaches to filmmaking. Bergman's film Persona (1966), for example, offers a collage of images that reflect the psychological states of mind of an actress who refuses to speak and a nurse who is trying to take care of her both inside and outside a Swedish mental hospital. Their personalities and faces seem to merge during the course of the film. The editing of this film and the world that Bergman depicts often conform more closely to internal mental states than they do to an external illusion of physical reality. Space and time are often discontinuous. Luis Bunuel's early surrealist avant-garde film Un Chien Andalou (An Andalusian Dog, 1929), which he co-directed with the surrealist painter Salvador Dali, and his later narrative feature films, such as The Discreet Charm of the Bourgeoisie (1972), Tristana (1970), and Viridiana (1961), often defy logic and rational thought as well as continuity in space and time. The surrealist world that Bunuel depicts allows the irrational thoughts and unconscious feelings and desires of his characters to be freely exposed at the same time that it makes a satirical comment on social conventions and institutional religious practices. Ingmar Bergman and Luis Bunuel are strongly personal, modern artists who sometimes stress style more than content and explore feelings of ambiguity and interior states of mind in their films rather than present an external illusion of reality.

Postmodernism

Postmodernism stresses viewer participation within open-ended works with vaguely defined characteristics. A scattered blending or pastiche of new and old images, genres, and production techniques may intentionally confuse the audience, yet at the same time attempt to emotionally and sometimes interactively involve viewers or listeners in the creation of texts rather than treat them as passive consumers of entertainment. Film and video directing and production in the postmodernist mode continue to evolve, and their precise definitions remain somewhat elusive.

An example of a postmodernist work is Peter Gabriel’s CD-ROM titled Explora 1 Peter Gabriel’s Secret World (1993), which was developed and directed by Peter Gabriel, Steve Nelson, Michael Coulson, Nichola Bruce, and Mic Large. This interactive CD-ROM contains Peter Gabriel’s music and music videos as well as minidocumentaries about the artist and the production of his works, including information about performing artists with whom Gabriel has collaborated as well as other visual artists. Viewers and listeners control the sequence and duration in which the entertainment and information contained on this CD-ROM are presented in the “INTERACT” mode, whereas the “WATCH” mode takes them on a guided journey through the disk. Viewers and listeners have to correctly put together an image of Peter Gabriel to gain entry and to select different worlds to explore and different areas with which to interact. In one section, viewers and listeners can even add or subtract different musicians and control the sound levels of the audio mix for a selection from Peter Gabriel’s music during an interactive recording session. The ways in which this CD-ROM combines animation, live action, and documentary recordings, information, and entertainment and the “INTERACT” and “WATCH” modes of presentation clearly offer a postmodernist approach to directing that directly involves the viewer and listener in the creative process.

Realism, modernism, and postmodernism are not mutually exclusive, nor do they exhaust all aesthetic possibilities, but they offer a convenient means of organizing the field of aesthetics from the standpoint of production. The relation of expressive forms and techniques to program content and purposes often reflects these three general tendencies. They are applicable to all the aspects of production that will be covered in the following sections, including visualization, lighting, and set design, as well as postproduction editing.

VISUALIZATION

The director decides what types of pictures should be used to tell the story specified in a script by considering the choices available. The visualization process includes an analysis of the types of shots possible, composing those shots, and deciding how to combine the shots visually and with the proper sounds into a comprehensive whole. The same visualization process applies in interactive, animation, and multimedia productions as in linear film or TV production.

A director’s ability to select and control visual images begins with an understanding of specific types of shots. The camera can be close to or far away from the subject. It can remain stationary or move during a shot. The shots commonly used in video and film production can be described in terms of camera-to-subject distance, camera angle, camera (or lens) movement, and shot duration.

Types of Shots

Long Shot (LS)

The long shot orients the audience to subjects, objects, and settings by viewing them from a distance; this term is sometimes used synonymously with the terms establishing shot, wide shot, or full shot. An establishing shot (ES) generally locates the camera at a sufficient distance to establish the setting. Place and time are clearly depicted. A full shot (FS) provides a full frame (head-to-toe) view of a human subject or subjects (Figure 5.1). Extreme long shots are less effective in productions for micro displays, such as cell phones and iPods, because fine detail will be lost on the small, low-resolution screen.

FIGURE 5.1 A long shot (LS) may refer to the framing of a human figure from head to foot, or the longest shot in the sequence.

image

Medium Shot (MS)

A medium shot provides approximately a three-quarter (knee-to-head) view of the subject. The extremes in terms of camera-to-subject distance within this type of shot are sometimes referred to as a medium long shot (MLS) and a medium close-up (MCU) (Figure 5.2). The terms two-shot and three-shot define medium shots in which two or three subjects, respectively, appear in the same frame.

FIGURE 5.2 A medium shot (MS) may refer to the framing of a human from the head to just below the knees, or a shot framing two persons, sometimes called a two-shot.

image

Close Shot (CS) or Close-Up (CU)

The terms close shot and close-up are often used synonymously. A close-up refers to the isolation of elements in the shot and normally indicates the head and shoulders of a person. When someone is making an important or revealing statement or facial gesture, a close-up will draw the audience’s attention to that event. Close-ups focus and direct attention and create dramatic emphasis. When they are overused, however, their dramatic impact is severely reduced. A very close camera position is sometimes called an extreme close-up (ECU). See Figures 5.1, 5.2, and 5.3 for illustrations of long, medium, and close-up shots, respectively. There are times when the standard nomenclature of framing is not appropriate. If the widest shot in a program is from a blimp and the tightest shot is of one football player, then the blimp shot would be the LS and the player shot would be a CU. Conversely, if the entire commercial is shot in close-ups, the tightest shot would be an ECU, and the widest shot would be an LS, even if it were only a shot of a hand holding a product.

FIGURE 5.3 A close-up (CU) may refer to the framing of a person from the top of the head to just below the neckline, or if framing an object, filling the frame with the object.

image

Camera Angle

The camera angle is frequently used to establish a specific viewpoint, such as to involve the audience in sharing a particular character’s perspective on the action. The goal may be to enhance identification with that person’s psychological or philosophical point of view (Figures 5.6, 5.7, 5.8).

FIGURE 5.4 A point-of-view (POV) shot refers to framing as if the observer were viewing from inside the camera, or as if the camera lens represented what a character would see from his or her position in the set.

image

FIGURE 5.5 An over-the-shoulder (OS) refers to a two-shot from behind one of the subjects who is facing the other subject. Generally the framing is a medium shot.

image

FIGURE 5.6 A low-angle shot is the view from a camera positioned well below the eye level of the subject looking up at the subject.

image

FIGURE 5.7 A high-angle shot is the view from a camera positioned well above the eye level of the subject looking down on the subject.

image

FIGURE 5.8 Among many other means of mounting either film or video cameras is the gyroscope mount, which can be mounted on an airplane, helicopter, or any moving vehicle. The stabilizing system provides a solid, smooth picture, and the operator inside the vehicle can pan, tilt, and zoom the camera as the shot requires.

image

Point-of-View Shot (POV Shot)

A point-of-view shot places the camera in the approximate spatial positioning of a specific character. It is often preceded by a shot of a character looking in a particular direction, which establishes the character’s spatial point of view within the setting, followed by a shot of that same character’s reaction to what he or she has seen. The latter shot is sometimes called a reaction shot. A closely related shot is the over-the-shoulder shot (OS). The camera is positioned so that the shoulder of one subject appears in the foreground and the face or body of another is in the background. Another variation on the point-of-view shot is the subjective shot, which shows us what the person is looking at or thinking about. Like point-of-view shots, subjective shots offer a nonobjective viewpoint on actions and events and can enhance audience identification with more subjective points of view (Figures 5.4 and 5.5).

Reverse-Angle Shot

A reverse-angle shot places the camera in exactly the opposite direction from the previous shot. The camera is moved through a 180-degree arc from the position of the shot immediately preceding it.

Stationary versus Mobile Camera Shots

An objectively recorded scene in a drama establishes a point of view that conforms to the audience’s main focus of interest in the unfolding events. This objective placement of cameras can still be quite varied. A director can use a continuously moving camera gliding through the scene to follow the key action. This approach establishes a point of view that is quite different from recording a scene from several stationary camera positions. Both approaches can be objective in the sense that neither attempts to present a specific person’s point of view, although a moving camera creates a greater feeling of participation and involvement as the audience moves through the setting with the camera.

A moving camera adds new information to the frame and often alters spatial perspective. A moving camera shot can maintain viewer interest for a longer period of time than a stationary camera shot. But a moving camera shot can also create difficulties. It is often difficult to cut from a moving camera shot to a stationary camera shot. The camera should be held still for a moment at the beginning and end of a moving camera shot so that it can easily be intercut with other shots. One moving camera shot can follow another so long as the direction and speed of movement remain the same. Both moving the camera and cutting from one stationary camera shot to another can give us a spatial impression of the setting from a variety of perspectives, but the former generates feelings of smoothness and relaxation, and the latter creates an impression of roughness and tension, which can be used effectively to stimulate a feeling of disorientation, as in the film Natural Born Killers (1994). Many types of mobile camera shots can be recorded with the camera remaining in a relatively fixed position. Depending on the type of digital camera and compression used in the camera, artifacts may be created by rapid movement of the camera.

Pan Shot

A camera can be panned by simply pivoting it from side to side on a fixed tripod or panning device. This shot is often used to follow action without having to move the camera from its fixed floor position. A pan is always a horizontal movement.

Tilt Shot

A camera tilt is accomplished by moving the camera up and down on a swivel or tilting device. This shot is also used to follow action, such as a person standing up or sitting down. It can also be used to follow and accentuate the apparent height of a building, object, or person. A tilt is always a vertical movement.

Pedestal Shot

A camera can be physically moved up and down on a pedestal dolly. A hydraulic lift moves the camera vertically up and down within the shot, such as when a performer gets up from a chair or sits down. A pedestal shot allows the camera to remain consistently at the same height as the performer, unlike a tilt shot, where the camera height usually remains unchanged. Pedestal shots are rare, but a pedestal is often used to adjust the height of the camera between shots (Figure 5.9).

FIGURE 5.9 A studio pedestal is designed to allow the heavy weight of a studio camera, lens, and prompter to be moved about the studio with relative ease. The direction of the wheels under the skirt of the pedestal can be changed to allow the camera to be guided in any direction. The vertical pedestal column is counterweighted or controlled by compressed air to allow the operator to raise or lower the camera easily and smoothly. The pan head is mounted on the top of the pedestal column. (Courtesy of KCTS 9, Seattle.)

image

Zoom Shot

A zoom can be effected by changing the focal length of a variable focal-length lens in midshot. A zoom shot differs from a dolly shot in that a dolly shot alters spatial perspective by actually changing the spatial positioning of objects within the frame. During a zoom shot, the apparent distance between objects appears to change because objects are enlarged or contracted in size at different rates. During a zoom-in, objects appear to get closer together, and during a zoom-out, they seem to get farther apart. Other types of mobile camera shots require camera supports that can be physically moved about the studio.

Dolly Shot

A dolly shot is a shot in which the camera moves toward or away from the subject while secured to a movable platform on wheels. It is often needed to follow long or complicated movements of performers or to bring us gradually closer to or farther away from a person or object.

Trucking Shot

In a trucking shot, the camera is moved laterally (from side to side) on a wheeled dolly. The camera may truck with a moving subject to keep it in frame. If the dolly moves in a semicircular direction, the shot is sometimes referred to as an arc or camera arc.

Tracking Shot

A tracking shot uses tracks laid over rough surfaces to provide a means of making smooth camera moves in otherwise impossible locations.

Crane or Boom Shot

The camera can be secured to a crane or boom so that it can be raised and lowered or moved from side to side on a pivoting arm. This type of shot can create a dramatic effect when it places the subject in the context of a large interior space or a broad exterior vista.

COMPOSITION

Composition is a term used by painters, graphic artists, and still photographers to define the way in which images can be effectively structured within a frame. Frame dimensions or the aspect ratio of the specific media format affect the composition. Two basic principles of composition that will be discussed in Chapter 11 are symmetry and closure. Composition is complicated by the fact that video and film images move in time. Therefore, composition is constantly changing.

Aspect Ratio

A frame limits the outer borders of the image to specific dimensions. The ratio of these dimensions—that is, the ratio of a frame’s width to its height—is called the aspect ratio of the frame. Composition is obviously slightly different for different aspect ratios. If you were to put identical paintings in frames with different dimensions and aspect ratios, for example, the paintings would look very different: the relations between the shapes and objects or the composition within the frames would not be the same. American analog video, standard digital video (SD), Super-8 mm, 16 mm, and standard 35 mm film all have the same aspect ratio: 4:3 or 1.33:1. But feature films in Super-16 mm, 35 mm, and 65 mm, which are made for wide-screen projection in theaters, have aspect ratios that vary from 1.85:1 to 2.35:1.
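The decimal ratios quoted here follow directly from dividing frame width by frame height. A minimal sketch (Python is used only for illustration; the format labels are assumptions, not from the text):

```python
# Convert width:height pairs to the decimal aspect ratios cited in the text.
def aspect_ratio(width: float, height: float) -> float:
    """Return the width-to-height ratio as a decimal, e.g. 1.33 for 4:3."""
    return round(width / height, 2)

# Illustrative formats (labels are assumptions for this sketch)
formats = {
    "SD video / 16 mm film": (4, 3),
    "HDTV": (16, 9),
    "Anamorphic wide-screen film": (2.35, 1),
}
for name, (w, h) in formats.items():
    print(f"{name}: {aspect_ratio(w, h)}:1")  # e.g. HDTV: 1.78:1
```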

High-definition TV (HDTV) is set at 16 × 9, or 1.78:1, which closely approximates the 1.85:1 wide-screen feature film format or aspect ratio. Wide-screen images can enhance an illusion of reality by involving more of our peripheral or edge vision, but they also alter the aesthetics of object placement and composition within the frame. Consider the different impressions created by a wide gulf between two characters on a wide-screen frame and the greater proximity of two characters in a video frame. It is difficult to copy or transfer visuals from one aspect ratio to another intact, as in copying magazine photographs with a video camera or showing a wide-screen film on television (Figure 5.10).

FIGURE 5.10 The aspect ratio is the ratio of the width (X-axis) of a frame to the height (Y-axis). The NTSC television and traditional film ratio is 4 units × 3 units. The HDTV aspect ratio is 16 units × 9 units, and wide-screen films range from 16 × 9 to 2 × 1. The critical or essential area is considered to be the portion within the frame outlined by a 10 percent border of the full scanned area. This is true of both 4 × 3 and 16 × 9 aspect ratios.

image

Essential Area

An important factor in terms of frame dimensions is the concept of essential area. The full video or film camera frame is rarely, if ever, viewed in its entirety. Part of the border or edge of the full frame is cut off during transmission and conversion in the home receiver. Essential or critical area refers to the portion of the full frame that will actually be viewed. All key information, actions, and movements must be safely kept within this essential area (Figure 5.10).
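If the essential area is defined by a 10 percent border of the full scanned area, the safe region can be computed directly from the frame dimensions. A minimal sketch, with a 1920 × 1080 HD frame chosen here purely for illustration:

```python
def essential_area(width: int, height: int, border: float = 0.10):
    """Return (x, y, w, h) of the essential area left after trimming a
    border of the given fraction from every edge of the full frame."""
    x = round(width * border)
    y = round(height * border)
    return x, y, width - 2 * x, height - 2 * y

# A 1920 x 1080 frame keeps a 1536 x 864 essential area
print(essential_area(1920, 1080))  # (192, 108, 1536, 864)
```

Any key action or graphic placed outside the returned rectangle risks being cut off on the home receiver.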

Rule of Thirds

One well-practiced theory of composition involves dividing the frame into thirds, both horizontally and vertically. If you mentally draw two vertical and two horizontal lines that divide the frame into thirds, objects can then be arranged along the lines. Important objects may be placed at the points where these lines intersect for added interest or emphasis. Following the rule of thirds allows a picture to be quickly comprehended in an aesthetically pleasing way. Placing subjects in this manner is more interesting than simply bisecting the frame. Other slightly more complicated forms of visual composition can also be used with success, but they are not always comprehended so quickly and easily. The framing composition changes radically when the rule of thirds is applied to a 16:9 aspect ratio. The vertical area remains the same, but much more space needs to be filled in the horizontal areas. Those areas open the possibility of using many more images in a single frame than a 4:3 ratio allows (Figure 5.11).
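The grid can also be expressed numerically: for any frame size, the four intersection points fall at one-third and two-thirds of the width and height. A sketch, again using a 1920 × 1080 frame only as an example:

```python
def thirds_points(width: int, height: int):
    """Return the four rule-of-thirds intersection points as (x, y) pairs."""
    xs = (width // 3, 2 * width // 3)    # the two vertical grid lines
    ys = (height // 3, 2 * height // 3)  # the two horizontal grid lines
    return [(x, y) for y in ys for x in xs]

print(thirds_points(1920, 1080))
# [(640, 360), (1280, 360), (640, 720), (1280, 720)]
```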

FIGURE 5.11 The rule of thirds divides the frame into nine areas by drawing two vertical lines one-third of the way in from each side and two horizontal lines one-third of the way from the bottom and the top of the frame.

image

Symmetry

Symmetry is an important aesthetic principle of composition in any two-dimensional, framed visual medium. A director can create a symmetrical or balanced spatial pattern by arranging objects in the frame. A symmetrical frame appears stable and solid, but it can eventually become uninteresting and boring. An asymmetrical or unbalanced frame is more volatile and interesting but can also be extremely distracting. When properly used, both symmetrically and asymmetrically organized frames can be pleasing and effective.

The key is to know when it is appropriate to use one form of composition rather than the other. Framing the head of one person talking directly into the camera in an asymmetrical pattern can be distracting. The audience’s attention is supposed to focus on the spokesperson, but it is distracted by the lack of balance in the frame. An asymmetrical image of one or more people in the frame can suggest that someone or something is missing. The entrance of another person or character then balances the frame (Figures 5.12 and 5.13).

FIGURE 5.12 Symmetrical balance in composing the objects within a frame shows exactly the same items on each side of a line drawn down the middle of the frame.

image

FIGURE 5.13 Asymmetrical framing can vary from an equal weight of objects on each side of the frame to totally unbalanced weight, as well as asymmetrical groupings of objects.

image

An asymmetrical frame can suggest that something is wrong or that the world is out of balance. The concept of symmetry must be integrated with the rule of thirds and other concepts, such as lookspace, walkspace, and headroom. Lookspace refers to the additional space remaining in the frame in the direction of a performer’s look or glance at something or someone outside the frame. Walkspace is the additional space in the frame remaining in front of a moving performer. When following the rule of thirds, the performer’s face (in the case of a look or glance) or the performer’s body (in the case of a walk or run) is placed on one of the trisecting vertical lines, leaving two thirds of the remaining space in the direction of the glance or movement. This asymmetrical composition is much better than having the performer in the exact center of the frame (Figure 5.15). Headroom refers to the space remaining in the frame above the subject’s head, which is most pleasing visually when there is a slight gap between the top of the head and the top of the frame.

FIGURE 5.14 Logical cutoff points when framing subjects should not fall at the joints of the body. To allow for closure and the assumption that the body continues beyond the cutoff point, camera operators must frame the subject between the joints, that is, between the ankle and knee, or between the waist and chest.

image

FIGURE 5.15 Any object, either moving in the frame or facing in an obvious direction, needs room to “move” and “look” within the frame. A common framing practice is to place such subjects on either of the lines splitting the frame into nine sections.

image

Closure

The concept of lookspace is related to another aspect of visual composition called closure. On-screen space—that is, space within the frame—often suggests continuity with off-screen space. An open frame suggests that on-screen space and objects continue into off-screen space. A completely closed frame, on the other hand, gives the illusion of being self-contained and complete in itself.

The way in which an image is framed and objects are arranged can create a sense of closure or a sense of openness. Symmetrically framing a performer’s head in the center of the frame creates a sense of closure. The composition does not allude to parts of the body that are missing off-screen. Framing body parts between normal joints of an arm, leg, or waist, on the other hand, suggests continuity in off-screen space. Something appears to be missing, although our memories readily fill in the missing parts (Figure 5.14).

Depth and Perspective

Screen composition can enhance an illusion of depth and three-dimensionality. Lighting can add depth to the image by helping to separate foreground objects from their backgrounds. Placing the camera at an angle so that two sides of an object are visible at the same time creates three-dimensionality. Including foreground objects in a frame can enhance the illusion of depth by setting a yardstick by which the distance, size, and scale of the background can be determined. A person, tree branch, or object of known scale in the foreground can set a context for depth. Diagonal or parallel lines, such as those of a railroad track, can guide the eye to important objects in the frame and create a greater illusion of depth. Placing objects or people at several different planes of action within the frame, or creating frames within frames, such as a person standing inside a doorway, increases the perception of depth within the frame.

Of course, a certain degree of care must be exercised when using multiple planes of action so that two planes do not unintentionally connect to create one confused plane, as when a plant in the background appears to be growing out of a person’s head. Image perspective refers to the apparent depth of the image and the spatial positioning of objects in different planes. The type of lens that is used can affect perspective. Telephoto or long focal-length lenses often seem to reduce apparent depth, whereas wide-angle or short focal-length lenses seem to expand space and apparent depth. Lenses help an image look deep or shallow. A moving camera, as in a dolly shot, can also affect the apparent depth and perspective by changing the relationship between objects in the frame. Cutting from one camera angle to another can help create an illusion of three-dimensionality out of two-dimensional video and film images.

Frame Movement

A moving frame changes visual composition. In video and film, composition is constantly in flux because of camera or subject movement. In this respect, film and video are quite unlike photography and painting, which present motionless images. One type of composition can quickly change to its opposite. A symmetrical frame can quickly become asymmetrical, or an open frame can appear closed. The illusion of depth can be enhanced by the movement of a camera or of objects within the frame. Objects that move toward or away from the camera naturally create a greater sense of depth than those that move laterally with respect to the camera. Diagonal lines of movement, like diagonal lines within a static frame, add dynamism and force to the composition. A canted frame is created by tilting the camera to the left or right. This adds a sense of dynamic strength to an image, such as an exciting shot within a car chase, but a canted frame used in less intense action sequences often looks out of place.

Image Qualities

A director must be conscious of subtle differences in image tonality, especially when editing or combining images. Image tonality refers to the overall appearance of the image in terms of contrast (gradations of brightness from white to black) and color. Lighting and recording materials can affect image contrast. Combining two shots that have very different contrast levels can be disconcerting to the viewer, but it can also arouse attention. A high-contrast scene—that is, one that has a limited range of gray tones with mostly dark blacks and bright whites—will look quite different from a low-contrast scene, which has a wide range of intermediate tones. Matching image tonalities in terms of contrast and color can help effect smooth transitions from shot to shot and scene to scene. Combining mismatched tones can have a shock or attention-getting value. Excessive contrast is a common problem in video production, especially field production, where outdoor lighting is difficult to control. High contrast is sometimes more of a problem in video than in film, because of the narrower range of contrasting shades or tonalities that video can record, but it is an important consideration in both media.

Scale and Shape

Scale refers to the apparent size of objects within the frame. Camera-to-subject distance, camera angle, and the type of lens used can affect the apparent size of objects. Lower camera positions and angles sometimes increase the apparent size of an object in the frame. The apparent size of an object can increase or decrease its importance.

Directors can create a balanced and symmetrical frame by arranging objects of equivalent size or similar shape in different parts of the same frame. Graphic similarities, such as similarities in the shape or color of objects, can create smooth transitions between shots. Graphic differences can be used to create an asymmetrical frame or to emphasize transitions from one shot to another.

Speed of Motion

Images can have different speeds of motion. Speed of motion refers to the speed at which objects appear to move within the frame. This speed can be changed by altering the film recording speed or the video playback speed to produce fast motion or slow motion. Editing many short-duration shots together can enhance the speed of motion, whereas using fewer shots of longer duration can help slow down actions and the speed of motion. The pace of editing is called editing tempo. Camera placement, lenses, and the actual motion of the photographed objects also affect the apparent motion of objects. A long focal-length lens often slows down apparent motion by squashing space, whereas a wide-angle lens can speed up motion by expanding the apparent distance traveled in a given period of time.
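The relationship between recording speed and playback speed reduces to simple arithmetic. The following Python sketch is illustrative only (the function name and sample frame rates are assumptions, not from the text): apparent speed is the ratio of playback rate to recording rate.

```python
# Retiming arithmetic: apparent speed = playback fps / recording fps.
# Illustrative sketch; function name and sample frame rates are assumptions.

def apparent_speed(recording_fps: float, playback_fps: float) -> float:
    """Factor by which on-screen motion appears sped up (>1) or slowed down (<1)."""
    return playback_fps / recording_fps

# Overcranking: record at 48 fps, play back at 24 fps -> half-speed slow motion.
print(apparent_speed(48, 24))  # 0.5
# Undercranking: record at 12 fps, play back at 24 fps -> double-speed fast motion.
print(apparent_speed(12, 24))  # 2.0
```

In film the camera's recording speed is typically altered; in video the same ratio is usually achieved by changing playback speed, but the arithmetic is identical.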

COMBINING SHOTS

One of the director’s key jobs, which is shared by the editor during postproduction, is to determine the precise duration of each shot. An exposition section may call for a number of long takes that slow down the action and allow the audience to contemplate character, situation, and setting. A dramatic climax, on the other hand, may call for many different short-duration shots, which help intensify the action. The famous three-minute shower-scene murder in Alfred Hitchcock’s Psycho (1960), for example, is made up of more than 100 separate pieces of film cut together to intensify the action. Modernist film aestheticians, such as Sergei Eisenstein, have sometimes advocated the use of many short-duration shots, whereas realist aestheticians, such as André Bazin, have often recommended the use of longer-duration shots.

A good director is usually a good editor; that is, directors know how and when to combine specific images. Editing begins with an understanding of composition, image qualities, and different types of shots. Shots can be combined using a variety of transition devices, including straight cuts, fades, dissolves, wipes, and digital transitions.

Straight Cut or Take

A straight cut or take is a direct, instantaneous change from one camera shot to another, say from a long shot of a scene to a close-up of a performer’s face. Time is assumed to be continuous over a straight cut, except in the case of jump cuts, where actions are discontinuous and do not match from one shot to the next, suggesting a gap in time. If a cut is made from a shot of a person talking to someone on one side of a room to a second shot showing the same person talking to someone else on the opposite side, the result is a jump cut. Jump cuts are widely used in commercials, in which stories are condensed to 30 seconds by using rapid editing tempos, and in documentary and news interviews, where this procedure is sometimes considered more honest than using cutaways to mask deletions. It is becoming more and more common to use jump cuts to compress time in fiction as well.

Fade

The picture of a video program or film can fade in from blackness to image or fade out from image to blackness. A fade-out followed by a fade-in usually indicates a significant passage of time. A fade, like a curtain on the stage, can be used to mark the beginning and the end of a performance and to separate acts or scenes (Figure 5.16).

FIGURE 5.16 A fade to black shows the image slowly darkening until it disappears; a fade from black starts with a totally blank frame, and then the first image slowly appears. Frame A shows an image on both halves in full view. Frame B shows the left side of the frame beginning to fade to black. Frame C shows the left side of the frame nearly faded all the way to black, and Frame D shows the left side of the frame completely faded to black. The right side of the frame has remained at full level in each frame.

image

Dissolve

A dissolve, also known as a lap dissolve, is actually a simultaneous fade-out and fade-in. One scene or shot fades out at the same time that another shot fades in to replace it. For a very short duration, the two shots are superimposed on one another. Dissolves are frequently used to conceal or smooth over gaps in time rather than emphasizing them as a fade-out and fade-in does. A very rapid dissolve is sometimes called a soft cut.
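The "simultaneous fade-out and fade-in" can be modeled as a weighted mix of two frames. Here is a minimal Python sketch, using plain lists of pixel brightness values in place of real frames; all names and values are invented for illustration.

```python
# A dissolve as a weighted mix of an outgoing and an incoming frame.
# Frames are simplified to lists of pixel brightness values (0-255).

def dissolve(frame_out, frame_in, t):
    """Mix two frames; t runs from 0.0 (all outgoing shot) to 1.0 (all incoming shot).
    A fade is the same operation with an all-black frame on one side, and a
    superimposition is this mix held at an intermediate value of t."""
    return [round((1 - t) * a + t * b) for a, b in zip(frame_out, frame_in)]

outgoing = [200, 200, 200]  # bright outgoing shot
incoming = [0, 100, 255]    # incoming shot
print(dissolve(outgoing, incoming, 0.0))  # [200, 200, 200] -- before the transition
print(dissolve(outgoing, incoming, 0.5))  # [100, 150, 228] -- midpoint: both shots visible
print(dissolve(outgoing, incoming, 1.0))  # [0, 100, 255] -- transition complete
```

The midpoint value shows why a dissolve briefly superimposes the two shots: each pixel carries information from both.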

Wipe

A wipe is a transition device created on a switcher, a special effects generator, or an optical film bench whereby one image or shot is gradually replaced on the screen by another. A wipe may begin on one side of the screen and move across to engulf the opposite side. It can also begin in the middle of the frame and move outward. Ending one shot by dollying or zooming in to a black object that fills the frame and beginning the next shot by dollying or zooming out from a black object is sometimes called a natural wipe (Figure 5.17).

FIGURE 5.17 A wipe appears as one image is replaced by another with a straight line separating the two images. A wide variety of patterns separating the two images may also be used.

image

Defocus

Placing one image out of focus and gradually bringing a replacement image into focus is called a defocus transition.

Swish Pan

A rapid movement of the camera on the tripod swivel or panning head causes a blurring of the image, which can be used as a swish pan transition from one scene to another. This transition is frequently accompanied by up-tempo music, which accelerates the sense of action and movement rather than creating a pause.

Special Effects

Split Screen or Shared Screen

Having one image occupy a portion of the same frame with another image is called a shared screen. When the frame is split into two equal parts by the two images, it is called a split screen. Sometimes these techniques make it possible to show two different but simultaneous actions on the same screen.

Superimposition

Having two different shots occupy the same complete frame simultaneously is called a superimposition. One shot is usually dominant over the other to avoid visual confusion. The superimposed images should not be excessively detailed or busy. In effect, a superimposition looks like a dissolve that has been stopped while in progress. Combining a long shot and close-up of the same person from different angles sometimes creates an effective superimposition (Figure 5.18).

FIGURE 5.18 A superimposition is a combination of two images created by stopping a dissolve at midpoint. Depending on the intensity of each image, part of one will bleed through the other.

image

Keying and Chroma Key

A specific portion of a video image can be completely replaced with a second image using keying or chroma key techniques. Titles and graphics can be inserted into a portion of another image. A scene from a still photograph or slide can be inserted into a blue- or green-colored area (e.g., a green or blue screen on the set) in a shot using video chroma key. The monochrome blue or green portion of the latter shot is replaced with the inserted shot.
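The pixel-level logic of chroma key can be sketched very simply: wherever a pixel of the foreground shot is "green enough," substitute the corresponding pixel of the background shot. The following Python example is a drastic simplification of real keyers; the threshold test and all values are illustrative assumptions.

```python
# Minimal chroma-key sketch over lists of (r, g, b) pixels.
# Threshold and pixel values are illustrative assumptions, not a real keyer.

def chroma_key(foreground, background, threshold=100):
    """Replace strongly green foreground pixels with the background pixel."""
    out = []
    for (r, g, b), bg_pixel in zip(foreground, background):
        if g - max(r, b) > threshold:   # green dominates -> key this pixel out
            out.append(bg_pixel)
        else:                           # keep the foreground subject
            out.append((r, g, b))
    return out

fg = [(30, 220, 40), (180, 170, 160)]  # a green-screen pixel, then a skin tone
bg = [(10, 10, 80), (10, 10, 80)]      # pixels of the inserted scene
print(chroma_key(fg, bg))  # [(10, 10, 80), (180, 170, 160)]
```

Production keyers add soft edges, spill suppression, and color correction, but the core substitution works as above.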

Matte and Blue Screen

A matte is used in film to black out an area in one image that will then be filled in with a second image. Matting is to film what keying is to video. Blue screening in film is equivalent to chroma key in video, because the blue screen area in one image is replaced by a second image.

Negative Image

A normal visual image is positive. A negative image reverses the brightness and darkness of the original image. Blacks become whites and whites become blacks. Colors turn into their complements. In television, this reversal can be accomplished simply by reversing the polarity of the electrical picture signal. In film, a negative print can be made from a positive image.
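Digitally, the polarity reversal can be expressed directly: each channel value v becomes its complement, 255 - v. A small illustrative Python sketch:

```python
# Negative image: invert every channel of every (r, g, b) pixel.
# Pixel values are illustrative; 255 is the maximum of an 8-bit channel.

def negative(pixels):
    """Blacks become whites, whites become blacks, colors become complements."""
    return [tuple(255 - v for v in px) for px in pixels]

print(negative([(0, 0, 0), (255, 255, 255), (255, 0, 0)]))
# [(255, 255, 255), (0, 0, 0), (0, 255, 255)] -- black <-> white, red -> cyan
```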

Freeze Frame

A freeze frame is a continuing still image from one frame of a video or film shot created during postproduction. Usually the action stops by freezing the last frame of a shot, such as at the conclusion of a film or video program.

Digital Transitions

A wide variety of effects now can be created with a digital effects generator. Page turns, shots on each side of rotating blocks, a subject morphing into another subject, shots disintegrating into another shot, plus virtually any transition imaginable are all possible through the use of digital switchers and effects. The specific language and naming of digital transitions remains in flux as the industry attempts to reach standards agreed upon among the many manufacturers of digital equipment. A caution for new directors: Use a special effect only when needed, not just because the equipment is capable of performing such effects. A special effect is not special if overused (Figure 5.19).

FIGURE 5.19 A digital effect is any one of a number of transitions, such as page turns, unique patterns, or three-dimensional transitions.

image

Scene Construction

A scene is a series of shots of action occurring in continuous time at one place. It is important to ensure that significant changes in camera angle or camera-to-subject distance occur between two successive shots within a scene. The camera angle should change at least 45 degrees with respect to the subject from one shot to the next, unless there is a significant change in camera-to-subject distance. A few aesthetic reasons for making a cut that involves a change of camera-to-subject distance are (1) to depict an action that was omitted in the previous shot; (2) to provide a closer look at an event or object; (3) to emphasize an object or action; and (4) to draw back and establish the setting. A cut from a medium shot to a close-up provides a closer look at an object or event, whereas a cut from a long shot to a close-up emphasizes an object or action. Cutting from a medium shot to a long shot helps to reestablish the setting and place the action in context or in broader spatial perspective.

A conventionally constructed scene might begin with a long shot or establishing shot to place the subjects within a specific setting. Then the camera gets progressively closer to the subject as the action intensifies, and finally the camera pulls back to reestablish the setting at the conclusion of the scene. An alternative approach is to begin a scene with a close-up and gradually pull back from shot to shot to reveal more and more of the setting as the action progresses. The latter approach is initially somewhat confusing and spatially disorienting, but it also arouses viewer curiosity.

Certain types of cuts involve severe changes in camera-to-subject distance, such as those from long shot to close-up or vice versa. In realist situations, these dramatic changes of scale should be used sparingly and primarily for emphasis, because they often have a distracting effect on the audience. More gradual changes of scale are less disruptive and provide a smoother transition. A new shot or image should serve a purpose different from that of the previous shot. It can anticipate the audience’s next point of interest—that is, it can be psychologically motivated on the basis of viewer expectations. It can present additional or contrasting information by revealing actions that were hidden from a previous angle. In general, every shot should be cut as short as it can be without inhibiting its function. A good director separates essential from nonessential information to determine how long a specific shot will maintain viewer interest.

Continuity Editing

Continuity editing usually means creating a smooth flow from one shot to the next. Actions that begin in one shot are completed in the following shot with no apparent gaps in time. There is continuity in the spatial placement and the screen direction of moving and stationary objects from shot to shot. Conventional continuity can, of course, be disrupted in time and space. Gaps or jump cuts in the action can be consciously edited into a scene. Actions can be repeated over and over again, slowed down, and speeded up. But it is important to learn the basics of continuity editing before attempting to disrupt it. Beginning video and film directors need to first acquire some appreciation of the difficulty inherent in trying to maintain continuity and in meeting conventional viewer expectations.

Pace and Rhythm

The selection of long- and short-duration shots affects the pace or rhythm of a scene. A director must be sensitive to changes in pace and rhythm. To build a scene out of different shots, a director must match the tempo or rhythm of the editing to the subject matter and the audience’s expectations. Rapidly cutting together many short-duration shots for a how-to film about woodworking, for example, distracts the audience’s attention from the primary subject matter. Slow-paced editing for a soft drink commercial may be extremely boring and an ineffective persuasion technique. A fast-paced exposition and a slow-paced climax in a dramatic production usually fail to achieve the desired emotional effect and dramatic structure.

Compression and Expansion of Time

Directors can compress and expand time through editing, even while preserving the illusion of temporal continuity. For example, suppose that you wish to record the action of someone getting dressed and ready for work in the morning. A single shot of this activity that preserved exact temporal continuity might last 10 minutes or more in actual duration. But by recording different segments of action and editing them together, the essential elements of the activity can be preserved without creating any readily apparent gaps in time. How can this be done? Simply by cutting from a long shot of the action to a close-up of a hand or an object in the room, and then cutting back to a long shot in which the person is more completely dressed than could actually have occurred in the duration of the close-up.

A director can speed up an action by eliminating unimportant or repetitious actions between cuts. The action and time are condensed and compressed. The same technique can be used for someone crossing a street. For instance, we begin with a full shot or long shot of the person starting to step off one curb, then we cut to a medium shot and then a close-up of his or her feet or face. Finally, we present a long shot of the person reaching the other side of the street. This edited version of the street crossing might last just five seconds, although actually walking across the street takes more than 20 seconds. Condensing or compressing action can increase the pace and interest of actions. Actions can also be expanded through editing. An action can be shown, followed by the thoughts of one or more of the characters as the action occurs. In reality, the action and the thinking would have occurred simultaneously, but in a media production, each must be shown separately, lengthening the time it takes to depict the actual time of the incident.

Screen Directionality

Depicting a three-dimensional world in a two-dimensional medium presents the director with special problems of screen directionality. Screen directionality refers to the consistent direction of movements and object placement from one shot to the next. Inconsistent screen direction causes spatial confusion. What viewers actually see seems to contradict their expectations. This type of confusion can be effective in music videos and formative or modernist works of art. But, in general, maintaining directional consistency of looks and glances, object placements, and subject movements within the frame reduces viewer confusion by increasing spatial clarity in realist and functionalist works.

Directional Glances

It is important to record a consistent pattern of performers’ spatial looks and glances within the frame to preserve an illusion of reality. The improper placement of a camera can result in confusing inconsistencies (which again can be useful in a modernist approach). A close-up of one character looking screen left at a second character is usually followed by a shot of the other character looking screen right to suggest that he is looking back at the first character. When one person looks down at another person, the other should look up within the frame of the second shot, and so on. The camera must be placed and the image framed so that there is directional consistency from one shot to the next.

The 180-Degree Axis of Action Rule

The 180-degree rule of camera placement ensures directional consistency from shot to shot. An imaginary line can be drawn to connect stationary subjects. Once the camera is placed on one side or the other of this axis of action, all subsequent camera placements must occur on the same side of the line to prevent a reversal in the placement of objects in the frame (Figure 5.20).

FIGURE 5.20 A line drawn through the two main subjects in a scene divides the studio plot into two areas, one on each side of that line. Once shooting in that scene has started on one side of the line, the rest of the scene must be shot from that same side, unless a shot moves from one side to the other while recording.

image

A moving subject establishes a vector line, and all camera placements are made on one side of this line or the other to maintain consistent screen direction of movement. If the camera crossed this line, a subject going from left to right in one shot would appear to be going in the opposite direction in the next shot. There are ways to break the 180-degree rule without creating spatial confusion or disrupting an illusion of reality. First, the camera can move across the line during a single shot, establishing a new axis of action on the opposite side of the line to which all subsequent shots must conform. A director can also cut directly to the line itself by placing the camera along the line and then cross over the line in the next shot to establish a new axis. Finally, the subject can change direction with respect to the camera during a shot and thus establish a new line.
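The rule has a simple geometric reading: the axis of action is the line through the two subjects, and a 2-D cross product tells which side of that line a camera position lies on. Setups whose signs match keep screen direction consistent. This Python sketch is an illustration only; the coordinates and names are invented assumptions.

```python
# Geometric sketch of the 180-degree rule using a 2-D cross product.
# Subjects and camera positions are illustrative (x, y) coordinates.

def side_of_axis(a, b, camera):
    """>0 on one side of the A-B axis, <0 on the other, 0 on the line itself."""
    (ax, ay), (bx, by), (cx, cy) = a, b, camera
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

A, B = (0, 0), (10, 0)            # two actors define the axis of action
cam1, cam2, cam3 = (5, 4), (2, 7), (5, -3)

same_side = side_of_axis(A, B, cam1) * side_of_axis(A, B, cam2) > 0
crosses_line = side_of_axis(A, B, cam1) * side_of_axis(A, B, cam3) < 0
print(same_side)     # True  -- cutting between cam1 and cam2 keeps screen direction
print(crosses_line)  # True  -- cutting from cam1 to cam3 reverses screen direction
```

A camera placed exactly on the line (the "cut to the line itself" exception) yields zero, which is why that placement can legitimately bridge the two sides.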

SOUND AND IMAGE INTERACTION

An overvaluation of visual images can lead directors to neglect accompanying sounds, but sound is an extremely important aspect of video and film production. Sound can complement and fill out the image. It can also conflict with corresponding images or produce an independent experience. Sound can shape the way in which images are interpreted. It can direct our attention to a specific part of the image or to things not included in the image. Some sounds and music have the ability to stimulate feelings directly. Sounds can create a realistic background or a unique, abstract, impressionistic world.

Sound and image relationships can be divided into four oppositional categories: (1) on-screen versus off-screen sounds, (2) commentative versus actual sounds, (3) synchronous versus asynchronous sounds, and (4) parallel versus contrapuntal sounds. Understanding each of these categories opens up a broad range of aesthetic possibilities. This section concludes with a separate consideration of combining music and visual images from two different standpoints: editing images to prerecorded music and composing original music for video and film.

Sound and Image Relationships

image

On-Screen versus Off-Screen Sound

A sound coming from a source that is visible within the frame is called an on-screen sound. Off-screen sounds come from sources assumed to be just outside the frame. The use of off-screen sounds can enhance spatial depth. Noel Burch, a media theorist, has pointed out that an off-screen sound can seem to come from six possible positions outside the frame: from the left, from the right, from above, from below, from behind the wall at the back of the frame, and from behind the camera. The precise spatial placement of an off-screen sound is not always discernible. Stereophonic or multichannel sound obviously helps us to determine the position of an off-screen sound, but even in monophonic sound the basic effect is the same: our attention is directed off-screen to the source of the sound, particularly if on-screen performers are looking in the appropriate direction. By arousing our curiosity, off-screen sound can set up an expectation of the visual presentation of its source. It can also break down some of the limitations of a visual frame, opening it up as a realistic window on the world, as opposed to a more abstract, self-contained, modernist aesthetic world.

Commentative versus Actual Sound

Sound and image relations also can be classified on the basis of the supposed actuality or artificiality of their sound sources. Commentative sound has no known source, whereas actual sound is presumed to come from some actual or real sound source either inside or just outside the frame. Spoken dialogue is usually actual sound. Narration is commentative sound, unless the narrator appears on-screen. Music can be either commentative or actual sound. Scoring is commentative sound, and source music is actual sound. Commentative sound effects, such as shrill metallic sounds that have no readily apparent source, can help to create an impressionistic, emotionally charged atmosphere. Commentative music, narration, and sound effects can be effectively used to reinforce specific feelings. Lush, romantic music, for example, might complement a romantic scene, such as the reunion of long-separated lovers, although such conventions easily become musical clichés.

Synchronous versus Asynchronous Sound

Synchronous sounds match their on-screen sources. Lip-sync sounds synchronize with the lip movements of the on-screen speaker. Sound effects match their on-screen sound sources. For example, the sounds of a runner’s feet striking the pavement should be synchronized with the corresponding visual images. Music can also be said to be synchronous with visual actions or cuts that precisely follow the beat or rhythm. Asynchronous sound does not match its sound source. Poor-quality lip-sync is asynchronous sound, as when a film dubbed into a foreign language fails to match the lip movements of the speaker. But asynchronous sound is not always poor-quality sound. In fact, asynchronous sound offers many exciting aesthetic possibilities, such as providing a basis for contrapuntal sound. Commentative sound effects can be used asynchronously to contrast with their corresponding visuals. One example is the substitution of a train whistle for a woman’s scream in Alfred Hitchcock’s The Thirty-Nine Steps (1935). Commentative, asynchronous sound effects can produce emotional effects or meanings that counterpoint rather than parallel their accompanying visual images.

Parallel versus Contrapuntal Sound

The emotional effect or conceptual meaning of sounds and images can be virtually the same or completely different. Speech, sound effects, and music can parallel the meaning or emotions of the visuals, or they can counterpoint them. The term counterpoint in music refers to two separate and distinguishable melodies that are played simultaneously. The same term has been applied to image and sound interaction in video and film. Contrapuntal sound has an emotional effect or conceptual meaning that is different from its corresponding visuals. Sounds and images are aesthetically separate and often contrast with one another.

Parallel sound, like musical harmony, blends together with its corresponding visuals. Like musical notes played simultaneously and in harmony, sounds and images can have parallel meanings or emotions that are mutually supportive. Suppose that the visually depicted events are sad or tragic but the accompanying music is upbeat and in a major key, so that it communicates a bright, happy, strong feeling. In this case, the music counterpoints the corresponding visuals. The same thing happens when sad music accompanies a happy event. But when sad, minor key music accompanies a tragic scene, the sounds and images parallel one another in emotional tone.

Speech sounds and sound effects can parallel or counterpoint their corresponding images. For example, the film musical Singin’ in the Rain (1952) begins with the main character, Don Lockwood, describing his path to Hollywood stardom. Lockwood gives a short autobiography to his fans in which he claims to have received his training and background at elite, high-class schools and cultural institutions. But what we see contradicts his voice-over narration. We see that he actually began his performance career in pool halls and bars and gradually worked his way into the movies as a stuntman. His elitist posturing provides a pseudo-sophisticated, tongue-in-cheek commentary on Hollywood. The meaning of what we see contradicts the meaning of what we hear, producing a powerfully humorous effect.

Composing Images for Prerecorded Music

The use of music in video and film is a rather complex art. It is important for directors to understand some of the basic aesthetic possibilities inherent in two approaches to combining images and music: (1) editing visual images to preselected, prerecorded music and (2) composing original music for video and film, even if the responsibility for the music is in the hands of a specialist, such as a music director, composer, or performer.

Visual images can be selected and ordered into a pattern that is prescribed by prerecorded music. For example, fast-paced music might be accompanied by rapid cutting of visual images and rapid action within the frame, and slow-paced music might call for less frequent cutting and slower movements. The visual action might reach its climax at the same time as a musical crescendo or swelling in the volume and intensity of the music. The timing of the visuals can be made to coincide with the timing of the music so that both begin and end at the same points and achieve a parallel structure throughout. Dancing and singing sequences require a high degree of synchronization and parallelism between the music and visuals. The music can be recorded in advance and used as a basis for the choreography. Prerecorded music establishes a basic structure and timing to which the performance and editing of visual images must conform, unless conscious asynchronization or contrapuntal relations between the sounds and images are desired.

Composing Music for Prerecorded Images

Music and Image Interaction

•  Intensify the action

•  Intensify the dramatic tension

•  Establish the period or location

•  Set atmosphere or mood

•  Stimulate screen emotion

•  Fill dead air

Another approach to music and image interaction is to compose original music for specific film or video sequences. Music composed for video or film usually serves one or more of the following functions: (1) intensifying the action or dramatic tension, (2) establishing the period or place, (3) setting the atmosphere or mood, (4) stimulating a specific emotion in conjunction with a character or theme, and (5) avoiding screen silence. Music rhythm can intensify action and create dramatic tension. The pace of music can increase with the speed of the action, such as a crescendo that accompanies a dramatic climax or crisis. Music can communicate time and place by virtue of its source, period, and style. Selecting a specific mode of music affects the overall mood or atmosphere. A specific melody can develop an emotion in conjunction with an important character or theme. Leitmotifs can intensify audience identification with specific people or characters and stimulate emotions. Finally, music can be used simply as a filler to cover silence or to attempt to create viewer interest during slow-paced visual action sequences.

Background music is all too frequently used to fill a void rather than to create a specific effect in conjunction with visual images. Careful selection and design of music is a much better approach to the problem. Original music for television and film can consist of sounds from a single instrument, such as a solo guitar or flute, or a fully orchestrated symphonic score. The number of musicians required and the complexity of the music can vary considerably depending on the specific needs and requirements of the project and the available budget. Sometimes a scarcity of materials and resources can be an advantage. Simple music and solo performers can be easier for beginning producers to obtain and control. New computer music programs and synthesizers make it easier to have original music composed, played, and recorded by one person. Digital audio workstation software, such as Apple’s Logic, facilitates the creation of film and video music, especially for directors and editors with limited composing experience and low budgets. Regardless of the sophistication of the music, video and film directors should make every attempt to collaborate with composers and musicians so that the music can be designed and performed for their specific needs. Original music can be tailored to a video or film production much better than prerecorded library music, but in some cases the latter is more cost-effective.

PREPARING THE SHOOTING SCRIPT

Directors begin to apply aesthetic principles to concrete production problems when they plan a production. Production planning is usually done on paper. Directors specify shots and sound effects for each scene in the script as they prepare a final shooting script (Figure 5.21).

FIGURE 5.21 A shooting script should provide the director with enough information to shoot the scene as closely as possible to the vision the writer had when writing the script. If the script is not clear, then the director must make decisions on specifically how to shoot the scene.

image

After the shooting script is completed, shot lists are often written up for camera operators. Sometimes a storyboard consisting of still-frame drawings of every shot in the final shooting script is drawn up as a visual guide to production.

After carefully analyzing the script, a director begins to prepare a final shooting script by indicating specific types of shots, transition devices, and sound effects. Directorial terms for specific types of visual images and sounds must be thoroughly learned before a shooting script can be created. Shots are continuous recordings of actions within a scene made by a single camera. Abbreviations are used to specify camera placements and movements, such as ECU (extreme close-up) or MS (medium shot), which specify the desired distance of the camera from the subject. Where the camera is placed can have a considerable impact on what action is viewed or how a subject appears. Camera movements, such as CAMERA PANS RIGHT or CAMERA DOLLIES IN, are also specified in a shooting script, as are transitions between one shot and another, such as CUT TO, FADE OUT, FADE IN, and DISSOLVE. Camera movements add motion to the recording of a scene and can also change the perspective or point of view on a subject or action. Various transition devices are used to communicate changes of time or place to the audience. Sound effect designations, such as SFX (sound effect): PLANE LANDING, specify concrete sounds that should accompany specific images.

Preparing a final shooting script allows a director an opportunity to shoot and reshoot a video or film production on paper at minimal expense before actual recording begins. To compose a final shooting script, a director must understand a full range of aesthetic possibilities. There are many different ways to record a specific scene in any script. A director interprets the action and decides on the best shots, transition devices, and sound effects for each scene. Directors select specific recording techniques, such as different types of shots, for each scene on the basis of the aesthetic approach they have chosen. A director’s overall aesthetic approach in large part determines the meaning of images and sounds by setting a context for interpretation. A realist approach often involves the use of techniques that help to preserve an illusion of reality through a transparent or unnoticed style. Modernist and postmodernist approaches call attention to techniques and highlight a director’s manipulation and control over the recording medium and subject matter.

Some types and combinations of visual images and sounds can be realist in one context but modernist or postmodernist in another. For example, jump cuts are discontinuities in human actions or movements from one shot to the next. Because they disrupt the continuous flow of realist time and space, jump cuts are often considered a modernist technique, but they are also used in news and documentary interviews.

A jump cut indicates that something has been removed and is often considered more honest than using techniques that disguise the fact that editing has been done. From a modernist perspective, jump cuts, such as those in Jean-Luc Godard’s Breathless (1960), call attention to directorial control by breaking down the illusion of temporal continuity, or the smooth, continuous flow of time from shot to shot. But from a realist perspective in news and documentary productions, jump cuts make it clear that the recording of an event has been edited.

PRODUCTION COORDINATION

Video and film directors are personnel managers as well as artists using the media of moving images and sounds. Directors coordinate production by working with their staff, crew, and performers. Frequent production meetings facilitate coordination. A cooperative, collective effort has to be carefully orchestrated and managed by the director if a quality product is to be achieved. The director must be a good judge of character.

Production Meetings

Frequent production meetings provide the director and the production staff with an opportunity to work out important details and problems collectively. Before actual production, the director usually meets with key staff members, such as the producer, the art director or scenic designer, and the lighting director. The overall goals and objectives of the film or video project are clarified during these meetings. Everyone must understand the overall purpose and design of the production to prevent members of the production team from working at cross-purposes. In live video production, everything must be worked out and all problems solved before recording begins, because there is no postproduction and therefore little or no room for mistakes.

The director must be able to communicate effectively with the staff if these problems are to be quickly and efficiently resolved. The more talented, independent, and opinionated the staff, crew, and performers are, the more likely it is that problems and disputes will arise, unless a common purpose has been collectively determined or hierarchically imposed at the beginning. The director’s authority may be questioned, and his or her status with the staff, crew, and performers jeopardized, if an unruly participant is allowed to dominate the proceedings. The director must be explicit and authoritative in giving commands but must also listen to the needs, desires, and problems of the staff, crew, and talent. Production meetings provide an opportunity to do both, and effective managers are often good listeners.

Casting

To cast a specific performance effectively, the director must have a firmly established interpretation of each character or role. Each role, however small, is important in terms of the quality of the final product, and a video or film program is often only as good as its worst performer. It is often said that almost any director can evoke an excellent performance from an experienced, talented performer but that good direction is most evident in the quality of smaller roles and bit parts. Good casting depends on a director’s understanding of at least three factors: the audience, the character or role, and the physical appearance of specific performers. The natural look and feel of a performer is probably the most important factor in terms of his or her appropriateness for a specific role, although skilled actors can drastically change their appearance and still appear naturally suited to a role, as Robin Williams did in Mrs. Doubtfire (1993) (Figure 5.22, A and B).

FIGURE 5.22 The range actors can portray serves them well, especially if they are asked to take the part of someone as different from their own persona as Robin Williams was able to do in Twentieth Century Fox’s production of Mrs. Doubtfire (A) and in Touchstone/Columbia Pictures’ Bicentennial Man (B). (Courtesy of Twentieth Century Fox and Touchstone/Columbia Pictures.)

image

Casting sessions often consist of actors reading a short scene from the script so that the director and producer can evaluate their suitability for a role. Sometimes several actors will be auditioned before a part is cast. Directors often have to deal with performers who have different levels and types of acting experience. Inexperienced actors need to be explicitly told what is wanted. Most fail to understand or prepare themselves for the rigors of video and film acting. Inexperienced performers have difficulty relating to an awkward, unfeeling camera. Constant feedback and praise from a director can greatly improve the quality of a performance. Experienced professionals, on the other hand, may require more freedom in some situations and a firmer hand in others. Most directors work somewhere between two extremes of management style: allowing an actor to find his or her own role, or taking an authoritative approach in order to develop a consistent interpretation.

Rehearsals

Once the performers have been selected, a director can begin a preliminary run-through of the production by helping the actors to develop their specific characters. Preliminary practices of a performance are called rehearsals. Rehearsals sometimes begin with reading sessions, in which actors sit around a table and read their respective parts before actually performing them. Many rehearsals may be necessary before the actors are fully prepared to perform unerringly before the camera(s). All the bugs have to be worked out before a performance can proceed without problems or disruptions. The final rehearsal, which usually takes place with the sets fully dressed and the performers in costume, is called a dress rehearsal. It simulates the actual recording session in virtually every respect.

Multicamera and live productions usually demand more rehearsal time than single-camera productions, because entire scenes or programs are recorded at one time rather than broken up into segments for a single camera. The entire performance must be worked out to perfection so that even minor mistakes are avoided during actual recording. Actors in single-camera productions do not always know how one shot relates to another. Single shots are often recorded in isolation, and performers cannot build a performance in perfect continuity as they would on the stage or for a multiple-camera production. Close-ups are often recorded out of sequence, for example. The director must be able to provide the performer with a context that will help the actor achieve a proper performance level so that shots can be combined during postproduction editing. One of the director’s primary responsibilities during rehearsal and production is to ensure that the actors maintain continuity in the dramatic levels of their performances from one shot to the next. Many directors prefer to have a complete rehearsal before recording begins.

Performer and Camera Blocking

The director usually stages and plots the action in two distinct stages: performer blocking and camera blocking. Before selecting final camera placements, angles, lenses, and so on, the director will frequently run through the basic actions to be performed by the talent. This is called performer blocking. A director must carefully plan the entire performance in advance. Only rarely are the performer’s movements precisely set during performer blocking alone. Instead, a general sense of the action is determined, which facilitates camera blocking and prepares the performers for actual recording.

Camera blocking refers to the placement of cameras so that they can follow the movements of the talent. Whether several cameras or a single camera will actually record the action, the director must be able to anticipate the types of shots that will provide adequate coverage, dramatic emphasis, and directional continuity from shot to shot. Shot lists can be drawn up and supplied to the camera operator(s). These lists are a helpful guide to camera operation during blocking sessions and actual production. Every consecutive shot in each scene for each camera is written on a piece of paper that the camera operator can tape to the back of the camera for easy reference. Shot lists indicate types of shots and camera movements called for in the final shooting script. In some recording situations, there is minimal time to block the cameras and the performers separately, and the two stages are combined. During camera blocking, the performers, director, and camera operator(s) exchange ideas and discuss problems as the action is blocked or charted on the floor or on location. The director refines his or her conception and interpretation of the script, making notations of any deviations from previous shot selections. Performers not only learn and remember their lines, they must also remember their marks, that is, the precise points where they must position themselves during actual recordings (Figure 5.23).

FIGURE 5.23 A director blocks performers and cameras before and during rehearsal in order to determine the best placement of each to provide the framing and movement intended for each shot. A plot drawn as if looking straight down on the scene helps the director to visualize where cameras and performers need to be blocked.

image

MULTIPLE-CAMERA DIRECTING

Directing several cameras simultaneously requires a different approach from that of single-camera recording. The preproduction planning stages are always extensive, because major changes are more difficult to make once recording has begun. Performers must learn the lines of dialogue for several scenes, because more script material will be recorded in a single session. Camera operators must anticipate what camera positions, lens types and positions, and framing they are to adopt for upcoming segments. Ample time and space must be provided for the cameras to be moved during recording. Every detail must be worked out in advance, and a detailed shooting script or camera shot sequence must be provided to each key member of the production team.

When the recording session has been properly planned and practiced, tremendous economies in time and expense can be accomplished by using multiple-camera recording. But a multiple-camera situation can also be extremely frustrating if a key individual is improperly prepared or the director has failed to anticipate all the problems that can arise. Murphy’s law—If anything can go wrong, it will—is an optimistic expectation in multiple-camera and live television recording situations where directors, crews, and talent are insufficiently prepared.

For multiple-camera recordings of uncontrolled events, such as sporting events, cameras are often placed in fixed positions. Each camera operator is responsible for covering a specific part of the action from one position. The director of a live production, such as coverage of a sporting event, may have to watch and control as many as 10 cameras, some of which are connected to slow-motion recorders. The director must be able to respond instantaneously to any action that occurs, rapidly cutting from one camera to another. The director selects from among the images displayed on a bank of television screens. Because only minimal scripting is possible, the action and atmosphere within the director’s control room itself often become intense during a sporting event or similar production. Accurate decisions must be made quickly. To anticipate actions and cuts, directors must be intimately familiar with the particular sport.

Timing

An important function performed by the director is timing. The control of program pace in terms of the speed of dialogue, actions, and editing is one form of timing. As discussed in Chapter 4, Scriptwriting, dramatic pacing is a subjective impression of time in video, film, and sound productions. Through effective editing, a sequence of action can be made to seem longer or shorter in duration to the audience. Other types of timing are equally important in the production process.

Running Time

A director is responsible for ensuring that the program length or actual running time of a completed program conforms to the required length. In video production, running time should be distinguished from clock time. The latter refers to the actual time of day on the studio clock. Each video or film program or program segment has its own running time, which is the exact duration of the program, regardless of what time of day it is actually shown. During live productions, a timer is used to calculate the running time of each program segment so that the total running time will conform to the scheduled overall length of the program.

Timing in Production

Television commercials, public service announcements, and broadcast or cablecast programming must be accurately timed during production. When recording a commercial, for example, a director must obtain shots that will add up to exactly 10, 30, or 60 seconds. The screen time of the various shots and vignettes must add up to the exact screen time of the commercial format that has been chosen. Live video production demands precise screen timing with a timer as well as a studio clock, as the show cannot be reedited, lengthened, or shortened.
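The screen-timing arithmetic can be checked mechanically. The following Python sketch is purely illustrative; the shot names and durations are invented for a hypothetical 30-second spot:

```python
# Hypothetical shot list for a :30 spot; names and durations are invented.
shots = {
    "open on product": 4.0,
    "testimonial": 14.5,
    "demo close-up": 8.5,
    "logo and tag": 3.0,
}

target = 30.0                # required screen time in seconds
total = sum(shots.values())  # planned screen time

if total == target:
    print("Timing is exact:", total, "seconds")
else:
    print("Off by", total - target, "seconds; recut before air")
```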

Backtiming is the process of figuring the amount of time remaining in a program or program segment by subtracting the present time from the predetermined end time. Music is sometimes backtimed so that it will end at the conclusion of a live production. This means that if the music should last three minutes, you backtime three minutes from its end and start playing it three minutes before the end of the program, gradually fading it up. In other words, if you want it to end at 6:59, you backtime it three minutes and start it at 6:56. In multiple-camera video production, the talent is often told how much time remains by means of hand signals. Five fingers, followed by four, three, two, and one, indicate how many minutes of running time remain for that segment. Rotating the index finger in a circle indicates that it is time to wind up a performance because the time is almost up. A cutoff signal (the hand cuts across the neck, as though the stage or floor manager’s own head is coming off) indicates the actual end of a segment or show.
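Backtiming is simple clock arithmetic, and it can be sketched in Python as follows (the function name and the 24-hour "HH:MM" format are assumptions made for this example, not broadcast practice):

```python
from datetime import datetime, timedelta

def backtime(end_clock: str, duration_min: int) -> str:
    """Return the clock time at which to start a segment so that it
    ends exactly at end_clock (24-hour "HH:MM" strings)."""
    end = datetime.strptime(end_clock, "%H:%M")
    start = end - timedelta(minutes=duration_min)
    return start.strftime("%H:%M")

# The chapter's example: three minutes of music ending at 6:59 p.m.
print(backtime("18:59", 3))  # -> 18:56
```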

On-the-Air Timing

Prerecorded videotapes such as commercials, which will be inserted into a program as it is being broadcast or cablecast, must be accurately backtimed or cued and set up on a playback machine. A countdown leader displaying consecutive numbers from 10 down to 0 is placed just ahead of the prerecorded pictures and sound. The numbers indicate how many seconds are left before the start of the prerecorded material. The playback can then be prerolled, that is, begun at the appropriate number of seconds before the commercial is due to start.

Production Switching

In multiple-camera video production, the director supervises virtually all of the editing in the control room during actual production. Production editing is done by means of a switcher, a device that allows shots to be selected from among several different cameras instantaneously. The director usually commands the technical director (TD) to change the transmitted image from one camera to another. (In many local stations, the director actually operates the switcher.) The TD then pushes the correct buttons on the switcher. Each button on the switcher is connected to a different camera or image source. When the TD pushes a button, the switcher automatically substitutes one picture for another.

The TD and the director view these changes on television monitors as they are taking place. The images sent out of the switcher can either be directly transmitted and broadcast during live production or recorded on videotape. A videotape recording can be used for subsequent postproduction editing and delayed broadcast, cablecast, or closed-circuit showing. A switcher is both an electronic editing device and a special effects machine. The TD can not only cut from one image or camera to another but also fade in, fade out, dissolve, wipe, key, chroma key, and superimpose images. Various transition devices can be used in changing from one image or camera to another.

A switcher consists of a series of buttons organized into units called buses (Figure 5.24). There are three types of buses: preview, program, and special effects or mix. Individual buttons within each bus are linked to specific sources, such as Camera 1, Camera 2, Camera 3, a videotape player, a remote source, still store, character generator, digital generator, and a constant black image.

FIGURE 5.24 The simplest switcher would contain at least four buses: one for on-air (program), one to check shots ahead of time (preview), and two for mixing or wiping shots (mix/effect, or M/E). In addition to all other inputs to the switcher, the program bus must also contain a button for switching to the M/E bus.

image

Each bus has one button assigned to each of these image sources. Thus, a bus allocated to previewing images before sending them out of the board, called a preview bus, would have at least nine individual buttons connected to the nine image sources cited earlier: Cameras 1, 2, 3, videotape player, remote source, still store, character and digital generators, and a constant black image. When one of these buttons is pressed, the image from that source appears on the preview monitor. A second bus having the same number of buttons is assigned to the actual program feed; on a simple switcher, this is the signal that will actually be transmitted or recorded.

A switcher having just two buses would only allow the TD to preview images and to cut directly from one image to another. If any special effects are to be created, the switcher must have special effects or mix buses. These two effects or mix buses are usually designated “A” and “B.” To send any visual signal on the effects buses out of the switcher, a button designating the effects buses must be activated on a secondary program bus called the master program bus.

The master program bus acts as a final selection switch, determining what the switcher will transmit. The TD can select the program bus (which contains one of the nine visual sources) or the effects bus by depressing one of these two buttons on the master program bus. A master preview bus is also available on more sophisticated switchers, so that an effect, such as a split-screen or digital effect, can be previewed before recording or transmission via the master program bus.
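The bus-and-button arrangement described above can be pictured as a simple data structure. The Python sketch below is a loose model, not any real switcher’s interface: the source names and method names are invented, and real switchers add mix/effects buses and transitions beyond the straight cut shown here.

```python
# Hypothetical model of a two-bus switcher: each bus is a row of
# buttons wired to named sources; pressing a button on the program
# bus changes what the switcher sends out.
SOURCES = ["cam1", "cam2", "cam3", "vtr", "remote",
           "still", "cg", "dve", "black"]

class Switcher:
    def __init__(self, sources):
        self.sources = sources
        self.preview = "black"   # what the preview monitor shows
        self.program = "black"   # what is transmitted or recorded

    def press_preview(self, source):
        """Line up a source on the preview bus."""
        assert source in self.sources, "no button for that source"
        self.preview = source

    def press_program(self, source):
        """A direct cut: punch a source straight to program."""
        assert source in self.sources, "no button for that source"
        self.program = source

    def take(self):
        """Cut the previewed source to program, as a TD would on 'Take'."""
        self.program, self.preview = self.preview, self.program

sw = Switcher(SOURCES)
sw.press_preview("cam2")   # line up Camera 2 on preview
sw.take()                  # cut it to air
print(sw.program)          # -> cam2
```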

Preset multilevel switchers can handle complex digital changes of shots and sequences. Digital switchers are designed with switch-sequence storage so that any number of shot changes can be set and stored in memory. As each switch is called for, the video operator simply calls up that specific change on the built-in computer without having to set each aspect of the switch, such as multiple levels of keys and layers or complex transitions. Today’s video switchers must also be able to change shots between standard-definition (SD) and high-definition (HD) video sources. Complex visual switches may also require matching audio-source changes to stay in sync with the production.

Director’s Commands

A director must communicate accurately with the entire crew to coordinate a production effectively, but communication with the TD is critical because the TD’s response and action determine what pictures will be seen on air or fed to the tape. Operating the switcher during production requires careful preparation and infallible communication between the director and the operator of the switcher. The TD must know in advance exactly what switcher operations the director will call for and the order in which he or she will call for them. It is easy to become confused and push the wrong button or misunderstand the director’s commands. It is the director’s responsibility to convey clear and distinct commands to the TD and to provide adequate time between the preparatory command and the command of execution.

Video and film directors have developed relatively precise terminology and methods with which to communicate with their cast and crew. The method is based on a two-step system: a warning of an impending command, called a preparatory (prep) command, followed by a command of execution at the precise moment of timing. A preparatory command always begins with either the words “stand by” or “ready.” This tells everyone that a new command is about to be announced and that they should pay attention. The prep command needs to be detailed, precise, and clearly stated so that the crew and cast directly involved know what to prepare for when the command of execution arrives a few seconds later. The command of execution needs to be as short and as precise as possible, because it determines the precise moment when an action is to take place.

If the director wants Camera 3 to zoom in to a two-shot, for example, a typical command series would be as follows.

Simple Command Sequence

image

More often a single command series is directed at more than one crew or cast member. The beginning command of a newscast might be directed at the TD (the switcher transition), the audio operator (to open a mic), the camera operator (who gets the opening shot), and the floor manager (to give the anchor a stand by and a cue to start talking).

Complex Command Sequence

image

The order of the commands and the precise nature of their execution are critical. Sloppy or inaccurate calls by a director will guarantee a sloppy production. Commands to the TD and camera operators are especially important because in both cases some preoperation activities may need to be carried out before the command can be followed. The switcher may need to have a complex set of buttons aligned or set up, or a camera or lens may need to be moved or adjusted before the shot is ready.

Live-on-Tape Recording

A live-on-tape (multiple-camera) director can use the techniques of live multiple-camera video to record events quickly and efficiently but also has the option to change the shot sequence during postproduction. This is accomplished by recording the images from several cameras simultaneously while at the same time making some editing decisions on the switcher that are recorded on a separate recorder. Editing decisions made during production can then be changed during postproduction by inserting different camera shots. This method gives the director maximum flexibility to produce a program economically in the shortest possible time without jeopardizing the quality of the final product, because changes can always be made later. In this way, multiple-camera recording techniques can be combined with the techniques discussed next, allowing the director to benefit from the advantages of both methods.

SINGLE-CAMERA DIRECTING

The number of cameras used and the order and time frame of recording or filming shots constitute the major differences between multiple- and single-camera directing. The types of shots are the same, but the physical arrangement and order of shooting those shots differ between the two production modes. Recording with a single camera usually takes longer than multiple-camera recording. The lighting, camera, and set are sometimes moved and readjusted for each shot. Better quality images are often obtained using this method, as fewer compromises have to be made in terms of recording logistics. Each shot is composed and the action repeated so that an optimal recording is made. But potential problems can arise in terms of discontinuity or mismatched action from one shot to the next.

Single-camera production in both film and video uses three types of camera setups: (1) master shots, (2) inserts, and (3) cutaways. Single-camera recording normally begins with a shot of the entire action in a scene, or as much of the complete scene as it is possible to record in a single shot. This is often called a master shot. Master shots are usually, but not always, long shots. Specific actions occurring within the master shot are then repeated after the camera has been placed closer to the subject for shots known as inserts. Inserts are usually the medium shots and close-ups indicated in a script. Master shots and inserts may be rerecorded several times before an acceptable recording has been made; each specific recording is called a take of the same shot. A script continuity person then marks the shooting script (as shown in Figure 5.25) with the number of the exact shot specified in the script, and each take is circled at the beginning point of actual recording.

FIGURE 5.25 Continuity marks on a script are made as the scene is shot. The continuity clerk indicates when a shot starts and ends with codes agreed upon with the editor. Often the codes will indicate the framing of a shot as well as its length and the number of takes. These kinds of markings are invaluable to the editor during postproduction.

image

A line is drawn vertically through the script to the point where actual recording of that take ends. Inserts are normally extended before and after the exact edit points in the script to allow for overlapping action and a range of editing choices. A marked shooting script provides a complete record of actual recording in terms of master shots and inserts. Cutaways are additional close-ups and medium shots of objects or events that are not central parts of the action and are often not specified in the script. They can be inserted into a scene to bridge mismatched actions or to hide mistakes within or between a master shot and an insert. The master shot or long shot can act as a safety net in the event that matching medium shots or close-ups specified in the script do not prove satisfactory. A continuously running long shot or master shot can be quite boring in comparison with using several different long shots, medium shots, and close-ups for emphasis and variety. But the knowledge that the master shot covers the entire action and can be used at any point can be of some comfort to the editor.

Insert shots record some of the same actions as the master shot but from a closer camera position or a different angle of view; they are also called cut-ins, because they will be cut into the master shot. The director and the script continuity person must observe and duplicate every detail recorded in the master shot during the recording of the inserts. The actors must perform the same gestures, wear the same clothing, and repeat the same actions and lines of dialogue if actions are to overlap and match from one shot to the next. In extremely low-budget situations, where it is impossible to record several takes of each insert, a director is well advised to record a few cutaways for use in bridging mismatched actions between shots that are discovered during postproduction editing. It is the director’s responsibility to provide adequate coverage of events and actions so that a program can be edited with minimal difficulty. Good coverage provides insurance against costly reshooting.

Cutaways

Cutaways are shots of secondary objects and actions that can be used to hide mismatched action and to preserve continuity or simply to add depth and interest to the primary action of a film or television program. Cut-ins depict actions that appear within the frame of master shots, whereas cutaways depict actions and objects outside the master shot frame. In single-camera news recording, a reaction shot of the reporter or interviewer is sometimes used to bridge gaps or to avoid jump cuts in a condensed version of an interview or simply to provide facial expressions to comment on what is being said. Close-ups of hands gesturing and relevant props can also be used as cutaways. They can be inserted at almost any point to bridge mismatched action in master shots and inserts or simply to add more detail to the spatial environment. Cutaways provide an editor with something to cut away to when editing problems are discovered.

Shooting Ratios

All single-camera directors try to get an acceptable shot in as few takes as possible; nonetheless there can be considerable variation in shooting ratios from one production to another. Shooting ratios, which refer to the ratio of visual material shot to visual material actually used, can range from about 5:1 to 100:1 in different types of production situations. Obviously, more takes of each shot translate into higher shooting ratios. Network commercials often have the highest shooting ratios. At the other end of the spectrum, student productions often have shooting ratios as low as 5:1 or even 3:1, because of limited production funds. Low-budget situations call for highly efficient production methods.
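The ratio arithmetic is straightforward. As an illustrative sketch (the footage totals below are invented numbers, not drawn from any real production), it might be computed like this:

```python
# Illustrative only: footage totals are invented numbers.
def shooting_ratio(minutes_shot: float, minutes_used: float) -> float:
    """Ratio of material recorded to material in the finished cut."""
    return minutes_shot / minutes_used

# A 10-minute student piece cut from 50 minutes of footage:
ratio = shooting_ratio(50, 10)
print(f"{ratio:.0f}:1")  # -> 5:1
```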

Director’s Terminology

Because a single-camera director is normally present on the set with the camera operator rather than isolated in a control room, as in multiple-camera production, he or she can communicate directly with the crew and talent. Directorial terminology for camera placements and movements is generally the same as that for multiple-camera recording, but a few commands are quite different. When the crew and the talent are ready to record a single shot, the director says, “Roll tape” to the videotape (in video) or audio (in film) recordist and “Roll film” to the film camera operator. When the tape or film is up to speed, the operator says, “Speed” or “Camera rolling.” The director then calls, “Slate,” and a grip or camera assistant slates the shot by calling out the scene, shot, and take numbers, which are also written on a board called a slate. In film, the slate has electronic or physical clap-sticks that are brought together so that separate sounds and pictures can be synchronized later. The scene, shot, and take numbers displayed on the slate are used as a reference during postproduction editing. They are usually written down on a camera report sheet, which is sent to the film laboratory or used by the videotape editor.

When the talent is ready and the slate has been removed from the shot, the director says, “Action” and the performance begins. When a shot is over or a problem develops in midshot, the director says, “Cut.” If the director wants a scene printed for later viewing, the command “Print” will be given. The “Print” command is noted on the camera report. Because editing can be done during postproduction, there is no need for the director to communicate with a technical director (switcher operator) during actual production. Editing decisions will be made later (Figure 5.26).

FIGURE 5.26 The director on a single-camera shoot stands next to the camera to give directions to both the camera operator and the talent. Either the director or a production assistant keeps an accurate record of each shot on a camera log.

Summary

Video and film directors are artists who can turn a completed script into a shooting script and produce works of art from recorded visual images and sounds. To prepare a shooting script, a director must know when to use different types of shots. Shots can be categorized by camera-to-subject distance, camera angle, camera (or lens) movement, and shot duration. Directors also know how to control various aspects of visual composition and image qualities, such as tone, scale and shape, depth, and speed of motion, and the use of various transition devices and special effects.

In scene construction, conventional continuity often begins with a long shot and gradually moves closer to the subject as the action intensifies. Continuity suggests an uninterrupted flow of time, with no apparent gaps or mismatched actions from shot to shot. Video and film are temporal and spatial arts. Classical continuity refers to the continuity of time and continuity of space maintained in many classical Hollywood films (e.g., most films made in Hollywood between 1920 and 1960).

The aesthetic use of sound is extremely important. Although sound can be used simply to accompany and complement the visuals, it can also be treated as an independent aesthetic element. There are four basic categories of sound: speech, music, sound effects, and ambient noise. A director must be familiar with the basic elements of music, such as rhythm, melody, harmony, counterpoint, and tonality or timbre, as well as different types of music. Sound effects are sometimes used to enhance an illusion of reality or to create imaginative sound impressions. Ambient noise, also called background sound, is present in any location and can be used to preserve temporal continuity and to create an illusion of spatial depth. A director can affect the perception of temporal continuity through the selection and ordering of sounds. Mismatched levels and gaps in the presentation of sounds can create discontinuity, thus disrupting the flow of time. It is possible to condense and expand time without disrupting the illusion of continuity, however.

Music composed for television or film often performs one of the following functions: intensifying the drama, establishing the period or place, setting the mood or atmosphere, stimulating a specific emotion in conjunction with a character or theme, or simply filling in and avoiding silence. The director and the composer should collaborate with one another, fully and creatively exploring the artistic potential of visual image and music interaction.

The director supervises the creative aspects of television and film production by coordinating the production team, initiating and chairing preproduction and production meetings, casting the film or television program with the producer and casting director, and organizing production rehearsals.

The director’s function can vary considerably between multiple-camera and single-camera recording situations. The multiple-camera video director frequently sits in a control room isolated from the talent and crew during actual recording. In live and multiple-camera production, directors are usually directly involved in the selection of specific types of shots and in the creation of transition devices and special effects. The multiple-camera director supervises the movement of several cameras, using an intercom, and controls the editing by having the TD punch buttons on the switcher, changing the main signal from one camera or source to another. The single-camera director, on the other hand, is usually present on the set during the shooting and works directly with the talent and crew during the time between shots, when the camera is being moved and the lighting and sound-recording equipment is reset. The editing of single-camera production is usually left to a specialist who cuts the film or electronically edits together videotape during postproduction.

EXERCISES

1.  View a scene or sequence from a completed production repeatedly and write a postproduction shooting script or shot analysis for it based on actual shots in the finished product. Compare your shooting script or shot analysis for this segment to a published version to determine if you have made proper use of shooting script terms and concepts.

2.  Take a segment from a completed and published script and attempt to transform it into a shooting script by adding specific shots, sound effects, and so on. Use techniques that are consistent with a realist, modernist, or postmodernist approach when creating your shooting script. Do this exercise for both a multiple-camera production and a single-camera production.

3.  Record a television program without watching it. Then play it back without watching the screen. Determine if you are able to follow and understand the program without the visual side of the story line.

4.  Record another program, only this time don’t listen to it. Then play it back, and watch it with the sound turned down. Determine if you are able to follow and understand the program without the audio portion of the story line.

5.  Using either of the tapes from Exercise 3 or 4, carefully listen to the sounds and create a chart indexed by time in seconds. On the chart, indicate when there is music, sound effects, narration, wild sound, and dialogue. Keep each type of sound in a separate column to determine how much of the audio portion of the program is music, SFX, narration, wild sound, or dialogue.

6.  Using either of the programs recorded in Exercise 3 or 4, create a chart showing each transition between shots. List cuts, dissolves, wipes, digital effects, and fades to or from black. Total the number of transitions and tally how many of each type were used in the production.

Additional Readings

Andrew, J. Dudley. 1976. Major Film Theories, Oxford University Press, New York.

Benedetti, Robert. 2002. From Concept to Screen: An Overview of Film and Television Production, Allyn & Bacon, Boston.

Block, Bruce. 2008. The Visual Story: Creating the Visual Structure of Film, Video, TV and Digital Media, Focal Press, Boston.

Bordwell, David, Thompson, Kristin. 1986. Film Art: An Introduction, second ed. Knopf, New York.

Burch, Noel. 1973. Theory of Film Practice, trans. Helen R. Lane. Praeger, New York.

Cury, Ivan. 2006. Directing and Producing for Television: A Format Approach, third ed. Focal Press, Boston.

Dancyger, Ken. 2006. The Director’s Idea: The Path to Great Directing, Focal Press, Boston.

DeKoven, Lenore. 2006. Changing Direction: A Practical Approach to Directing Actors in Film and Theatre, Focal Press, Boston.

Douglass, John S., Harnden, Glenn. 1996. The Art of Technique: An Aesthetic Approach to Film and Video Production, Allyn & Bacon, Needham Heights, MA.

Gross, Lynne S., Ward, Larry W. 2007. Digital Moviemaking, sixth ed. Wadsworth, Belmont, CA.

Hanson, Matt. 2006. Reinventing Music Video: Next Generation Directors, Their Inspiration and Work, Focal Press, Boston.

Irving, David K., Rea, Peter W. 2006. Producing and Directing the Short Film and Video, third ed. Focal Press, Boston.

Kindem, Gorham. 1994. The Live Television Generation of Hollywood Film Directors, McFarland, Jefferson, NC.

Kingson, Walter K., Cowgill, Rome. 1965. Television Acting and Directing, Holt, Rinehart & Winston, New York.

Musburger, Robert B. 2005. Single-Camera Video Production, fourth ed. Focal Press, Boston.

Proferes, Nicholas. 2008. Film Directing Fundamentals: See Your Film Before Shooting, third ed. Focal Press, Boston.

Rabiger, Michael. 2007. Directing: Film Techniques and Aesthetics, fourth ed. Focal Press, Boston.

Rabiger, Michael. 2004. Directing the Documentary, fourth ed. Focal Press, Boston.

Shyles, Leonard C. 2007. The Art of Video Production, Sage.

Thomas, James. 2005. Script Analysis for Actors, Directors, and Designers, third ed. Focal Press, Boston.

Utterback, Samuel. 2007. Studio Television Production and Directing, Focal Press, Boston.

Ward, Peter. 2003. Picture Composition, third ed. Focal Press, Boston.

Watkinson, John. 2000. The Art of Digital Video, Focal Press, Boston.

Wilkinson, Charles. 2005. The Working Director: How to Arrive, Thrive, and Survive in the Director’s Chair, Michael Wiese Productions, Studio City, CA.

Zettl, Herbert. 2008. Sight, Sound, and Motion: Applied Media Aesthetics, fifth ed. Wadsworth, Belmont, CA.

Zettl, Herbert. 2009. Television Production Handbook, tenth ed. Wadsworth, Belmont, CA.
