14

Virtual Cinematography

As a filmmaker, by deciding early on (ideally during pre-production) that you’re going to finish the production using a digital intermediate pipeline, you can gain an incredible advantage. Until recently, most digital intermediate pipelines were designed as a direct substitute for chemical laboratory processing and didn’t take full advantage of the available digital options. But as technology advances, and as filmmakers become more knowledgeable about the various possibilities, digital intermediate pipelines will move toward virtual cinematography.

14.1 What Is Virtual Cinematography?

Many definitions of “virtual cinematography” describe it as the ability to create a shot or sequence entirely within a computer, composed from a number of digitally created elements. However, it’s much more than that. It’s a shooting paradigm that maximizes the range of options a digital pipeline affords during post-production. Or, to put it another way, virtual cinematography enables filmmakers to make creative cinematography decisions during editing, rather than having to commit to them when shooting. This level of control can vary from making simple lighting adjustments to potentially altering camera placement after shooting, techniques demonstrated in films such as The Matrix trilogy.

Why would you want to do this? Because it means that a scene can be lit in a much simpler (and therefore faster and less expensive) way and then tweaked later to the same aesthetic standard, when far more time is available (and external factors are less influential) and, in some cases, with an unprecedented level of control.

There are downsides, however. First, many lighting strategies work extremely well when used practically (on the set) but have no simple digital equivalent. Second, many cinematographers are much more comfortable creating a lighting effect when shooting rather than estimating how the effect might look at a later date. Further, the purpose of creative lighting is to evoke a sense of mood within the scene, and it’s reasonable to assume that this sense of mood is transferred directly into an actor’s performance on set.1

In many cases, however, the advantages of digital cinematography far outweigh the disadvantages. Take, for instance, the stylized “bleach bypass” chemical-processing effect. In a bleach-bypass process, the silver halide crystals aren’t removed from the film negative during development. The net effect is that a monochrome layer (the silver crystals) is superimposed over the color image. When printed, the picture benefits from increased contrast and reduced saturation. In the digital intermediate, this process can be emulated by simply reducing saturation and ramping up the contrast at the color-correction stage. However, the process can also be taken much further in the digital realm than its chemical counterpart. First, the level of the bleach-bypass effect can be adjusted. Digital colorists have complete and independent control over both the level of contrast and the level of image saturation. In the traditional process, it’s possible to limit the bleach-bypass effect to some degree by adjusting the ratio and timings of the chemical baths, but it can’t be done with the accuracy and interactive feedback of digital systems. Second, the effect can be applied selectively—that is, to a specific part of an image—in the digital process. With a digital grading system, different grades can be applied to selected areas of the image and adjusted over time. With a chemical process, selective grading requires extensive masking and duplication processes, which can reduce the image quality and lack the accuracy of the digital equivalent (to say nothing of being extremely difficult to do). A digital system, on the other hand, can produce a bleach-bypass effect limited to a single part of a shot, such as a single character or part of the background—something that is either unachievable or extremely difficult with chemical processes.
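As a rough illustration of the two independent controls described above, the sketch below (in Python; the function name, parameter names, and default values are assumptions for illustration, with pixel values normalized to 0.0–1.0) desaturates toward luminance and then expands contrast around a mid-grey pivot:

```python
def bleach_bypass(pixel, saturation=0.4, contrast=1.5, pivot=0.5):
    """Emulate a bleach-bypass look on one RGB pixel (values 0.0-1.0).

    saturation < 1 desaturates toward luminance; contrast > 1 expands
    contrast around the pivot. The two controls are independent, which
    is exactly the flexibility the chemical process can't offer.
    """
    r, g, b = pixel
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 luma weights
    # blend each channel toward the luma value (desaturation)
    mixed = [luma + saturation * (c - luma) for c in (r, g, b)]
    # expand contrast around the pivot, clamping to the legal range
    return tuple(min(1.0, max(0.0, pivot + contrast * (c - pivot)))
                 for c in mixed)
```

A colorist would expose `saturation` and `contrast` as separate dials, which is precisely the independence the traditional chemical baths lack.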

Ultimately, when embarking on a virtual cinematography methodology, it’s imperative to know in advance which techniques can, and which can’t, be replicated accurately and easily later. This chapter attempts to cover many of the more common strategies, as well as to outline some as-yet-unexploited effects that are possible with a digital pipeline.

The Limit of Facilities

Until recently, facilities and post-production houses offering a digital intermediate service haven’t catered to the virtual cinematographer. The majority of the digital intermediate market has offered an equivalent of a lab process, focusing almost exclusively on providing more comprehensive grading and mastering services than chemical labs provide. It’s somewhat ironic that the market has overlooked many real strengths of a digital pipeline, but then the industry is still in its infancy. The primary concern has been the accuracy and integrity of matching monitor color to film projection color. Presumably this focus will shift as the potential of virtual cinematography is realized.

14.2 Digital Pre-Production

The majority of productions begin with a script. And the majority of those scripts are written using (or at least exist on) computer systems.

Interestingly, what happens next is that the scripts are printed onto paper, with hundreds of copies handed to various people, pages updated as necessary, and notes scribbled on them, requiring careful, meticulous tracking of exactly which copy of a given page of the script is the correct, up-to-date one. A lot can be said for paper, especially compared to computer systems. Paper doesn’t run out of power, it’s easy to modify, and it doesn’t randomly crash at the most inconvenient time. In fact, many pre-production phases are completed almost entirely without the use of any digital systems.

However, a good case can be made for keeping at least part of the preproduction phase digital: collaboration. Take the script, for example. The most up-to-date, complete version can be stored digitally, accessible to those who need it. Updating the script can notify all the recipients of the changes straight away, highlighting them, possibly with notes as to why particular changes were made. Older versions can be archived, available when needed for reference. During shooting, more notes can be added to the script, notes describing dropped shots or camera notes about particular takes. Further down the line, the script can be fed into editing systems, allowing editors to more efficiently organize shots and assemble sequences. When outputting, lines of dialog can be extracted from the script and used to generate captions.

The same is true for other pre-production material, such as storyboards (which can be scanned and used as placeholders for shots in the editing system). Furthermore, digital “pre-visualization” systems can test shots and create lighting plans before shooting begins.

14.2.1 Pre-Visualization

Pre-visualization encompasses the use of storyboard sketches to plan shots and directions in an iconic format, resulting in a plan of a film that closely resembles a comic book. With the use of a digital system, it’s also possible to compose “animatics”—animated storyboards that impart a sense of motion as well as composition and framing. Many filmmakers create animatics by using cheap cameras to shoot test scenes, and then they roughly cut them together for appraisal.

With the use of computer-generated material, 3D shots involving simple or complex cinematography can be planned in advance, taking into consideration such factors as set design, costume, and lighting. Software such as Antics Technology’s Antics (www.antics3d.com) enables the filmmaker to quickly create scene mock-ups, complete with furnishings and virtual actors, and then make adjustments to the position of the camera.

Filmmakers may ultimately use a system that includes a variety of pre-visualization—from storyboard sketches, photographs, and video footage, to 3D models created by the visual effects department—to create a visual plan of the film before on-set shooting begins.


Figure 14–1   Antics Technology’s Antics Software is a simple but effective previsualization tool that enables you to position actors, cameras, and other elements before you begin shooting

14.3 Shooting Digitally

There are many advantages to recording to a digital format as opposed to analog ones, such as video or film. The most obvious advantage is the easy transfer of material when using a digital intermediate pipeline, allowing immediate editing and copying of footage.

Many additional benefits are available. When you shoot a scene digitally, you can monitor the footage with much greater flexibility than with other means. Multiple copies of the camera output can potentially be viewed simultaneously and across great distances. In addition, a number of analytical processes can be run to check the output for potential problems. Systems such as Serious Magic’s DV Rack (www.seriousmagic.com) provide a whole host of monitoring options, such as vectorscopes, waveform monitors, and audio monitors, which run as software modules that intercept the recorded data. This setup costs a great deal less, and occupies much less space, than its real-world counterparts.

Prolonged recording is also possible. When you’re recording straight to a central server, the recording can continue until the disks are full, which may take several hours, depending upon the system’s capacity.2

Film vs. Video as Capture Media

The question of whether to shoot on film or video is an old debate. Historically, it was an easy question to answer: if you wanted your production shown in a cinema, you shot on film. Even when the aim isn’t a cinema release, shooting on film and then transferring to video can produce superior images compared with shooting straight to video. However, this approach tends to be somewhat more expensive.

With a digital intermediate environment, and with the current prevalence of HD video cameras, the debate has become re-energized. On paper at least, HD video has many of the same characteristics as film. Sony’s HDCAM SR video format is uncompressed, meaning that it captures a wide range of color with a high level of precision, and its working resolution is very close to that of 2k film. In addition, it can record scenes in a logarithmic color space, which provides a response to light that’s similar to film. Another obvious benefit is the ease of transferring any video format to a digital intermediate pipeline (compared to the difficulty of transferring film), and conforming using timecodes is much easier. Finally, much effort (and money) can be saved by not having to process entire reels to remove dust and scratches.

The difficulty of replacing film with any video format is that many filmmakers simply enjoy working with film. A quality issue is involved as well. 35mm film is thought to have an inherent resolution of at least 4k. Therefore, until video cameras can capture the same level of detail, film will simply look much sharper. Coupled with this consideration is the fact that video pixels are regularly arranged rectangles, whereas film is made up of randomly shaped grains, which are much easier on the eye and create the illusion of even greater resolution.

14.3.1 Photographic Emulation Filters

Certain shot setups and lighting effects require specific photographic filters or chemical processes. Many of these filters and processes can be successfully replicated digitally. Some can be emulated only to a degree, and some are impossible to achieve without hiring a team of visual effects artists (and even then, they may not be possible). If you know you definitely want a specific effect, it’s probably a good idea to shoot it that way on the set. If you want more creative control later, or the shot would take too long to set up, it’s probably best to plan on creating the effect in post. Either way, it’s always best to get the post-production facility to provide test examples of the kinds of effects you want to achieve, so that you can make a more informed decision.


Figure 14–2   Serious Magic’s DV Rack offers a vast array of video tools that can be accessed with a single laptop

It’s also worth noting that many digital filters can be used with extreme parameters, or in conjunction with others, to create altogether new cinematographic effects, such as applying a barrel distortion effect followed by a star-burst effect. Note, however, that some filters are impossible to replicate digitally. For example, a polarizing filter works on incoming light (i.e., filtering out light of a certain polarity), an effect that’s impossible to reconstruct from a photographed image.


Figure 14–3   A polarizing filter can be used to polarize the incoming light in an image (see also the Color Insert)


Figure 14–4   Without using the filter when shooting, it’s impossible to re-create the effect of polarizing the light (see also the Color Insert)

The Appendix contains a list of common digital filters and their availability and ease of use when compared to practical filters.

14.3.2 Panoramic Images

Panoramic images cover a wide field of view. There’s a very subtle difference between a panoramic image and one that simply has a wide aspect ratio (e.g., anamorphically squeezed images). Panoramic images might have a 180-degree field of view, whereas a wide aspect ratio image has a much narrower horizontal field of view, and the vertical field of view is narrower still.

Panoramic images are created either by using an optical system with a very wide-angle lens or by using multiple cameras. The problem with using an optical system is that the images suffer from optical distortion. It’s possible to correct the distortion, but the result is often a poor-quality image. When using multiple cameras, they are positioned so that each overlaps the others by some amount. The shots are then combined digitally to form a complete image, correcting for differences in exposure and scale. This technique has been used for a long time in digital photography, but it can be applied to moving pictures as well.
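The digital combination step can be sketched in miniature. The Python function below (the name and the simple linear cross-fade are illustrative assumptions, not how any particular stitching package works) joins two overlapping scanlines of brightness values, fading across the shared region so the seam is hidden:

```python
def blend_strips(left, right, overlap):
    """Join two horizontal scanlines that share `overlap` pixels
    (overlap >= 1), cross-fading through the shared region so the
    seam between the two camera views is invisible."""
    out = list(left[:-overlap])               # left-only region
    for i in range(overlap):
        t = (i + 1) / (overlap + 1)           # ramps 0 -> 1 across overlap
        out.append((1 - t) * left[len(left) - overlap + i] + t * right[i])
    out.extend(right[overlap:])               # right-only region
    return out
```

A full stitcher also corrects exposure and geometric differences between cameras before blending; this shows only the final feathering step.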

14.3.3 Stereoscopy

“Stereoscopy” (or “stereoscopic imaging”) is a photographic method that creates the illusion of a 3D image. When you see a 3D image, your left eye receives one image, while your right receives another image, taken from a slightly different viewpoint. Many different techniques can be used to create stereoscopic images, but the most practical one for moving pictures is recording a scene with two identical cameras positioned slightly apart (or using a special stereoscopic camera that comprises two lenses).3


Figure 14–5   Multiple images can be combined digitally to form a panorama

Presenting the image to the viewer is somewhat trickier. The best results are obtained when each member of the audience wears special goggles that project the images directly into their eyes. A much cheaper method is to rely on “anaglyphs.” Anaglyphs are composite images, with one image put through a red filter, and the other through a green or blue one. The viewer then wears eyeglasses that have a red filter over one eye and a green filter over the other, corresponding to each viewpoint’s color. Although the eyeglasses are fairly cheap and the composite images are easy to generate, especially in a digital environment, the viewer sees a monochromatic image.
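Generating such a composite digitally is trivial, which is why anaglyphs are so easy to produce in a digital environment. The sketch below (Python; the function name and the plain channel substitution are assumptions, since production anaglyphs usually apply weighted channel mixes) builds a red/cyan anaglyph pixel from a stereo pair:

```python
def make_anaglyph(left_pixel, right_pixel):
    """Build a red/cyan anaglyph pixel from a stereo pair of RGB pixels.

    The left eye's view supplies the red channel; the right eye's view
    supplies green and blue, so red/cyan glasses route each view to the
    matching eye.
    """
    lr, lg, lb = left_pixel
    rr, rg, rb = right_pixel
    return (lr, rg, rb)   # simple channel substitution
```

The cost of this simplicity is visible in the result: because each eye receives only part of the spectrum, the perceived image is largely monochromatic.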

An alternative option is to use polarized light: each image is separately projected onto the screen by one of two projectors, each polarizing its light differently. The viewer wears eyeglasses with a differently polarized filter over each eye, so each eye sees only the light from its corresponding projector. A recent advancement of this technique uses a special cellophane layer over a laptop or cell phone screen to separate the screen into two halves and polarize each half separately.


Figure 14–6   A left/right pair of images can be combined to form a 3D anaglyph (see also Color Insert)

Widescreen Versus Fullscreen

One decision you might have to make during pre-production is whether to shoot material to a “widescreen” format (e.g., 16:9 or 2.35:1 aspect ratio) or to a “fullscreen” format (4:3 aspect ratio). From a cinematography perspective, the decision affects the shot composition and a whole host of other choices that you will make. But within the digital intermediate, the distinction is somewhat irrelevant. Widescreen images can be cropped to create fullscreen ones, and vice versa. It’s important to use the option that makes best use of the picture area. If you shoot a widescreen format on 35mm film, for example, it’s best to compose the shot at the aspect ratio you desire, but you might as well expose the top and bottom of the image because doing so has no disadvantages. The extra image information you get by making these exposures may be useful in other ways, particularly if the production has to be panned and scanned to produce a fullscreen version later on.

Clearly, one deciding factor is the intended output format. Viewers with widescreen televisions generally prefer fullscreen content cropped top and bottom rather than pillarboxed (with bars at the left and right), and viewers with traditional televisions prefer widescreen productions cropped left and right rather than letterboxed top and bottom. For this reason, it’s important to consider the widescreen composition within a fullscreen frame, and vice versa—a process referred to as “shoot and protect.”
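The cropping arithmetic behind these conversions is simple. The following Python sketch (the function name and rounding behavior are assumptions for illustration) computes the centered “protected” region of one aspect ratio inside a frame of another:

```python
def protected_crop(width, height, target_aspect):
    """Compute the centred crop (x, y, w, h) of a frame that matches
    target_aspect (width/height), e.g. the 16:9 picture protected
    inside a 4:3 scan, or vice versa."""
    src_aspect = width / height
    if target_aspect > src_aspect:
        # target is wider: keep full width, crop top and bottom
        new_w, new_h = width, round(width / target_aspect)
    else:
        # target is narrower: keep full height, crop left and right
        new_w, new_h = round(height * target_aspect), height
    return (width - new_w) // 2, (height - new_h) // 2, new_w, new_h
```

For example, protecting a 16:9 picture inside a 720 × 576 (4:3) frame keeps the full width and trims 85 lines from top and bottom.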


Figure 14–7   Shooting in 16:9 but protecting the 4:3 area. © 2005 Andrew Francis


Figure 14–8   Shooting in 4:3 but protecting the 16:9 area. © 2005 Andrew Francis

14.3.4 Depth Recording

Without a doubt, the single most important advancement in digital production will be the ability to capture depth information (i.e., distance from the lens) with the camera. This capability will open up a multitude of possibilities, as well as make compositing much easier. Elements can be isolated far more intelligently than with the existing methods of rotoscoping, keying, and tracking. It will be possible to enhance the sense of parallax (i.e., the illusion that distant objects move more slowly than closer ones), providing a new range of effects. Atmospheric effects such as fog, lighting, and shadows can be applied much more easily, and with more convincing results, than with existing methods.

At the present time, there’s no reliable system for capturing a scene’s depth information. Part of the problem is that depth must be captured through the same lens that’s used to photograph the scene to ensure that the depth information corresponds to the captured image. The capturing of depth information must occur at the same moment as the image is photographed; otherwise, motion artifacts are introduced. Further, for best results, the resolution of the depth information has to be almost equal to the image resolution.

With photography, it’s very difficult to record depth information to this degree. The closest approach is to record stereoscopic images and then process them, using numerous mathematical operations to extract depth information for each frame.
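At the heart of those mathematical operations is stereo triangulation: for a feature matched in both camera views, depth equals the focal length times the camera separation, divided by the feature’s disparity between the two images. A minimal sketch (Python; names are illustrative):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth from stereo disparity: depth = f * B / d.

    disparity_px: horizontal shift of the feature between views, pixels
    focal_px:     camera focal length expressed in pixels
    baseline_m:   distance between the two camera centres, metres
    """
    if disparity_px <= 0:
        return float("inf")   # zero disparity: point at infinity
    return focal_px * baseline_m / disparity_px
```

The hard part in practice is not this formula but reliably matching features between the two views for every pixel of every frame.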

With computer-generated 3D images, the depth information is already available and can be encoded as an extra channel for later extraction. For example, Autodesk’s 3D Studio Max (www.discreet.com) software can encode such depth information in the image, which can then later be extracted and used in conjunction with certain effects, as seen in Chapter 10.
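As a toy example of putting such a depth channel to work, the sketch below (Python; the function name, fog color, and exponential falloff are assumptions for illustration) composites atmospheric fog over a pixel according to its distance from the camera:

```python
import math

def apply_fog(pixel, depth, fog_color=(0.8, 0.8, 0.85), density=0.5):
    """Composite atmospheric fog over an RGB pixel using its depth.

    depth is the normalized distance from the camera (0 = at the lens,
    1 = far plane), the kind of value a CG render can encode as an
    extra channel. Fog strength grows exponentially with distance.
    """
    fog = 1.0 - math.exp(-density * depth)   # 0 at camera, rising with depth
    return tuple((1 - fog) * c + fog * f for c, f in zip(pixel, fog_color))
```

With real per-pixel depth, the same blend applied across a frame produces fog that correctly hugs the distance, something rotoscoped mattes can only approximate.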

14.3.5 Virtual Sets

Chroma-key backdrops have been used in productions for many years. A foreground element (such as an actor or object) is positioned in front of a background of solid color (such as a blue screen or green screen) and photographed. Special chroma-key software or hardware is then used to remove the background, which makes it possible to position the foreground elements in front of completely different footage. This process is easier to describe than to do, however; creating an end result that looks natural requires a great deal of planning, and skilled compositors are needed to seamlessly blend the background and foreground elements.

This relatively simple concept can be expanded dramatically into a “virtual set” paradigm, whereby the entire scene, except for foreground elements, is created artificially (either from stock footage or computer-generated material). Such virtual sets may be less expensive than building or renting real ones, and they are especially popular with low-budget or corporate productions. Other films, such as Sky Captain and the World of Tomorrow and Sin City, use virtual sets to achieve more stylistic control. For example, Sin City filmed certain scenes in front of a green screen so that foreground elements would have a colorful appearance, while the backgrounds could look more film noir, as they were drawn in the original comic book series.

Shooting Chroma Key

Chroma key is traditionally the domain of the visual effects department; however, it can be a useful tool for any filmmaker and is quite simple to do properly. More adventurous cinematographers might choose to integrate chroma-key elements into a scene for the purpose of adding grading effects later on. For instance, painting a wall bright green allows easy chroma-keying of the wall later on, so any color can be assigned to it during grading.

When shooting for chroma key, you are separating foreground elements (those elements, such as the actors, you want to keep in the shot) from the background (the area that effectively becomes transparent). Successful shooting for chroma key can be achieved by following a few simple rules:

  • Any bright color can serve as a “key.” It’s a myth that blue or green make the best chroma-key colors; most digital chroma-keyers will accept any color as a key.

  • Choose a key color that doesn’t appear in any foreground elements to any degree. One reason that chroma keys tend to be blue or green is that skin tones contain a lot of red coloring but very little blue or green.

  • Light the background as evenly and strongly as possible (but expose as if you were lighting the foreground elements). The lighting should always be objectively verified, for example, with a light meter. Similarly, there should be no shadows on the background.

  • The physical distance between the chroma key and the foreground elements should be as great as possible. This distance will reduce the effects of “spill” (i.e., light reflected off the chroma-key backing contaminating the foreground elements with colored light).

  • All foreground elements should be as sharp as possible to ensure that the chroma-keyer can extract a hard edge later; otherwise, the foreground elements will have a colored edge, contaminated by the chroma key’s color. You may have to select an exposure and a lens combination that affords a high depth of field.

  • Use an imaging device with high resolving power to help produce a good, hard edge on finely detailed areas, such as hair. It is also vital to use an imaging device that does not perform any color compression on the image.

  • Avoid any reflective material in the foreground because this material may pick up reflected light from the chroma key.

  • Compose the foreground elements in the center of the shot, so they’re completely surrounded by the chroma key. Doing so will provide greater flexibility when compositing the shot, which will allow the foreground elements to be repositioned anywhere on the screen, except where a foreground object crosses the edge of the screen when shooting.

  • Be aware of the usable picture area. In a typical chroma-key shot, you can imagine an area, such as a square, enveloping the desired elements. Everything in this area should be surrounded by the chroma key (and no foreground elements should stray from this area). Anything outside this area can be ignored (e.g., bits of scaffolding or crew members) because they will be cropped out.

  • The camera should not move. Integrating camera movement into a chroma-key shot invariably requires a motion-control camera or, at the very least, tracking markers. Otherwise, the shot becomes subject to the effects of parallax.

  • Ultimately, remember that you want a solid area of a single color (i.e., “single color” is defined much more stringently than being “perceptually the same”).
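The core of what a chroma-keyer does with footage shot by these rules can be sketched in a few lines. The function below (Python; a deliberately crude hard-threshold matte, far simpler than any production keyer, with name and defaults chosen for illustration) returns an alpha value for a pixel against a solid key color:

```python
def chroma_matte(pixel, key=(0.0, 1.0, 0.0), tolerance=0.3):
    """Return alpha for an RGB pixel against a solid key colour.

    0.0 means the pixel matches the key (transparent background),
    1.0 means foreground. Real keyers use perceptual colour-space
    distances and soft edges; this shows only the core idea.
    """
    dist = max(abs(c - k) for c, k in zip(pixel, key))
    return 0.0 if dist <= tolerance else 1.0
```

Notice why the rules above matter: uneven lighting or spill widens the spread of background pixel values, forcing a larger `tolerance` and eating into legitimate foreground colors.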

14.4 Digital Post-Production

This book has covered many of the techniques that are frequently used in all sorts of digital post-production environments, from the application of effects and titles to color grading. But there are more techniques that haven’t been fully exploited.

14.4.1 Image-Based Lighting

In 3D computer animation, one of the ultimate goals is to create a “photo-realistic” rendering of a scene, one with accurate lighting and surface interaction. One way to achieve this is to use a physical model for the interaction of light on surfaces, such as “radiosity.” With certain implementations of this technique, it’s possible to use an image of the lighting in a real-world scene (for example, an HDR image of a reflective sphere within a scene), which then becomes the basis for a “map” of the scene’s lighting.

With this approach, different images can be used within the same scene to create different effects. For example, it’s possible to use a lighting map obtained from a nightclub, or one from a rain forest, to produce different lighting effects in the scene. At the present time, this functionality is rarely seen in the digital intermediate environment, but perhaps one day, it will be possible to just load in images of lit environments and apply them to shot footage to quickly generate appearances for the footage.
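The basic operation underlying such a lighting map can be illustrated simply: given a direction in the scene, find the corresponding texel of a latitude/longitude environment image. The Python sketch below (function name and map conventions are assumptions; real radiosity and HDR pipelines do far more than a single lookup) shows that mapping:

```python
import math

def sample_latlong(env, direction):
    """Look up a lighting value in a latitude/longitude environment map
    (rows run from +90 to -90 degrees latitude, columns 0-360 degrees
    longitude) for a unit direction vector (x, y, z), y pointing up."""
    x, y, z = direction
    lat = math.asin(max(-1.0, min(1.0, y)))      # -pi/2 .. pi/2
    lon = math.atan2(x, z) % (2 * math.pi)       # 0 .. 2*pi
    rows, cols = len(env), len(env[0])
    row = min(rows - 1, int((0.5 - lat / math.pi) * rows))
    col = min(cols - 1, int(lon / (2 * math.pi) * cols))
    return env[row][col]
```

Integrating many such samples over a surface’s hemisphere is what lets a captured environment relight an object convincingly.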

Making Video Look (More) Like Film

No technique can make video footage, especially formats such as DV, look as the scene would have looked had it originally been shot on film. The best you can hope for is the scene looking as if it were shot on film and then telecined to video. The distinction is important because the majority of video formats are so limited in factors such as dynamic range and color space (not to mention resolution) that they can’t possibly compete with film.

As always, the best option is to shoot whenever possible on film in the first place. If it’s not possible, you can shoot a scene with a video camera and then run processes on it to make it look more like film.

Several so-called “film look” software packages exist, but they tend to be either so simple that they can’t handle every situation (or produce artificial-looking results) or too complicated to use to optimum effect.

The following tips serve as a starting point for making your video look more like film:

  • Choose certain options during shooting. Make sure the scene is well lit and properly exposed. Turn off automatic exposure and automatic focus because they aren’t normally available on film cameras, and sudden changes in exposure or focus look artificial. Disable any digital filtering options (e.g., digital sharpening). Open the aperture as wide as possible; video cameras require less light than film cameras for proper exposure, which means film footage tends to have a much shallower depth of field than the equivalent video image. To compensate for the wider aperture, the video camera has to increase the shutter speed, producing less motion blur on each frame (although motion blur can be simulated later).

  • Take reference stills of the scene using a photographic camera with the same settings that are on the video camera. The developed pictures can help match colors later.

  • Capture the footage to a set of frames, using an uncompressed, 10-bit format.

  • Deinterlace the footage according to the requirements of the video system.

  • Color-grade the footage, matching the digital image to the reference photograph.

  • Filter the result to remove noise, sharpen edges, and add grain.

  • Speed-change the result to 24fps. If possible, use a method that combines a speed change with motion blurring.

  • Stabilize the result if necessary.
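The speed-change step in the list above can be sketched as a frame-blending resample (Python; the function name and the linear blend are illustrative assumptions; motion-compensated retimers are far more sophisticated):

```python
def retime_to_24fps(frames, source_fps=30):
    """Resample a sequence of frame values from source_fps to 24 fps,
    blending the two nearest source frames at each output time. The
    blend doubles as a crude stand-in for motion blur."""
    ratio = source_fps / 24.0
    out = []
    for i in range(int(len(frames) / ratio)):
        pos = i * ratio                        # position in source frames
        a = int(pos)
        b = min(a + 1, len(frames) - 1)
        t = pos - a                            # blend weight toward frame b
        out.append((1 - t) * frames[a] + t * frames[b])
    return out
```

Here each “frame” is a single number standing in for a whole image; in practice the same weighted blend is applied to every pixel.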

14.4.2 Virtualized Reality

“Virtualized reality” is a method of repositioning cameras after footage has been shot. The premise is that you film a scene using numerous fixed (and identical) cameras. Later, the data from each camera can be collated and interpolated (in terms of space and time) to re-create the position of a camera at any time. Using such a system is similar to working with 3D animated scenes, where cameras can move to any position at any time, following paths that are impractical or even impossible to follow with real cameras. Currently, the technology is in its infancy, although it was used to a limited degree in broadcasting the 2001 Super Bowl, where a number of carefully positioned cameras were used to create dynamic replays across the field. Similar effects have been created in films such as The Matrix trilogy, which used multiple cameras firing in sequence to create a camera move in very slow motion (a technique also referred to as “timeslicing”). All virtualized reality shots require careful planning and elaborate setups, and thus they are mainly for special-purpose effects. However, as the technology develops, we may see more productions take advantage of the benefits, perhaps to the degree that cranes and tracks are needed less during filming.

The Line Between Visual Effects and the Digital Intermediate

It’s important to remember that the digital intermediate team generally doesn’t share the same skill set (or equipment) as dedicated visual effects artists. With time, the definition of a “visual effect” (and therefore the need for a specifically trained artist to implement it) will become less clear-cut, and expectations of what the digital intermediate pipeline can deliver will grow. As noted earlier, the current trend is for colorists to become more knowledgeable about editing operations, so that they can make on-the-spot adjustments during grading if needed. Another trend is editors knowing more about visual effects, from compositing elements together to manipulating and animating 3D data. So the questions are: what constitutes a visual effect requiring a dedicated team outside the digital intermediate environment, and what should be expected of digital intermediate artists?

First, a team of visual effects artists is required to create highly complicated and/or time-consuming visual effects. More often than not, the facility defines these boundaries at its own discretion, but in general, the following rules of thumb determine what constitutes a visual effects shot:

  • Creating from scratch. This includes inserting new objects into a scene; it also includes replacing (or even “rebuilding”) an element of a shot, such as repairing heavy damage to a range of frames (e.g., a tear in a piece of film).

  • Character animation. Animating in 2D or 3D to imbue an element with a sense of “personality.”

  • Complicated tracking effects. Tracking moving features can vary in complexity. However, trying to track subtle movements, or partially or wholly obscured features, or attempting to replicate effects such as motion blur, are best left to specialized compositors.

  • Changing a fundamental characteristic of a shot. For instance, altering the motion of a moving object independent of other elements (or adjusting the motion of the camera) or changing the composition of elements in a shot.

14.5 Shooting for the Digital Intermediate

In general, shooting for a digital intermediate pipeline is analogous to shooting in other ways. However, a number of techniques can be used to maximize the efficiency of the process down the line with the current, commonly available tools.

  • Shoot at the highest possible quality. The quality of the source material translates directly into the quality of the final production. Even VHS video masters look better when they originate from 35mm film material rather than VHS footage.

  • Maximize the contrast ratio of every scene. Doing so ensures more data to work with during the digital intermediate process.

  • Get as much color onscreen as possible. Colors can be changed to a degree during digital grading, but having strong colors in the original footage makes it easier to pull “keys” for different elements to make adjustments. Be aware of “spill” from overly bright colors, which may reflect onto other surfaces. Even if the shot is destined to be a black-and-white image, shooting in color actually provides greater control over the look of the final image, because the red, green, and blue channels can be selectively mixed to produce the monochrome image.

  • Concentrate on the scene’s action. The performance is one of the most critical elements of a production, and it’s also one of the only elements that can’t be adjusted during post-production.

  • Get correct exposure and focus. Good exposure translates into higher-quality source images and reduces the level of noise in the final output. Focusing can’t be easily corrected; although digital sharpening techniques are available, they don’t compare to a correctly focused image and, in some cases, do more harm than good. If a shot is destined to be out of focus, sometimes it’s best to shoot in focus and use digital defocusing tools later.

  • Avoid the use of photographic filters. Many filters, particularly color ones, can be replicated digitally, although some filters (such as polarizing filters) have no digital equivalent. Unnecessary use of photographic filters may limit options later.

  • Keep shots steady. Although digital-stabilization processes can correct a small amount of wobble in the footage, they degrade the image to some degree and, at the very least, result in some cropping of the image.

  • Fill the maximum shooting area. Even when composing a shot for a specific region, the extra picture information outside the “active” picture area can be used for digital processes.

  • Keep material in good condition. Although digital restoration is very sophisticated, correcting faults such as dust, scratches, and video dropout is a very laborious process.

  • Shoot at a high frame rate to provide more options. With more frames in the source footage, you have more options for retiming, restoration, and even interactive control of effects, such as motion blur, during post-production. Be warned though: a higher frame rate directly equates to higher production costs.

  • Keep it wide. It’s possible to zoom into shots during post-production to emulate a narrower camera angle but not a wider one. If you have any doubt about the desired focal length of a shot, aim to shoot wider rather than narrower. Beware that digital zooms only work up to a point and can sometimes result in larger film grain and a loss of sharpness. Also, remember to “shoot and protect” where needed (see the preceding section).
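The channel-mixing approach to monochrome conversion mentioned above can be sketched in a few lines. This is a minimal illustration using NumPy; the function name and the particular weights are illustrative, not a standard, and real grading tools expose the same idea as a “channel mixer” control.

```python
import numpy as np

def mix_to_mono(rgb, weights=(0.5, 0.35, 0.15)):
    """Mix R, G, B channels into a single monochrome image.

    rgb: float array of shape (height, width, 3), values in [0, 1].
    weights: per-channel contribution; varying these changes the
    tonal rendering of the monochrome result.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()   # normalize so overall brightness is preserved
    return rgb @ w    # weighted sum over the channel axis

# A heavily red-weighted mix darkens blue skies, much as a red
# lens filter would on black-and-white stock.
frame = np.random.rand(4, 4, 3)
mono = mix_to_mono(frame, weights=(0.8, 0.15, 0.05))
```

Because the weights can be chosen after the fact, a color original preserves every possible monochrome rendering, whereas footage shot in black and white commits to one mix at the moment of exposure.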

14.6 Summary

Currently, the digital intermediate process offers a number of possibilities to the filmmaker. For those comfortable working with film, it offers benefits in editing, effects, and color grading. For those working with video, it provides a complementary process, offering high-quality duplication and a streamlined workflow. And for the more experimental filmmaker, many as yet untapped techniques are available for filming in new and unique ways, imbuing a production with stylization that would be very difficult to achieve through other means and providing more options (and perhaps more time) for experimentation after shooting has completed.

A number of pitfalls must be avoided throughout the digital intermediate process, but knowledge of where things can go wrong and why some things are done the way they are, combined with careful planning, can result in higher-quality, more controllable, and possibly more stylized results. To paraphrase one of the digital intermediate producers I’ve worked with, “The digital intermediate isn’t just a process; it’s a complementary art.”

1 This phenomenon has been witnessed on several occasions; actors often give better performances on the physical set than in front of a green screen.

2 Most digital-imaging systems currently record to removable digital media, which must frequently be replaced.

3 The separation distance between the two lenses determines the apparent depth of the scene when viewed; the optimum result is obtained when the separation is 1/30 of the distance from the lens to the closest object in the scene.
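The 1/30 rule of thumb in the note above reduces to simple arithmetic; a minimal sketch (the function name is illustrative):

```python
def interaxial_separation(nearest_object_m):
    """Apply the 1/30 rule of thumb for stereo shooting: the lens
    separation is 1/30 of the distance from the lenses to the
    closest object in the scene (both in meters)."""
    return nearest_object_m / 30.0

# For a subject 3 m from the camera, the rule suggests a
# separation of 0.1 m (10 cm).
sep = interaxial_separation(3.0)
```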
