10

Digital Effects and Titles

The previous chapter dealt with ways to repair damage sustained by different formats, as well as digital methods for subjectively improving the perceptual quality of footage. This chapter focuses on methods for adjusting images using various digital effects, as well as on adding titles, watermarks, and logos to the footage prior to output.1

10.1 Optical Effects

The simplest and most common type of effect used is the “optical” effect.2 This covers a variety of processes used to reposition and retime shots, as well as to apply transitions to finished footage.

Many of these optical effects can be applied at the push of a button in the conforming or grading system, or are run as separate processes that generate new material to be brought into the conforming system. None of these operations takes long to render, though much time may be spent finessing them to achieve exactly the desired effect.

10.1.1 Flips and Flops

Sometimes it’s necessary to reverse images in a sequence, so that, for example, an actor looks to the left instead of to the right. Problems such as these may be solved simply by “flipping” a shot, turning it upside-down, or “flopping” it, so that the left edge of the frame becomes the right edge, and vice versa.

In a digital environment, images can also be easily rotated in 90-degree increments. Note that rotating an image by 180 degrees is equivalent to flipping and then flopping it.

None of these operations affect the quality of the image, because all the image information remains unmodified; it’s just rearranged.3
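Because these operations simply reorder pixels, they are trivial to express in code. The following minimal sketch, assuming frames are held as NumPy arrays of shape (height, width, channels), shows a flip, a flop, and the equivalence of a 180-degree rotation to a flip followed by a flop; the function names are purely illustrative and not part of any particular system.

import numpy as np

def flip(frame: np.ndarray) -> np.ndarray:
    """Turn the image upside-down (the top edge becomes the bottom edge)."""
    return frame[::-1, :, :]

def flop(frame: np.ndarray) -> np.ndarray:
    """Mirror the image so the left edge becomes the right edge."""
    return frame[:, ::-1, :]

# A 180-degree rotation is equivalent to a flip followed by a flop,
# and none of these operations alter any pixel values.
frame = np.random.rand(1080, 1920, 3)
assert np.array_equal(np.rot90(frame, 2), flop(flip(frame)))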

10.1.2 Transitions

Transitions between shots, such as dissolves and wipes, require new material to be generated from the source material. Simple transitions can usually be generated by the conforming system (and may be done automatically, provided the necessary parameters are in the conform EDL), but more complex transitions, such as a 3-way dissolve (where the outgoing shot dissolves into a new shot, which in turn dissolves straight away into a third shot), or those that require specific attention, may have to be done using a separate system.

images

Figure 10–1   Flipped, flopped, and flip-flopped images

A dissolve transition simply fades one shot out while fading another in over a specified duration, whereas a wipe uses a predetermined pattern to control how much of each shot is visible in different areas of the frame, applied over a fixed duration, as covered in Chapter 7.
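As a rough sketch of these two transitions, assuming both shots are held as floating-point NumPy arrays of identical size, a dissolve is a weighted mix of the two images and a simple wipe reveals the incoming shot according to a pattern (here, a hard vertical edge moving across the frame). A production system would typically soften the wipe edge and ease the timing.

import numpy as np

def dissolve(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Cross-fade: at t=0 only shot A is visible, at t=1 only shot B."""
    return (1.0 - t) * a + t * b

def horizontal_wipe(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Reveal shot B from the left edge of the frame as t runs from 0 to 1."""
    boundary = int(round(t * a.shape[1]))
    out = a.copy()
    out[:, :boundary] = b[:, :boundary]
    return out

# A 24-frame dissolve, with t advancing linearly over the duration.
a = np.zeros((540, 960, 3))
b = np.ones((540, 960, 3))
frames = [dissolve(a, b, i / 23.0) for i in range(24)]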

In a digital system, even these transitions can be modified in a variety of ways to produce nonlinear transition effects. For example, a shot may start to rapidly dissolve into another one and then slow down, maintaining some of the previous shot, while the new shot dominates. A wipe might slowly grow from one corner of the image and accelerate toward the end of the transition duration. Alternatively, a wipe could be combined with a dissolve, so that the dissolve effect is distributed unevenly across the frame. They can also be combined with other effects—for example, a drop shadow.

images

Figure 10–2   An image rotated in 90-degree increments

Many digital intermediate systems can produce these nonlinear effects, with a wide range of adjustable parameters, although the exact types of modification vary among digital intermediate pipelines.

Many systems allow the use of effects at every stage of the process. For example, Thomson’s Bones (www.thomsongrassvalley.com) allows effect nodes to be added to any shot, in any order.

10.1.3 Motion Effects

The speed of a shot can be adjusted (or “retimed”) within the digital intermediate pipeline for practical or creative purposes. It can be as simple as speeding up a shot by removing a proportion of the frames (for example doubling the speed of a sequence by removing every other frame), slowing down a shot by repeating a proportion of frames (halving the speed by showing every frame in a sequence twice), or creating a freeze-frame effect, by repeating a specific frame for the desired duration.
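Treated as a list of frames, these simple retimes amount to little more than index bookkeeping, as the following illustrative Python sketch shows; the function names are assumptions rather than the terminology of any particular system.

def double_speed(frames):
    """Double the speed by discarding every other frame."""
    return frames[::2]

def half_speed(frames):
    """Halve the speed by showing every frame twice."""
    return [frame for frame in frames for _ in range(2)]

def freeze_frame(frames, at, hold):
    """Hold frame `at` for an extra `hold` frames before continuing."""
    return frames[:at + 1] + [frames[at]] * hold + frames[at + 1:]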

images

Figure 10–3   Effects can be applied to transitions, which themselves can be nonlinear

images

Figure 10–4   Some systems, such as Thomson’s Bones, allow effects to be used in conjunction with any other tool

As with transition effects, motion effects can be listed in the conform EDL and may be performed automatically by the conforming system. More complex, nonlinear speed effects are also possible within the digital pipeline. For example, the fashionable timing effect often seen in trailers, where a shot runs at high speed before suddenly slowing to a crawl, is possible using motion-interpolation systems that can selectively increase or decrease a shot’s speed, usually by plotting an acceleration or speed graph.
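A speed graph can be thought of as a per-frame playback rate that is accumulated to find which source frame each output frame should come from. The sketch below illustrates the idea by simply holding the nearest source frame; a true motion-interpolation system would instead synthesize new in-between frames. The curve values are arbitrary examples.

import numpy as np

def retime(frames, speed_curve):
    """speed_curve[i] is the playback speed at output frame i
    (1.0 = normal speed, 4.0 = four times faster, 0.25 = slow motion).
    Accumulating the speeds gives the source position of each output
    frame; the nearest source frame is then held."""
    positions = np.cumsum(speed_curve) - speed_curve[0]
    indices = np.clip(np.round(positions).astype(int), 0, len(frames) - 1)
    return [frames[i] for i in indices]

# Trailer-style ramp: half a second at high speed, then a sudden crawl.
speed_curve = [4.0] * 12 + [0.2] * 48
# retimed = retime(source_frames, speed_curve)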

images

Figure 10–5   The original image sequence

images

Figure 10–6   The sequence at double speed, achieved by discarding alternate frames

images

Figure 10–7   The sequence at half speed, achieved by duplicating every other frame

images

Figure 10–8   The sequence with a freeze frame

images

Figure 10–9   A dissolve uses overlapping A-roll and B-roll material

The Use of B-Rolls

The creation of B-rolls, covered in Chapter 7, may be necessary for certain optical effects, particularly those that modify the duration of a shot. Dissolves, for instance, require footage from both the outgoing and incoming shots to cover the duration of the dissolve.

The extra material is usually created as a B-roll, effectively becoming an additional reel as far as the conforming system is concerned. This allows the B-roll to be sent to separate systems that can then generate and render the required effects as a new sequence.

For the sake of continuity, it may be necessary to incorporate the rest of the transitioned shots as part of the transition effect, so that individual shots are separated only by cuts and the pipeline treats a long sequence of dissolves as a single, long shot. This approach reduces the chance of errors, such as duplicate frames, being created at the join between the rendered effect and the adjacent material. However, it requires additional storage space and can take longer to re-render.

An issue often arises concerning the order in which effects and grading are applied. For instance, it’s much more convenient to apply grading to rendered dissolve material than to render the dissolve from graded material. This is because dissolves are less likely to require changes, whereas grading tends to undergo constant revision. In addition, several different grades may be needed for different output formats, which would otherwise mean rendering a separate dissolve for each output format.

On the whole, grading can be applied to rendered dissolves by selectively mixing together the two grades on either side of the dissolve and applying that mix to the dissolved region. However, with certain shots and grading types, this method can create artifacts, where the grading of the incoming shot may be visible on the outgoing shot (or vice versa). In this case, it may be necessary to first apply the appropriate grading to the incoming and outgoing shots and then create the dissolve material from the graded sequences. Note that regrading (applying additional grading to a previously graded, rendered image) degrades the image further. To successfully change the grading of such an image, the original grade must be adjusted and reapplied, and the dissolve re-rendered.
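One way to picture this mixing is sketched below: two illustrative grades (the grade operations themselves are arbitrary placeholders, not any production’s actual settings) are blended with the same weight the dissolve used at each frame, and the result is applied to the rendered dissolve material.

import numpy as np

def grade_outgoing(img):
    """Illustrative grade for the outgoing shot: warm, lifted shadows."""
    return np.clip(img * [1.10, 1.00, 0.92] + 0.02, 0.0, 1.0)

def grade_incoming(img):
    """Illustrative grade for the incoming shot: cooler, more contrast."""
    return np.clip((img - 0.5) * 1.2 * [0.95, 1.00, 1.08] + 0.5, 0.0, 1.0)

def grade_dissolve_frame(frame, t):
    """Grade a rendered dissolve frame by mixing the two grades with the
    same weight t that the dissolve itself used at this frame."""
    return (1.0 - t) * grade_outgoing(frame) + t * grade_incoming(frame)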

Occasionally, insufficient B-roll material is available to create a transition at the desired duration. This is especially common with dissolves, where an “empty” or black frame may be included in the offline edit but isn’t noticeable until the sequence is viewed at full quality, when it may appear as a flash frame. This situation can be remedied by retiming or freezing the B-roll or by adjusting the transition duration.

10.2 Resizing and Repositioning

Another useful feature of the digital intermediate pipeline is the ability to alter the composition of a sequence after it has been shot. This can be achieved through a combination of resizing and repositioning the images. Clearly, this is limited by the picture area of the source footage, as well as the output requirements. In fact, different compositions are often made for different output formats, particularly for separate “widescreen” and “fullscreen” releases, which is covered in Chapter 11. These processes can also be animated to create a zoom or panning effect, months after photography has wrapped.

Although repositioning doesn’t degrade the image (it does, however, crop away the edges of the frame), resizing degrades it to some extent, because it relies on interpolation methods to generate new pixel values. Images that have been resized and repositioned (or “recomposed”) are less visually sharp and may exhibit artifacts such as aliasing or banding. In addition, for film-originated material, resizing also scales the grain structure, which is locked to the image content, possibly resulting in continuity problems between scenes.

Camera Shake

One of the uses of digital repositioning tools is to simulate camera shake, a small oscillation of the picture within the frame, similar in effect to trying to hold a photograph still while sitting on a bus. This is often used as a creative device, for example to heighten the sense of impact in sped-up action scenes, or to lend the camera a feeling of weight, as in a scene with an explosion.

As with many options provided by the digital intermediate, this effect can be produced during filming, but adding it later on instead allows more room for changes and experimentation to get the desired effect.4
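A minimal camera-shake sketch is shown below, assuming each source frame carries a little extra picture area (overscan) so that the jittered crop window never falls outside the frame; production tools usually filter the offsets, add rotation, and work at sub-pixel accuracy. The parameter values are arbitrary.

import numpy as np

def add_camera_shake(frames, amplitude=6, seed=1):
    """Crop a randomly jittered window out of each frame, assuming the
    frames carry `amplitude` pixels of spare picture area on every side.
    Smoother, more natural shake comes from filtering the offsets
    rather than using raw random values."""
    rng = np.random.default_rng(seed)
    shaken = []
    for frame in frames:
        dy, dx = rng.integers(-amplitude, amplitude + 1, size=2)
        height = frame.shape[0] - 2 * amplitude
        width = frame.shape[1] - 2 * amplitude
        y0, x0 = amplitude + dy, amplitude + dx
        shaken.append(frame[y0:y0 + height, x0:x0 + width])
    return shaken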

10.2.1 Interpolation Methods

Many of the processes in the digital intermediate pipeline, particularly those that alter the spatial content of the images, rely on some form of interpolation. Interpolation is a mathematical process whereby new information is generated to fill gaps in the original information. The process is similar to the effects of stretching a woolly sweater. When the sweater is unstretched, the pattern on it appears as designed. But when the sweater is stretched, the gaps between threads grow bigger, and the pattern becomes distorted. The function of interpolation is simply to try to fill in the gaps between pixels, just as the function of the “tweening” process in animation is to create the frames in between key frames in a sequence.

images

Figure 10–10   Digital images can be resized and repositioned. With some systems, it’s also possible to rotate the images

Interpolation is used both when increasing and when decreasing the size of a digital image. Perhaps one of the most important uses of image interpolation in the digital intermediate pipeline is the video-to-film transfer, where video material is resized to film resolution (requiring an increase in the number of pixels by a factor of 9).

Many different algorithms are used for image interpolation, and some are more useful in certain situations than others. Software such as Shortcut’s Photozoom Professional (www.trulyphotomagic.com) allows images to be resized using specific algorithms. Some of the most common algorithms are described in the following list.5

  • Nearest-neighbor interpolation: Simply duplicates the closest pixels from the original positions to create additional pixels. This method is very fast, although it produces aliasing and edge artifacts, which make it easy to tell that the image has been resized.

  • Bilinear interpolation: Analyzes a 2 × 2 area of pixels from the original image and then generates new pixels based on averages of the analyzed pixels. This method isn’t as fast to process as the nearest-neighbor method, but it produces smoother, more natural results. However, the entire image appears less sharp than with other methods, with a particular loss of edge sharpness. (A minimal code sketch of this approach appears after this list.)

  • Polynomial interpolation (or quadratic interpolation) methods: This includes, for example, the bicubic method, which analyzes a larger 4 × 4 area of pixels to create each new pixel. Specific algorithms produce slightly different results, typically varying in smoothness and sharpness. These methods are slower to process than bilinear interpolation, though they may sometimes produce better results, particularly along edges. However, where a lot of interpolation is required (e.g., when increasing the size of a small image to a very large image), other methods may be more suitable.

  • Spline interpolation methods: This includes, for example, the b-spline method, which re-creates the pixel information as points on a spline (similar to those used for spline-based computer animation). By measuring new points on the generated curve, in-between values can be determined. This process can be fairly slow, and problems can arise when trying to interpolate areas of high contrast mixed with areas of low contrast.

    images

    Figure 10–11   An image resized to 400% of its original size using nearest-neighbor interpolation

    images

    Figure 10–12   An image resized to 400% using bilinear interpolation

    images

    Figure 10–13   An image resized to 400% using bicubic interpolation

  • Frequency-based interpolation methods: This includes, for example, “Fourier” or “Lanczos” interpolation methods, which convert the source image into a set of frequencies, apply various filters to the information, and then convert the frequencies back into a larger image. These methods can be slow to process and can also introduce noise into the image.

  • Fractal interpolation methods: Reconstruct images as fractals—that is, complex mathematical equations. Fractals, by definition, are resolution independent, so they can be rendered at any resolution, although this process may take some time. While this type of processing can create very large images relatively well, it may introduce significant amounts of noise or other artifacts.

  • Adaptive interpolation methods: Combine two or more other interpolation methods, each of which is applied to different parts of the image. One of the drawbacks of many other methods is that they’re applied to the entire image in the same way. In most images, this approach isn’t ideal, because some parts of the image (e.g., those that are distant or out of focus) require a smoother result, while others, such as those with a high degree of detail, require a sharper one. Adaptive interpolation uses algorithms to determine which type of interpolation is more suitable for each area, typically producing a more pleasing overall result.

    images

    Figure 10–14   An image resized to 400% using B-spline interpolation

    images

    Figure 10–15   An image resized to 400% using Lanczos interpolation

    images

    Figure 10–16   An image resized to 400% using Shortcut’s Hybrid S-spline interpolation method
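The bilinear sketch promised earlier is shown below. It assumes a (height, width, 3) floating-point NumPy array and builds each output pixel from a weighted average of the surrounding 2 × 2 source pixels; real systems add filtering, careful edge handling, and far more efficient implementations.

import numpy as np

def resize_bilinear(img: np.ndarray, scale: float) -> np.ndarray:
    """Resize a (height, width, 3) image by sampling each output pixel
    from a weighted average of the surrounding 2 x 2 source pixels."""
    src_h, src_w = img.shape[:2]
    dst_h, dst_w = int(src_h * scale), int(src_w * scale)
    y = np.linspace(0.0, src_h - 1.0, dst_h)
    x = np.linspace(0.0, src_w - 1.0, dst_w)
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1, x1 = np.minimum(y0 + 1, src_h - 1), np.minimum(x0 + 1, src_w - 1)
    wy = (y - y0)[:, None, None]        # vertical blend weights
    wx = (x - x0)[None, :, None]        # horizontal blend weights
    top = (1 - wx) * img[y0][:, x0] + wx * img[y0][:, x1]
    bottom = (1 - wx) * img[y1][:, x0] + wx * img[y1][:, x1]
    return (1 - wy) * top + wy * bottom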

Interpolation can occur transparently in many of the systems and processes that use it, in that the operator doesn’t have to choose an algorithm (and may not even be able to select which one is used). For example, a pan-and-scan operator recomposing an image usually wants to work interactively, adjusting the size of the picture intuitively, rather than adjusting numerous parameters for each shot. Further, most of the research into image interpolation quality is concerned with still images rather than image sequences, so some interpolation methods may be better for still images than for moving images. Similarly, most tests are concerned only with the effects of increasing image resolution, although interpolation must also be used to some degree when rotating or decreasing the size of an image. Some interpolation methods may also be better suited to specific operations, such as rotation or warping, than others. Currently, no interpolation method uses interframe sampling to read pixel values from adjacent frames to provide a more accurate interpolation, although this would undoubtedly be a lengthy process.

images

Figure 10–17   An image resized to 400% by re-imaging the source at a higher resolution

Perhaps the most useful way to integrate different interpolation methods into a digital intermediate pipeline is to use a faster interpolation method for display and previewing purposes, and a slower but better quality method during rendering for final output. Ultimately, there is no substitute for imaging at a higher resolution.

10.3 Filters

One of the staples of digital imaging, be it digital photography or digital movie post-production, is the use (or occasional overuse) of digital filters. A filter is a process that is applied to an image, part of an image, or an image sequence to modify the content. Some filters are designed to remove dirt and scratches, others to create halos or glowing effects within the image, and still others to make an image look, for example, as though it was painted by Van Gogh.

Image filters, such as those in GenArts’ Sapphire range of filters (www.genarts.com), can create lighting effects or emulate filters on the camera lens, such as star-shaped highlights. They can be used to reposition the image in a variety of ways, to create a kaleidoscope effect, or to emulate different materials, such as embossed stone or a sketch on a sheet of paper. Procedural filters, such as those in Allegorithmic’s Map|Time product (www.allegorithmic.com), allow various mathematical procedures to be strung together to create a diverse range of effects.

All filters degrade the image in some way, but usually these effects are desired, making the images perceptually better. However, certain filters can create artifacts, particularly when applied to a sequence and viewed in real time, and the output must be carefully checked.
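As an illustration of how a typical “glow” filter might work, the sketch below (which assumes SciPy’s gaussian_filter and a floating-point image in the 0 to 1 range) isolates the highlights, blurs them heavily, and adds the blurred halo back over the original; the threshold, blur radius, and strength are arbitrary choices.

import numpy as np
from scipy.ndimage import gaussian_filter

def glow(img: np.ndarray, threshold=0.8, sigma=15.0, strength=0.6):
    """Isolate the highlights of a float image in the 0-1 range, blur
    them heavily, and add the blurred halo back over the original."""
    highlights = np.where(img > threshold, img, 0.0)
    halo = gaussian_filter(highlights, sigma=(sigma, sigma, 0.0))
    return np.clip(img + strength * halo, 0.0, 1.0)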

10.3.1 Computer-Generated Material

An increasing number of productions rely on computer-generated (CG) material for certain shots. These are normally supplied by departments or facilities external to the digital intermediate pipeline and may be in the form of complete shots or “elements” to be composited onto existing shots in the program. Invariably, these shots will be in a variety of different file formats and color spaces and may therefore require grading to maintain color continuity. However, CG shots may offer certain advantages over filmed footage. For example, 3D CG images may carry “depth-channel” information, which records the distance of each pixel from the camera. This can then be used to apply additional effects, such as simulated depth of field, lighting effects, and even atmospheric effects such as fog.
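For example, given a depth channel holding each pixel’s distance from the camera, a simple fog effect might blend each pixel toward a fog color with an exponential falloff, as in the hedged sketch below; the fog color and density values are arbitrary.

import numpy as np

def apply_fog(img, depth, fog_color=(0.7, 0.75, 0.8), density=0.002):
    """Blend each pixel toward the fog color according to its distance
    from the camera, with an exponential falloff. `depth` is the
    per-pixel distance read from the image's depth channel."""
    fog_amount = 1.0 - np.exp(-density * depth)   # 0 near the camera, toward 1 far away
    fog_amount = fog_amount[..., None]            # broadcast over the RGB channels
    return (1.0 - fog_amount) * img + fog_amount * np.asarray(fog_color)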

3D CG images may also carry automatically generated mattes, for example, to isolate specific objects, which can then be used in conjunction with a number of different processes. These processes generally don’t account for effects such as reflections (note that the reflections in the pyramid shown in Figures 10–20 through 10–22 are not correctly affected).

images

Figure 10–18   A source image (left) can be passed through a software-filtering process (in this case, GenArts’ Sapphire Effects) to produce a radically different image. Original image © Andrew Francis 2005

images

Figure 10–19   Using procedural filters such as Allegorithmic’s Map|Time, it’s possible to use an original image (top left) as the basis for new images or patterns (see also the Color Insert)

images

Figure 10–20   The original 3D image, saved with depth information

images

Figure 10–21   A 3D blur effect applied to the image simulates a focal point

images

Figure 10–22   A 3D fog effect applied to the image simulates fog in the scene

10.4 Particle Systems

A special type of CG imagery, “particle systems,” can create a variety of effects. Particle systems are mathematical models of particles (i.e., points), each of which may be assigned various properties, such as color and shape, that can change over time. Particle systems can model a number of different phenomena, such as weather patterns, fire, and smoke.
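At its simplest, a particle system is just a set of positions and velocities updated once per frame, as in the following illustrative sketch; real systems add emission, lifespan, color, shape, and forces such as wind or turbulence. The class name and parameters are assumptions made for the example.

import numpy as np

class ParticleSystem:
    """A minimal particle model: positions and velocities updated once
    per frame under a constant force such as gravity."""

    def __init__(self, count=1000, seed=0):
        rng = np.random.default_rng(seed)
        self.position = np.zeros((count, 2))                 # all particles start at the emitter
        self.velocity = rng.normal(0.0, 1.0, size=(count, 2))
        self.age = np.zeros(count)

    def step(self, dt=1.0 / 24.0, gravity=(0.0, -9.8)):
        self.velocity += np.asarray(gravity) * dt
        self.position += self.velocity * dt
        self.age += dt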

As well as creating effects in their own right, particle systems can also be used to modify other parameters. For example, a particle system can generate a transition effect or control other features, such as text placement. Note that the implementation of particle systems within a digital intermediate pipeline is by no means mandatory, and the capabilities of each pipeline will vary.

The Open FX Project

With so many different digital image-editing systems in use, there was a need to unify the way that third-party effects software (i.e., plugins) could be used with any number of those systems. Such is the idea behind the Open Effects (OFX) system, which has already been adopted by a number of digital intermediate systems manufacturers. More details can be found at openfx.sourceforge.net.

10.5 Text

Several different types of text commonly appear in productions. Titles and subtitles are types of text that appear onscreen, usually to provide translation of dialog or establish a scene’s location or time frame. Traditionally, subtitles to translate dialog are placed in the lower half of the screen, whereas other titles can be placed anywhere. Titles and subtitles typically use simple, static text that either cuts or fades in and out.

Many productions have opening credits, which list some of the people who worked on the production, as well as the title of the film. Although some productions use simple titles for this purpose, many use more complex, animated text with specific styling.

End rollers (or simply credits) provide a more extensive list of individuals who worked on the production, as well as copyright and similar notices. Traditionally, the credits appear at the end of the production as white text scrolling up a black background.

Captions (or closed captions, which don’t appear as part of the picture) are used for distribution purposes to enable the hearing impaired, for example, to read the words being spoken in the scene and to be informed of important sounds. Captions are normally supplied as raw text (or timed text) data rather than image data and are used with specialized equipment.

Text can be stored digitally in a simple ASCII text file.6 These files tend to be very small (1 million words of text requires approximately 6MB) but don’t carry any information about how the text is to be displayed. Other text file formats, such as extensible markup language (XML) or rich text format (RTF) files, contain parameters for certain stylistic attributes of the text, such as the size and typeface.

However, the majority of text that appears onscreen is usually encoded as part of the image. The reason for this is simple: rendering text directly onto the image guarantees that it appears exactly as intended. Generating text in this way within imaging applications also allows a greater degree of control, especially in terms of creating a specific style.

10.5.1 Text Styles

Text can appear in many different ways and has several different attributes. The most common attributes are described in the list that follows.

  • Character set. The “character set” is used to distinguish between different alphabets—for example, to differentiate between the Roman character set used for languages such as English and French, and the Cyrillic character set used for languages such as Russian, Ukrainian, and Bulgarian. Each character set also defines other symbols such as punctuation marks and numbers where applicable.

  • Typeface. The typeface (or font) is used to describe the styling of each character. Use of a specific font ensures that each time a particular letter is used, it has the same shape, whereas different fonts render the same characters slightly differently. Some typefaces even provide symbols rather than letters.

  • Size. The size of a specific typeface can be measured using a number of scales. The most common measurements in typography are “points,” which are 1/72 of an inch, and measure the height of the typeface from the top of the highest letter to the bottom of the lowest letter, and the “em space,” which considers the area of the font, rather than just the height. However, such units are less meaningful in a digital intermediate context because the images themselves have no inherent physical size and rely on the output medium to determine size. Therefore, font sizes tend to be expressed in terms of the number of pixels in height from the top of the highest letter to the bottom of the lowest letter.7

    images

    Figure 10–23   Text in different fonts

    images

    Figure 10–24   Text in different sizes

    images

    Figure 10–25   From top: regular, italic, bold, and bold italic styles

  • Variations. Different typefaces often come in several variations. For example, the thickness or “weight” of each font may be varied, so that there is a thin variant of a particular font, or a “bold” (thicker) one. Other common variants include “italicized” (i.e., slanted) and “condensed” (narrower) versions of a font.

  • Kerning. The amount of kerning determines the relative space between each character in a word, the amount of space between each word, and the amount of space between each line (although in many applications, these values are split into three separate parameters). The amount of kerning is usually expressed as a percentage, relative to the area of the font characters.

  • Color. Each character may be a specific color. In addition, with many applications, the outline of each letter can be colored independently from the rest of it. Some applications also allow these colors to take the form of a gradient or a CG pattern, or even to be sourced from another image.

  • Depth. Certain applications create text in 3D to allow the surface to be shaded or to cast shadows. They may also have other properties to define reflectivity or shininess. Specifying a depth parameter determines the virtual thickness of the letters in terms of distance from the lens and is relative to the font size.

images

Figure 10–26   Different kerning can be used to compress or expand the text’s horizontal size

As with all CG material, text generated within the digital intermediate can easily be accompanied with automatic matte generation, so that the text can be isolated for further effects, such as glows, blurs, or semi-transparency.
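As a rough illustration of rendering text together with its matte, the sketch below uses the Python Imaging Library (Pillow) to draw the same string onto an image and onto a single-channel matte; the font path, position, and sizes are assumptions and would be replaced with whatever the production requires.

from PIL import Image, ImageDraw, ImageFont

def render_title(text, size=(1920, 1080), px_height=72,
                 font_path="DejaVuSans.ttf", color=(255, 255, 255)):
    """Render a title frame and a matching single-channel matte.
    The font path is an assumption; any TrueType font will do."""
    font = ImageFont.truetype(font_path, px_height)
    rgb = Image.new("RGB", size, (0, 0, 0))
    matte = Image.new("L", size, 0)
    position = (size[0] // 4, size[1] // 2)
    ImageDraw.Draw(rgb).text(position, text, font=font, fill=color)
    ImageDraw.Draw(matte).text(position, text, font=font, fill=255)
    return rgb, matte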

10.5.2 Text Positioning

Just as important as the visual attributes of the text is its positioning, which can be adjusted to suit specific requirements. Positioning has practical uses as well as creative ones and is particularly important for output formats that compose shots in different ways. Almost all formats require text to be positioned within a “title-safe” margin, to ensure that it’s visible on a wide variety of display devices. (The topic of title-safe regions is covered in Chapter 12.)

The position of text need not be static; it can “roll” (i.e., move up or down the image), “crawl” (move from side to side), or follow complex paths. Further, text can fade in and out and use any of the same transitions used with conformed footage.

Applying text to an image is best done just before final output, because as soon as the text becomes part of the image, any changes that have to be made to the footage also affect the text. In addition, where text moves across an image, it’s advisable to apply some degree of motion blur so that the movement looks smoother and less artificial.

End Rollers

Unfortunately, generating a large amount of text at film resolution is a fairly expensive process. Credits, which can run for several minutes, require many frames to be generated at great expense, even though they’re relatively simple to create: the only requirement is a long list of text that rolls upward.

For the time being at least, it’s often economically better to create credits optically on film and then splice the film into the digital internegative or interpositive, or scan it along with the other footage.

10.6 Watermarks

It’s sometimes desirable to stamp a visible watermark (i.e., an image such as a company logo) across an entire production. This is common practice for many television and Internet broadcasts, for example, which put “bugs” (i.e., small animated logos with the originating channel information) across all broadcasts. Although this process is usually a separate one performed automatically during the broadcast, it sometimes may be necessary to add such logos at the digital intermediate stage. Another use for watermarks is to label certain copies of the production. For example, some productions create advance screening or preview copies of the program prior to final release, and it may be useful to have a special notice or other visible image stamped onto the footage.

images

Figure 10–27   A watermark applied to an image

Visible watermarks may be static, meaning that a single-frame image is applied across a number of frames, or they may be dynamic, meaning that a looping animation lasting a number of frames is applied to the sequence. Either way, the watermark image, or image sequence, may be accompanied by mattes to allow accurate compositing. For the sake of continuity, as with text, watermarks of this type should be applied as a final stage.
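Compositing a visible watermark is then a straightforward matte-weighted mix, sketched below for floating-point images in the 0 to 1 range; the corner position, inset, and opacity are arbitrary choices made for the example.

import numpy as np

def apply_watermark(frame, logo, matte, opacity=0.4, inset=40):
    """Composite a logo into the bottom-right corner of a frame using
    the logo's matte. All images are float arrays in the 0-1 range;
    `matte` is a single-channel alpha the same size as `logo`."""
    alpha = opacity * matte[..., None]
    h, w = logo.shape[:2]
    out = frame.copy()
    region = out[-h - inset:-inset, -w - inset:-inset]
    out[-h - inset:-inset, -w - inset:-inset] = (1 - alpha) * region + alpha * logo
    return out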

Another type of watermark is the “invisible” watermark, which makes very small adjustments to the image data that can later be detected by appropriate software. Invisible watermarks are almost exclusively used as a copyright-protection device. A number of different algorithms are available, and they work by subtly altering the image data to create a digital “signature” (or “fingerprint”) that can be read back from the file at a later time. Watermarks won’t prevent the file from being accessed or modified; all they can do is provide proof of an image’s ownership. However, they can be used for tracking purposes, such as to locate the origin of an illegally obtained copy of the production. Most invisible watermarks are designed to survive transformation: resizing or blurring an image slightly won’t disrupt the watermark information, and some watermarks are retained even when the images are converted to other formats, or even printed on paper.
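The sketch below illustrates the principle with a deliberately naive example that hides a bit string in the least significant bit of the blue channel of an 8-bit image. Unlike the commercial algorithms described above, this toy mark would not survive resizing, filtering, or regrading; it is shown only to make concrete the idea of altering pixel data imperceptibly.

import numpy as np

def embed_watermark(img8: np.ndarray, bits) -> np.ndarray:
    """Hide a short bit string in the least significant bit of the blue
    channel of an 8-bit RGB image. The change is imperceptible, but this
    toy mark is easily destroyed by resizing, filtering, or regrading."""
    out = img8.copy()
    blue = out[..., 2].flatten()                    # flatten always copies
    bits = np.asarray(bits, dtype=np.uint8)
    blue[:bits.size] = (blue[:bits.size] & 0xFE) | bits
    out[..., 2] = blue.reshape(out[..., 2].shape)
    return out

def read_watermark(img8: np.ndarray, length: int) -> np.ndarray:
    """Read back the first `length` embedded bits."""
    return img8[..., 2].flatten()[:length] & 1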

One issue with the different digital-watermarking algorithms is their robustness. Because the details of many of the algorithms are kept secret, it can be difficult to discover what makes them fail. This is fairly sensible, because if an abundance of information on how to break the watermarks were available, they would quickly become useless. At the same time, it’s questionable how early in the digital intermediate process watermarking should be employed: if watermarks are added to the images directly after acquisition, subsequent processes such as grading and restoration may inadvertently remove them. The degree to which watermarks degrade the image is also a consideration. As with any operation that degrades images, watermarking is probably best left until the end of the pipeline.

10.7 Summary

A number of digital effects can be used throughout the digital intermediate process to change the appearance of footage. These range from subtle adjustments that are only noticeable on a moving image to full-blown effects that can’t be replicated practically. Although most are inherently destructive to the image, as with color grading, their use can help to create stunning imagery.

Digital text tools can be used to add words to the images, and the properties of the text can be controlled to a great degree. Finally, it’s also possible to add hidden or visible digital watermarks to an image, to identify the material’s owners.

1 The availability of specific effects, as well as the flexibility to adjust each effect, is determined by the type of software, as well as the specifics of the digital intermediate pipeline. Therefore, each pipeline may not offer all the effects and options listed in this chapter or may offer alternatives to produce the same result. The use of digital effects in the digital pipeline, with the possible exception of optical effects, should usually be considered as an additional expense, typically charged according to the amount and complexity required.

2 The word “optical” used here is derived from the original lab processes, using combinations of lights and lenses, that were used to generate these effects on film.

3 This may not be the case for images with nonsquare pixels, and it’s definitely not the case for rotations that aren’t exactly 90 degrees.

4 These methods can’t be used to reliably simulate the effects of actually moving a camera through a scene, because they don’t provide any change in perspective.

5 While it isn’t correct to describe certain methods as more accurate than others—after all, interpolation, by definition, can’t re-create unavailable information—the fact is that some methods seem to produce more perceptually pleasing results than others.

6 ASCII (American Standard Code for Information Interchange) defines standard machine-readable numeric codes that represent each letter of the alphabet, numbers, punctuation marks, and other symbols.

7 Sometimes points are used within a digital environment, although the relationship of points to pixels isn’t standardized, meaning that a font set to 12 points in one application may have a different size (relative to the underlying image) than the same type settings in another application.
