9

Retouching and Restoration

In the previous chapter we saw how color grading can be used to subjectively enhance digital images, by balancing tones and creating artistic looks for different kinds of footage. Color grading is also used to repair damaged footage, where colors may have faded over time, or lighting, exposure, or stock problems have caused the incorrect recording of color information.

Damage to images usually affects more than just the color content, however. Archival film stock can suffer from a range of defects, including scratches, dust, mold, and stains. Old video footage might have picture dropout or tracking errors. Even well looked-after, new film and video footage may suffer from any of these effects to some degree. Digital images, though immune to the effects of time, may nevertheless suffer from inherent noise or data corruption that can build up when the image undergoes multiple processes. Fortunately, the digital intermediate pipeline has options for tackling all kinds of image degradation.

9.1 Image Damage

Moving pictures can suffer from two different types of damage: intraframe damage and interframe damage. Intraframe damage can be seen on a single frame of the footage and includes dropout, chemical stains and so on. Interframe (or “persistent”) damage lasts over a number of frames, and may only be visible when the sequence is played at speed. Picture shift, tramlines, and noise are all examples of interframe damage.

Most of the damage done to images is a function of the physical media used to record the images in the first place. Photographic film, being gelatine-based, is vulnerable to different types of fungi, not to mention scratches caused by frequent handling. Video tapes, on the other hand, are made from metal oxides (more commonly known as rust), which have a tendency to flake apart. Even digital images are subject to corruption, caused by failures in the storage devices.


Figure 9–1   This badly damaged film image exhibits a number of different problems

9.1.1 Optical Problems

Problems can also be caused by the optical system of the recording device. For example, lens aberrations can cause the picture to be distorted, such as when using a very wide angle lens, which can cause “barrel” distortion (where vertical lines curve outward). Vignettes are caused by a greater level of light “fall off” at the edge of the lens, causing the corners of an image to be darker and of lower resolution than the center. Focusing problems can cause images to appear blurry or “soft,” which may not be intended, and “flare” caused by internal reflection of light on the lens can contaminate an image. However, optical problems are rarely corrected, because doing so is difficult, and to some extent, unnecessary.1

9.1.2 Film Damage

Film material is very difficult to store without exposing it to elements that can degrade it. Every time a reel of film (whether negative or positive) is duplicated or viewed, it becomes subjected to potential damage. Different types of damage affect the picture in different ways, and fixing them requires different solutions. The list that follows describes various kinds of film damage.


Figure 9–2   Barrel distortion bends vertical lines outward


Figure 9–3   Pincushion distortion bends vertical lines inward


Figure 9–4   A vignette is caused by uneven distribution of light across the image, typically darkening the image’s corners


Figure 9–5   Out-of-focus images contain little detail

  • Picture shift. Picture shift (or bounce or weave) occurs when adjacent frames are projected with a slight positional change. All film stocks have small perforations (or sprockets) along the edge to enable the correct alignment of each frame before it’s projected. But sometimes the sprockets may be damaged, and splices between frames may not be properly aligned. The result is that a sequence of images may not be perfectly positioned (i.e., lacking proper “registration”), and when viewed at speed, the image may appear to bounce vertically or weave horizontally. Such picture shift is an example of interframe damage, because it can be detected only when viewing a sequence; each frame viewed individually will appear to have no problems, and in fact the content of each image will be intact. This problem can occasionally be resolved by repairing the film damage, such as by rejoining a splice or redoing the sprockets, or it can be fixed digitally, using motion-stabilization techniques covered later in this chapter.2
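
The digital fix amounts to estimating each frame’s offset from a reference frame and shifting it back. A toy sketch of the registration step (pure Python, vertical shifts only; real stabilizers search both axes at subpixel precision):

```python
def estimate_vertical_shift(ref, frame, max_shift=3):
    """Return the vertical offset (in rows) that best aligns `frame` with
    `ref`, found by minimizing the sum of absolute pixel differences.
    Negating the result gives the correction needed to stabilize."""
    best_shift, best_err = 0, float("inf")
    rows = len(ref)
    for s in range(-max_shift, max_shift + 1):
        err = 0
        for y in range(rows):
            if 0 <= y + s < rows:  # compare only rows that overlap
                err += sum(abs(a - b) for a, b in zip(ref[y], frame[y + s]))
        if err < best_err:
            best_err, best_shift = err, s
    return best_shift
```

Applied frame by frame against a chosen reference, the recovered offsets describe the bounce or weave, and shifting each frame by the negated offset steadies the sequence.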

  • Chemical stains. Film development is something of a messy process, requiring a number of different chemicals and processes with accurate timings and measurements. Random factors (as well as poor storage or careless handling) can adversely affect the delicate chemical-development balance, resulting in a variety of different chemical stains, such as “watermarks,” embedded in the film, distorting or obscuring the image underneath. Duplicating film with chemical stains copies the stain along with the underlying image, and the stain thus becomes part of the image.

    Although such problems might occur over a range of frames, they’re intraframe problems and can be fixed on individual frames, either through physical or digital means.

  • Dust. Tiny particles of dust, hair, and other fibers are constantly circulating through the air. Anytime film is handled (or even when it’s just left out in the open), any of these particles can settle on it. Worse, because film is often run through machinery at high speed, static electricity can build up, attracting dust and other fine particles within range to stick to the surface. These tiny particles, when projected onto a large screen, become noticeable, and obscure image details. Even running the frame sequence at speed, the particles are a noticeable distraction, and a high volume of dust on a reel of film may render it unwatchable. As with chemical stains, dust is intraframe damage, and duplicating the film results in the defects becoming part of the copied image. During projection, dust that was on the negative usually shows up as white specks, whereas dust on a positive shows up black.

    Dust can be removed from film simply by cleaning it—for example, by using a dedicated ultrasonic film cleaner. During duplication, “wet gate” (or “liquid gate”) printing methods can be used, running the source film through liquid to remove loose dirt. Both automated and manual systems can digitally remove dirt from scanned film. Note that “photographed dust,” such as dust on the camera lens, may run across several frames and be much harder to fix, but these lens imperfections and others aren’t usually very distracting to the viewer.
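
One common automated approach exploits the fact that a dust speck typically appears on only a single frame: taking a per-pixel median across neighboring frames discards it. A minimal sketch, assuming a static (or pre-aligned) shot; production systems add motion compensation:

```python
def temporal_median(prev, cur, nxt):
    """Per-pixel median of three consecutive frames. A speck of dust
    usually appears on only one of the three, so the median discards it
    while leaving static picture content untouched."""
    return [[sorted(p)[1] for p in zip(rp, rc, rn)]
            for rp, rc, rn in zip(prev, cur, nxt)]
```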

  • Scratches. The several types of scratches that can occur range from minor, intraframe damage similar to a hair on a frame, to vertical running scratches (or “tramlines”) that cover a range of frames. The majority of film scratches are caused by handling. The surface of film material is rather susceptible to friction, and running it through all kinds of equipment, from cameras to densitometers, can cause it permanent damage. Running scratches in particular are caused by small edges in contact with the film as it’s pulled along, as when grit is in the camera’s film gate. Scratches can be fixed digitally, but it can be a very involved process, particularly when fixing scratches across several frames.

  • Warping. Damage to film, particularly damage caused by heat, can cause it to melt, shrink, stretch, or buckle. Although the surface of the film may be free of scratches, the image will appear to be warped—distorted, stretched, or shrunk in certain areas. Indeed, the effects might be small and not noticeable unless the sequence is viewed at speed. An additional problem is chromatic aberration, where each color-sensitive layer is distorted differently, resulting in color “fringes” in an image.3

    Warped images may be corrected digitally, either by replacing the affected areas or by using selective image-warping techniques (covered later in this chapter) to reverse the damage. Individual color channels can be warped to correct chromatic aberrations.

  • Tears. It’s fairly easy for film to be torn, by being mishandled or due to mechanical failure that excessively stresses the film. Tears usually occur at weak points (e.g., splices), and the damage is usually horizontal and affects only one or two frames. However, in some instances, it can affect an entire range of frames and may be difficult or impossible to fix. Tears can be fixed using splicing techniques, but in most cases, the tear is visible, even when the repair is complete. When working in a digital intermediate environment, torn film has to be scanned in separate parts, and the torn frames can be recombined digitally, either using dust and scratch repair methods or by discarding the bad frames and using motion-interpolation techniques to generate new frames. Digital sequences also require motion-stabilization techniques to compensate for any differences in the scanned picture positioning on either side of the tear.

  • Grain. Film granularity isn’t strictly speaking a type of damage, but rather an inherent property of the film itself. Different types of film stock, different exposure settings, and the amount of light in the scene all contribute to how much “clumping” of silver ions occurs within the film. This is visible as the grain structure of the image. Larger grains are more visible, while finer grain structure results in greater image resolution. Grain structure in images can be both removed and added digitally.

  • Stock failure. Film may simply fail to record images correctly due to problems with manufacture or development. Effects such as “reciprocity law failure” can produce unexpected results. Colors may reproduce incorrectly, images may be underexposed or overexposed, or the density of the film might be too high or low. Many of these issues may be solved by chemical procedures or by a combination of digital processes such as color grading (covered in Chapter 8) and digital paint (covered later in this chapter).

  • Light contamination. Prior to development, a reel of film is constantly at risk from stray light. Film, by its nature, doesn’t “know” when the image has been recorded, unlike video tape, which records a signal only when the recorder “tells” it to do so. For this reason, any light acting upon the film affects it. Ideally, light acts upon the surface of the film only for the duration that the camera shutter is open, focusing an image on the film for a fraction of a second, before advancing to the next frame. Once the film is developed, light doesn’t affect it.4 However, it’s possible for light to “leak” through onto some or all of the film, “fogging” it, which results in overexposure of all or part of the film. A small amount of fogging may be recoverable through careful global or selective grading processes, but a great degree of fogging renders the film useless, either requiring the image area to be cropped to salvage intact regions or necessitating reshooting the scene.

Digital Remastering

One of the most practical applications of the digital intermediate process for film material is the ability to remaster aged footage. An entire industry is devoted to the process of digitally remastering a film, rather than reprinting a damaged negative of an old film. Digital remastering typically involves going through the entire digital intermediate process (although conforming from original negative elements may not be possible; digital remastering usually involves working from a fine-cut production interpositive or internegative in which all the elements are already correctly assembled), regrading the footage, and performing extensive restoration on damaged parts of it. The only pragmatic difference between a digital remaster of an old film and a digital intermediate of a new film is that much less flexibility is possible, in terms of editing and grading options, when working with aged film.

Why Use Film?

With all the potential sources of damage to film, it would appear that it’s a very volatile format, constantly at risk from a multitude of different threats. This may be true for consumers, but the reality of film production is that many safeguards are in place to minimize damage to film. Most of the issues are caused by mishandling or incorrect storage, so most filmmakers go to a lot of trouble to ensure their film is handled carefully. Further, film tends to degrade gradually over time, whereas other formats, such as video, simply reach a point where they either work or don’t, which makes them difficult or even impossible to recover. The majority of film damage can be corrected either chemically or digitally, and most of the damage is relatively simple to fix, especially compared to video. More recently, it has been suggested that due to a lack of standardization, no long-term storage solution exists for either video or data, whereas reels of film are in existence that are several decades old and are completely viewable.

9.1.3 Video Damage

In addition to problems caused by a video camera’s optical system, video systems (and digital cameras) are subject to other problems caused by the CCD elements used to capture images. However, most of these problems vary by camera design, and like some optical defects, they may not require correction.5 Some problems are related to the equipment used to play back a video, rather than being an inherent tape issue. For example, additional noise artifacts may be introduced by low-quality cables used to connect the video recorder to the monitor. Also, playing an NTSC tape in a PAL video system (and vice versa) can produce bizarre results. In such cases, problems can be solved by using the correct combination of video components for playback. Even when the problem is due to the video tape, different video players have different error-correction devices, some of which work better than others, that can compensate for the damage. Video damage may be caused by a number of different factors, but the types of damage are limited. Many of these problems are exclusive to (or at least more pronounced on) analog video formats; however, they may be seen in digital video formats such as HD and DV, which may also exhibit some of the problems seen in digital formats (discussed later in this chapter). The following list describes the types of video damage.

  • Dropout. Tape “dropout” is intraframe damage (usually physical damage, to or resulting from deterioration of the tape) that causes part of the tape’s signal to become unreadable. Part of the image is missing, and it’s replaced either by random pixels, black, or even another part of the image, depending upon the video system. Dropout of this type can be fixed digitally using similar techniques to fixing dust and scratches on scanned film, covered later in this chapter.

  • Persistent dropout. Sometimes dropout occurs in a fixed position across a number of frames. Fixing it digitally requires using the same techniques as used when correcting running scratches on scanned film, covered later in this chapter.

  • Noise. Noise is caused by a variety of factors, and it results in the inaccurate recording of information. For every point of an image (or pixel) that is recorded, the associated level can vary from the actual (or measured) value for a number of technical reasons, depending upon factors such as the ambient temperature and the resistance of the equipment’s electronic circuits.

    The result is that individual pixels fluctuate in terms of color or luminosity. This fluctuation might be undetectable on a single frame, but when the fluctuation is viewed as part of a sequence, even minute fluctuations can be distracting or render a sequence unwatchable. One major cause of noise in a video source is generation loss, and so the best preventative measure to combat the effects of noise is to work from original material whenever possible, especially where quality is important. Noise may be introduced any time analog video signals are transported, as well as by gradual deterioration of video tape material. Noise can be corrected by using analog or digital noise-reduction techniques, covered later in this chapter.

  • Tracking. Video signals contain synchronization information that ensures that the picture appears correctly and each frame appears at the right time. In the event of damage to this part of the signal, the picture can become unstable, even when the picture information is intact. VCRs use tracking systems to interpret the synchronization signals in the video, and these systems vary by design and video format. When the tracking signal is damaged, the picture might be distorted, or it may hold still for a number of frames and then “roll,” as tracking is lost between sync pulses. Many VCRs include options to correct for tracking problems, and a number of digital techniques can sometimes correct tracking faults on digitized video footage.

  • Interlacing. Most video formats (with the exception of “progressive scan” formats) are interlaced, which means that each frame is divided into two fields. Each field is recorded in succession, which means that fast-moving objects may exhibit artifacts when viewed as frames. Interlacing can be corrected using one of the deinterlacing processes (covered later in this chapter), or depending upon the project’s output requirements, it may be safe to simply leave the frames interlaced.
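
A naive deinterlacer can be sketched as keeping one field and interpolating the discarded scanlines from their neighbors (real deinterlacers are motion-adaptive; this is only an illustration):

```python
def deinterlace_field(frame, keep_even=True):
    """Naive deinterlace: keep one field (every other scanline) and rebuild
    the discarded lines by averaging the kept lines above and below.
    Edge rows simply copy their single kept neighbor."""
    h = len(frame)
    start = 0 if keep_even else 1
    out = [row[:] for row in frame]
    for y in range(h):
        if (y - start) % 2 != 0:  # line belongs to the discarded field
            above = frame[y - 1] if y > 0 else frame[y + 1]
            below = frame[y + 1] if y < h - 1 else frame[y - 1]
            out[y] = [(a + b) // 2 for a, b in zip(above, below)]
    return out
```

This trades half the vertical resolution for the removal of “combing” on fast-moving objects, which is why whether to deinterlace at all depends on the project’s output requirements.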

  • Recorded transmission errors. Although these errors aren’t strictly caused by the video tape itself, several errors can affect a signal, and when this happens during recording, the errors become part of the recorded picture. “Ghosting” is a problem affecting pictures that are transmitted wirelessly, such as off-air (broadcast) material received by an antenna. Ghosting occurs when the transmission reaches the antenna both directly and indirectly, after reflecting off a hard surface (such as a tall building). The reflected signal adds to the original signal, and the slight delay between the direct signal and the reflection causes a kind of visual “echo” in the picture, with previous frames superimposed over the current one. Hum is caused by interference (usually magnetic) affecting the video signal in cables. The most common form of hum occurs when a power cable runs alongside a video cable. The magnetic field created by the power cable, which is sinusoidal, modulates the video signal, causing rolling bars to appear over the picture. Both of these problems become part of the picture if it’s being recorded, in the same way that using a VCR to record a channel that isn’t properly tuned results in a poor recording, even when played on a different system. These issues are best corrected at the source; when that isn’t an option, their impact can sometimes be reduced. For instance, it may be possible to re-create the interference pattern embedded in a signal, invert it, and then apply it to the signal. (This technique is also used to clean up digital audio.) However, it’s likely that the visual damage will remain to some extent, resulting in a signal that’s far inferior to the original (transmitted) source material. Ghosting, in particular, is usually impossible to repair when it has been recorded as part of the picture.
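
The invert-and-apply idea can be illustrated for hum. If the frequency, amplitude, and phase of the interfering sine wave can be estimated, synthesizing it and subtracting it cancels the rolling bars; the sketch below assumes those parameters are already known, which is the hard part in practice:

```python
import math

def remove_hum(signal, freq, amplitude, phase):
    """Cancel a sinusoidal interference pattern by synthesizing it and
    subtracting it from the signal. `freq` is in cycles per sample; in
    practice all three parameters must first be estimated."""
    return [s - amplitude * math.sin(2 * math.pi * freq * t + phase)
            for t, s in enumerate(signal)]
```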

  • Compression. Different video formats use different forms of compression to encode video onto tape. Each compression method has associated artifacts that affect the image in certain conditions. For example, the 4:1:1 sampling compression used for DV NTSC video signals occasionally exhibits color “smearing” along the edges of an image’s different colored regions. As with many other forms of video damage, these compression artifacts may be difficult to correct, necessitating the use of digital paint (discussed later in this chapter) and other techniques.

  • Timecode breaks. Timecodes are recorded onto separate parts of the video tape, and timecode information can become damaged in the same way as the image information (or through problems during recording). Damage to the timecode track may cause a “break” in the continuity of the timecode progression. While these breaks don’t affect the picture, they can be problematic when trying to edit the footage. Timecode breaks usually can be repaired fairly simply by regenerating the timecode information from a known starting point (although this approach may require copying all the information onto a new tape).

  • Distortion. Damage or disruption to the video information can distort the picture. Less severe distortion can be corrected using similar techniques as those used to correct warped film images.

  • Clamping. Video has a wide margin to define both the signal’s black points and its white points. Anything that falls below a certain value is considered to be black, and lower values are considered superblack, although they’re still displayed as black. With white values, anything above a certain value is considered to be “peak white.” The advantage of this convention is that it allows room for error—for example when a signal drops uniformly (i.e., the values are decreased by a set amount), it’s possible to simply increase the signal so that the image is restored to match the original. However, it’s still possible for signals to exceed the maximum and minimum limits, causing the signal to be flattened or “clamped.” Any signal that exceeds the black or white points (which can be seen using a vectorscope) can usually be corrected using controls on the VCR (or using digital grading, depending upon the video-capture method). However, signals that have been recorded clamped can’t be corrected.
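
The value of that margin can be shown numerically. The sketch below assumes a broadcast-style 16–235 display range with footroom below black (exact numbers vary by video standard):

```python
BLACK, WHITE = 16, 235  # displayed range; 0-15 acts as "superblack" footroom

def display(signal):
    """What the viewer sees: out-of-range values show as black or peak white."""
    return [max(BLACK, min(WHITE, s)) for s in signal]

signal = [16, 30, 120, 230]
dropped = [s - 10 for s in signal]    # uniform level drop; 16 falls to 6, still stored
restored = [s + 10 for s in dropped]  # the footroom makes the drop fully reversible
```

The dropped value of 6 displays as black, but because it’s still stored below the black point, adding the level back recovers the original signal exactly.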

9.1.4 Digital Image Damage

Many of the factors affecting digital image quality relate to the material or processes the images were sourced from. Scanned film carries across all of the dust and scratches on the surface of the film, making them part of the image. Digital images can be copied without any degradation. However, manipulating digital images subjects them to a range of different problems, many of which can reduce the quality of the image significantly. These problems are described in the following list.

  • Corruption. The only physical problem that affects digital images occurs when the storage device is damaged, making all or part of the data contained within it inaccessible. For small amounts of such data corruption, the effect is similar to video dropout on individual frames and can be treated using similar means. However, a large amount of corruption can destroy entire frames or sequences. In addition, even a small amount of corruption can destroy entire frames that use compression, or entire sequences that are encoded as single-file video streams. Of course, if an uncorrupted copy of the same file exists, it can simply be duplicated to replace the corrupted one.

  • Render errors. Render errors can look similar to data corruption errors on images. They’re caused by miscalculations or other errors during the rendering process. They can usually be corrected by simply re-rendering the image, which should regenerate it.6

  • Clipping. Peak white is achieved when a pixel’s values reach the maximum, and pure black is achieved when the values are at the minimum (usually zero). As mentioned previously, video has regions of superblack and peak white to allow a margin of error in the maximum and minimum levels. Most file formats, however, don’t have an equivalent region, and values can’t be increased beyond pure white or reduced below pure black (and everything in between is considered a shade of gray). Therefore, in the same situation where the image is darkened overall, black pixels stay black, and nearly black pixels become black. Then when you try to brighten the image to its original brightness, all the pixels are brightened, including those that were originally black. Pixels flattened to black this way are said to have been “clipped” (or “crushed”). For the sake of analogy, consider a jelly baby being pressed against a surface. With the video paradigm, the surface is flexible like a net, so the jelly baby can be pushed into it, and the original shape recovered later. With the digital paradigm, the surface is more like a hot stove, and pushing the jelly baby against it destroys part of the original shape, even after it’s lifted off the stove.

    Many digital operations, particularly those color-grading processes that change the luminosity values of an image, can cause clipping. Because clipping is irreversible, its causes should be avoided. Certain high dynamic range file formats have superblack and superwhite regions like video and can be manipulated much more freely.
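
The irreversibility of clipping is easy to demonstrate with hypothetical 8-bit pixel values:

```python
def darken(pixels, amount):
    """Subtract `amount` from each 8-bit value, clipping at 0 (pure black)."""
    return [max(0, p - amount) for p in pixels]

def brighten(pixels, amount):
    """Add `amount` to each 8-bit value, clipping at 255 (pure white)."""
    return [min(255, p + amount) for p in pixels]

pixels = [0, 30, 120, 250]
roundtrip = brighten(darken(pixels, 50), 50)
# the pixel that started at 30 was crushed to 0, so it comes back as 50
```

The near-black pixel never recovers its original value, which is exactly the hot-stove half of the jelly-baby analogy.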

  • Posterization. Stretching an image’s luminance values apart (e.g., by increasing the contrast) leaves gaps between adjacent values, while compressing them clumps distinct values together, causing similar problems to clipping colors. Either way, regions of similar colors can become the same, which can result in visible “steps” of color. As with clipping, it’s irreversible but can be avoided by using images with higher bit depths.7 Additionally, some processes can cause such posterization (or “banding”) as a by-product. For example, certain noise-reduction techniques work by averaging groups of pixels so that they become the same color; however, posterization is usually visible only in extreme circumstances or when working with images of low bit depth.
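
A small numerical sketch of how a contrast stretch opens gaps between adjacent 8-bit levels (values purely illustrative):

```python
def stretch_contrast(pixels, lo, hi):
    """Linearly remap the range [lo, hi] to [0, 255], rounding to whole
    numbers as an 8-bit pipeline must."""
    scale = 255 / (hi - lo)
    return [round((p - lo) * scale) for p in pixels]

ramp = list(range(100, 110))                 # ten adjacent grey levels
stretched = stretch_contrast(ramp, 100, 109)
# adjacent levels are now ~28 apart: visible "steps" of color
```

At a higher bit depth the same stretch would land the levels on distinct intermediate values, which is why the problem is mostly confined to low-bit-depth images.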

  • Aliasing. By definition, pixels are rectangular. They’re also hard-edged, meaning that a pixel can only be one single, solid color. This becomes a problem when trying to represent particular shapes using pixels, notably curves and diagonal lines, as the shape of individual pixels may become visible. When viewed at speed, a sequence of moving shapes may cause pixels to “pop,” suddenly changing color as an edge moves across them. Certain image operations, such as sharpening filters, can also produce aliased images. Aliasing can be avoided by working at a higher resolution or by using “subpixel” or “anti-aliased” operations that are internally processed at a higher resolution and then averaged to produce each pixel. It can also be reduced by using blurring operations, although blurring can degrade or obscure small details in the image.
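
The supersampling idea behind such anti-aliased operations can be sketched by evaluating a shape on a finer grid and averaging the samples down to one pixel (the diagonal edge here is a hypothetical test shape):

```python
def supersampled_coverage(shape, size, factor=4):
    """Render `shape` (a function of x, y returning 0 or 1) into a
    size-by-size image by sampling each pixel on a factor-by-factor
    subgrid and averaging, so edge pixels get fractional grey values
    instead of hard 0-or-1 steps."""
    img = []
    for y in range(size):
        row = []
        for x in range(size):
            hits = sum(shape(x + (i + 0.5) / factor, y + (j + 0.5) / factor)
                       for j in range(factor) for i in range(factor))
            row.append(hits / factor ** 2)
        img.append(row)
    return img

def diagonal_edge(x, y):
    """A hypothetical hard edge along the line x = y."""
    return 1 if x >= y else 0
```

Pixels straddling the edge come out as intermediate greys rather than abrupt steps, which is what hides the pixel grid from the viewer.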

  • Moiré. Because pixels are arranged in a regularly shaped grid, other regular patterns within the image can create interference. The resulting effect is sometimes seen on television, when people wearing clothing with fine lines move in front of the camera, causing the lines to separate into many colors or wavy lines. The effect is also seen when using a digital scanner to capture half-toned material, such as a magazine picture. Moiré can be reduced using specialized “descreening” techniques, or by rotating or blurring the image, all of which degrade the image somewhat.

  • Compression. Lossy-compressed digital images are subject to the same problems that video compression causes. Many compression artifacts can’t be easily corrected.

  • Noise. Unlike video noise, digital noise isn’t caused by making copies of the image. Digital noise is introduced by precision errors, and it exacerbates other digital artifacts, such as clipping and banding. Digital noise can be prevented to some degree by working with high bit-depth images, and the effects of noise can be diminished using digital noise-reduction techniques, which are covered later in this chapter.

9.2 Digital Repair Techniques

The digital-restoration industry has existed for around two decades, although it has primarily focused on restoring or enhancing (or airbrushing) individual scanned photographs. Certain techniques that are used to restore photographs can also be used within a digital intermediate pipeline, along with other techniques borrowed from the visual effects industry. Though some operations are sophisticated enough to allow for automation, many require a skilled operator (or team of operators) to achieve the best results. Some of the popular processes are highly developed, and different approaches are offered by different software manufacturers. Others are more theoretical but may be simple to implement.

The Case for Uncompressed Data

Everything in this chapter assumes the source material isn’t compressed using either lossy or lossless compression. With compressed files, even a small amount of file corruption can render the entire contents unreadable, let alone repairable. With uncompressed data, the majority of the picture content remains intact, and the damaged areas can be fixed using other techniques.

Furthermore, lossy-compressed files may have discarded information crucial to repairing the files. Almost all of the repair operations listed in this chapter work better with more image information available, even where the additional information is imperceptible. For example, sharpening a JPEG-compressed file can also sharpen the compression artifacts.

For these reasons alone, a pipeline favoring uncompressed file formats is preferable to one using any kind of compression, particularly when you intend to use restoration processes. However, in some situations, there’s much less advantage to storing images without compression. For example, in a DV pipeline, the material is lossy compressed at the source; no uncompressed version is available. This is also true for material sourced from analog video formats that use compression.

Converting files that have been lossy compressed to uncompressed ones doesn’t remove the existing compression artifacts (e.g., those revealed by sharpening filters), which in turn causes people to think that leaving the files compressed makes no difference, especially when the final output format will also be compressed. But along with the fact that compressed files are more susceptible to corruption, converting files to uncompressed formats (even if only for the duration of the digital intermediate process) makes sense because doing so avoids further compression errors being introduced into the images.

Lossy compression is reapplied every time the file is saved. If no changes have been made to the picture content, the saved information should be identical to the original, because the compression will be exactly the same as before. But changing the picture content in any way, particularly by color grading or compositing layers, forces the compression algorithm to reevaluate the data, discarding yet more information and degrading the image even further. In this way, it may help to think in terms of “generation loss” with lossy compressed images. Every time a compressed file is modified and saved, it effectively loses a generation. This isn’t true for compressed files that have been saved uncompressed, because although the original image is degraded, it won’t suffer from additional artifacts.
Similarly, many people think that increasing a file’s bit depth has no advantages, but they’re wrong for the same reason. Increasing the bit depth reduces the likelihood of introducing errors in subsequent operations, even though increasing the bit depth doesn’t in any way improve the existing information.
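
That point can be demonstrated with a toy round-trip: apply a small gain and its exact inverse repeatedly, quantizing to 8 bits between steps as a low-bit-depth pipeline would (the 1.07 gain is arbitrary):

```python
def to_8bit(value):
    """Quantize a [0, 1] float to the nearest of 256 levels, as storing
    the image in an 8-bit file would."""
    return round(value * 255) / 255

low = high = 0.5
for _ in range(20):
    # apply a small gain, save, apply the exact inverse gain, save again
    low = to_8bit(to_8bit(low * 1.07) / 1.07)  # 8-bit intermediates
    high = (high * 1.07) / 1.07                # full float precision
# `high` is still 0.5; `low` has drifted through repeated rounding
```

The higher-precision path accumulates no visible error even though both paths start from the same 8-bit source value, which is the whole argument for converting up before processing.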

Ultimately, the practical reasons for sticking to a single compressed format may far outweigh the quality issues. Likewise, there’s probably little value in converting files that won’t undergo many restoration processes. For example, for news production pipelines, which constantly process a high volume of footage from a number of (mostly video-based) sources (much of which will never undergo any restoration processes), speed, performance, and storage capacity are much more important factors than quality. Thus, it makes sense to keep everything in a single, compressed format that is close to the perceptible quality of the original source material.

9.2.1 Image Averaging

Probably one of the simplest, most effective (and most underused) methods for eliminating or reducing problems is image averaging. The basic idea is that you have multiple copies of the same image, each copy with different degrees of damage. For example, when scanning old film or digitizing tapes, several prints or tape copies of the same footage may be produced. Each copy is digitized and stored separately. The averaging process then simply averages each pixel's values at each point on the image across the copies. The idea is that deviations of specific pixels are caused by random errors (noise or a speck of dust, for example), and these errors are "averaged out" of the final image. The more copies available, the better the technique works. "Weighted" averaging methods take this approach further, analyzing the differences and biasing the average toward the likely "correct" value for each pixel. This may be necessary to counter defects, such as dust on scanned film, that create such strong differences in luminosity that a single damaged copy still distorts the averaged result, even with numerous copies to sample from.
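The core operation is simple enough to sketch in a few lines (an illustrative Python/NumPy example; the function name and frame data are hypothetical, not taken from any production system):

```python
import numpy as np

def average_copies(copies):
    """Average several digitized copies of the same frame, pixel by pixel.

    Defects that differ between copies (noise, loose dust) are diluted by
    the mean, while picture content common to every copy is preserved.
    """
    stack = np.stack([np.asarray(c, dtype=np.float64) for c in copies])
    return stack.mean(axis=0)

# Four copies of a flat grey frame, two of them carrying random defects.
clean = np.full((4, 4), 100.0)
copies = [clean.copy() for _ in range(4)]
copies[0][1, 2] = 255.0    # a bright dust speck on copy 0
copies[1][3, 0] = 0.0      # dropout on copy 1
result = average_copies(copies)
```

With only four copies, a full-strength defect is merely diluted (255 becomes 138.75 here) rather than removed, which illustrates why weighted methods that discount obvious outliers can be preferable.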

Averaging methods work for many different problems. For example, averaging can be used along with a beam splitter attached to a film camera during a shoot (a beam splitter splits the incoming light and sends it to different locations, such as several reels of negative) to compensate for differences in grain structure and processing variations of the negative; because each copy receives identical picture information, any differences between copies are due to issues such as these.8 Averaging can also correct problems in a specific system fairly easily. For example, if you have a video tape that has very little inherent noise, but the VCR and associated cables introduce noise to the signal during digitization, the tape can be digitized several times. Averaging the multiple copies reduces or even eliminates the added noise (though the noise contained in the original recording remains). Similarly, a single reel of film can be scanned multiple times to reduce noise from the scanner's CCD element, as well as to eliminate loose dust that changes position between scans. (This process won't, however, eliminate problems such as scratches or mold, which are present in every scan.)

The drawbacks to this process are largely practical. There often isn't enough time to scan a reel of film or digitize a tape several times, particularly if the cost of doing so exceeds that of other correction methods. Repeated handling may also expose the material to more damage: scanning a reel of film multiple times may add more scratches and even risk tearing it. Finally, multiple copies of the material may simply not be available, and the copy that does exist might have significant damage inherent in the picture.

images

Figure 9–6   Multiple noisy images can be averaged to reduce or eliminate the overall noise

Another shortcoming is that this method can't correct problems that occur in a fixed position, that exist in the original, or where the spatial information of each copy differs (as with film bounce, badly tracked video, or film scanned without pin-registration), without those problems first being corrected.

9.2.2 Digital Paint

The concept of airbrushing photos has been around for a long time. It involves spraying a fine layer of paint onto an image to correct problem areas or enhance or otherwise alter some aspect of the picture. Most photographic retouchers now use a digital platform to perform the same techniques, because it affords a greater level of accuracy and flexibility. Rather than using an airbrush and paint, “digital paint” tools are used to perform the same (as well as many more) functions.

All of the tools used with still images can also be used on footage within a digital intermediate pipeline. Errors on frames can be corrected using “cloning” tools (which copy one part of an image onto another), or by applying solid colors, patterns, or gradients directly onto an image using shapes or paint-strokes. Even more options are available when working with footage because information can be obtained from any other frame. This allows, among other things, parts of the image from a good frame to be cloned onto a damaged frame, which can be one of the fastest and most accurate methods for repairing a number of problems. Different systems will inevitably offer different tool sets, usually designed for specific purposes.

There are caveats, though. Use of digital paint on a sequence of frames can create interframe problems that are only visible when the sequence is played at speed. Photographic retouchers have the luxury of making major changes to an image that won't be noticed (except by someone who has seen the original). When working with moving footage, however, changes must be replicated exactly throughout the sequence. For example, to repair a shot containing a large vertical scratch running across a series of frames, you can paint each frame individually. But when the shot is played back, slight variations in the paint strokes will show up as a flickering artifact, similar to watching hand-drawn animation. On the other hand, painting strokes that are exactly the same on each frame may make the stroke look like a lens defect: a smudge of paint on the lens (which it effectively is). Successful use of digital paint therefore requires either the undetectable use of small amounts on single frames of a sequence (usually cloning from adjacent frames), or the use of animatable vector-based tools across a sequence, so that no random deviations are produced.

Digital paint is one of the few processes that isn’t inherently degrading; any image that has been painted doesn’t suffer any loss of quality, but of course, the quality of the painted area depends entirely upon the skill of the operator.

9.2.3 Dust-Busting

Removing dust, mold, scratches, and the like from scanned film is such an involved process that a number of different approaches are used. For many film-based digital intermediate pipelines, this process can be one of the most time-consuming and expensive, even though it at first may seem fairly insignificant. This process can also be used to correct dropout on video material, but the vast majority of video footage will contain far less dropout than the level of dust present on the equivalent film footage.

The presence of dust and scratches is generally accepted by a cinema audience, probably because there is a limit to how clean each individual print can be after constant handling. But in other versions, such as video or DVD, even a small amount of dust on the image can be distracting, especially because material shot directly on video doesn't exhibit this problem.

Spotting dust is the first part of the problem; it requires a trained pair of eyes (or several pairs) watching the footage in slow motion. For convenience, many digital intermediate facilities run copies of the footage to be checked onto HD tapes, from which a list (or even an EDL) of defects may be drawn up. Although HD resolution is somewhat lower than that of film scans (especially 4k scans), at higher resolutions the dust is only better resolved, not more numerous. Any defects that can't be detected at HD resolution are usually imperceptible to the audience.9

Fixing the digital images derived directly from the scans is both difficult and extremely time-consuming. Every single frame of film must be checked and corrected. A typical 90-minute film consists of around 130,000 individual frames. On top of that, B-rolls and handle frames must be considered. Some digital intermediate facilities make a practice of dust-busting every single frame scanned, including those that may not make the final cut, which can mean fixing almost a million frames per film. The result, however, is that each image is as close to perfect as possible.

An alternative approach is to fix frames only in the final conform. This is less flexible, however; it requires careful management and can lead to problems if the material must be reconformed at some point in the future. A similar approach is to dust-bust material as a final step before output, although less time may be available to complete the process by then.

images

Figure 9–7   An effective approach, although time consuming, is to dust-bust film material as soon as it’s scanned and then conform the cleaned images

images

Figure 9–8   An alternative approach is to dust-bust only material that has been conformed, although doing so may complicate the reconforming process.

There are also other reasons for removing dust from scanned negative. Dust that has become part of the scanned image propagates to every print made from that point onward. Each speck of dust in scanned images appears in every projected print (along, of course, with new dust accumulated by successive printing and duplication processes). Because the digital scans are normally used to create the various masters (for video, Internet distribution, and so on), the dust also propagates to every other format created from the digital version. For this reason, it’s always worth eliminating the dust as early in the pipeline as possible.

Fortunately though, two points make the whole process much easier: each frame of an image sequence is usually very similar to the previous one, and defects are usually randomly distributed with the frame area. Even at the relatively slow frame rate of film (24fps), it takes a lot of motion within the scene before a high proportion of the image is different than the previous one. Even panned or tracking shots normally contain areas of the image that are identical to the previous one (although possibly in a different location within the frame).10 Coupled with the low probability of two adjacent frames containing damage in the same place (with the exception of certain types of scratches, which have to be repaired using different techniques), this means that dust can be removed simply by copying (or “cloning”) the same part of the image over the defect from an undamaged frame. For example, in a sequence that contains a speck of dust over an actor’s nose in one frame, the same area of his nose can be cloned across from the previous (or next) frame, which contains no defects. Provided the cloned information is accurately aligned, the sequence should look flawless when played back.
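In its simplest form (a static scene, no motion compensation), the temporal cloning described above reduces to copying a rectangle from the previous frame; the following Python/NumPy sketch uses hypothetical names and is only an illustration of the principle:

```python
import numpy as np

def clone_from_previous(frames, idx, y, x, h, w):
    """Repair a damaged region of frame `idx` by copying the same region
    from the previous frame, relying on adjacent frames being nearly
    identical and defects being randomly placed."""
    repaired = frames[idx].astype(np.float64).copy()
    repaired[y:y + h, x:x + w] = frames[idx - 1][y:y + h, x:x + w]
    return repaired

prev = np.full((5, 5), 50.0)     # a clean frame
cur = prev.copy()
cur[2, 2] = 255.0                # a dust speck on the current frame
fixed = clone_from_previous([prev, cur], 1, 2, 2, 1, 1)
```

Real dust-busting tools add motion tracking and edge feathering so that the cloned patch stays aligned with moving picture content.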

Similarly, motion-interpolation techniques (covered later in this chapter) can be used to analyze frames on either side of the damaged one and generate a new frame. A region of this new frame can be used to replace the damaged area.

Dust is commonly removed using techniques that fall into two broad categories: manual and semi-automatic processes. Manual processes are the simplest, requiring a digital paint operator (or team of operators) to go through each frame and literally “paint out” defects one at a time, typically by cloning picture information from adjacent frames or from elsewhere within the same frame. For example, Idruna’s Speedduster (www.idruna.com) has a simple interface for quickly checking through frame sequences and painting out dirt as soon as it’s spotted.

Automatic systems, such as MTI’s Correct (www.mtifilm.com), require the operator to specify a set of parameters to control how the machine detects dirt. The software looks through each individual frame and compares it to the previous frames, using a variety of motion-estimation techniques and other algorithms (which vary among the different software programs), and unexpected changes in picture and motion information are assumed to be defects. The parameters must usually be set separately for each new shot, because many factors within the images contribute to determining what constitutes dirt. For example, in an outdoor scene, raindrops in the scene might be incorrectly identified by the software as dirt. The software builds a “map” of defects on each frame, and then another process is run to replace all the defects with good image information, either copied from adjacent frames or generated using motion-interpolation techniques.
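The detection logic can be caricatured in a few lines (an illustrative Python/NumPy sketch; commercial systems use motion estimation rather than this naive per-pixel comparison, and all names here are hypothetical):

```python
import numpy as np

def detect_defects(prev_frame, frame, next_frame, threshold=60.0):
    """Flag pixels that differ sharply from BOTH temporal neighbours.

    Dust exists on only one frame, so it disagrees with the frames on
    either side; genuine motion usually agrees with at least one
    neighbour. Returns a boolean defect map.
    """
    d_prev = np.abs(frame - prev_frame)
    d_next = np.abs(frame - next_frame)
    return (d_prev > threshold) & (d_next > threshold)

prev = np.full((4, 4), 100.0)
nxt = prev.copy()
cur = prev.copy()
cur[1, 1] = 250.0                 # a dust speck on the middle frame only
mask = detect_defects(prev, cur, nxt)
```

Even this naive version shows why parameters must be tuned per shot: anything that changes for a single frame, such as a raindrop or a glint, lands in the defect map.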

images

Figure 9–9   Applications such as Idruna’s Speedduster are designed for manually painting such defects as dust

The automated processes have drawbacks, however: they can miss dirt (i.e., create "false negatives"), "repair" parts of the image that they wrongly interpret as dirt (create "false positives"), such as sparkling highlights on a shiny surface, or create image artifacts when repairing a defect. For these reasons, it's often necessary for a human operator to verify all the changes made by the automated system.11 However, for heavily damaged footage, such as archive film, an automated process can save many hours of work.

images

Figure 9–10   Applications such as MTI’s Correct are designed to automatically remove a number of different defects

Successfully integrating dust-busting into a digital intermediate environment has other practical implications. Fixed images are produced either by overwriting the originals (which means the changes may be irreversible and difficult to track) or by creating a new set of frames (which effectively doubles the disk-space requirement for each shot to be fixed and may require a separate rendering process, introducing additional time and potential errors).

One solution is to save a dust-bust layer for each image, containing data for each fix. These files can be losslessly compressed to reduce disk-space requirements and integrated with the original scans at the time of output. Ideally, the system could also integrate with other systems, such as the conforming system, so that, for example, if the conform operator spots dirt on one of the frames, he or she can flag the defect to be fixed and the information is instantly transmitted to the dust-busting operator.

A final approach is to store the changes as metadata (similar to how some grading systems store color changes), which is only applied (rendered) during final output. However, this approach may require significant computer resources to display the fixes prior to output, which can impact playback performance.

9.2.4 Scratch Removal

Running scratches and other persistent defects require different methods of correction. With persistent damage, the defects are not randomly distributed, so the odds of finding a “clean” area on an adjacent frame to clone from are very low. For small defects, corrections can sometimes be made by cloning from a similar area elsewhere in the same frame—a technique used to remove dust or dropout in fast-moving scenes.

For other situations, more complex (and typically more time-consuming) methods are required. The simplest, but still reasonably effective, method is to generate clean pixels by interpolating between the pixels on either side of the affected area. This method works better for narrow scratches (because less information has to be replaced) but can create artifacts along edges, or artifacts that are visible only when the footage is viewed at speed (particularly in fast-moving scenes). Some automated dust-removal systems have options for repairing running scratches using a combination of this method and motion interpolation, which can avoid artifacts in fast-moving scenes.
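For a vertical scratch, the interpolation method amounts to blending the clean columns on either side (a Python/NumPy sketch with hypothetical names; note how well it works on a smooth gradient and, by implication, how poorly it would fare across detailed texture):

```python
import numpy as np

def repair_vertical_scratch(frame, x0, x1):
    """Fill a vertical scratch covering columns x0..x1 (inclusive) by
    linearly interpolating, row by row, between the clean columns
    immediately to its left and right."""
    repaired = frame.astype(np.float64).copy()
    left = repaired[:, x0 - 1].copy()
    right = repaired[:, x1 + 1].copy()
    span = (x1 + 1) - (x0 - 1)      # distance between the clean columns
    for i, x in enumerate(range(x0, x1 + 1), start=1):
        t = i / span
        repaired[:, x] = (1 - t) * left + t * right
    return repaired

# A horizontal ramp 0,10,20,30,40,50 with columns 2-3 wiped out by a scratch.
frame = np.tile(np.array([0.0, 10.0, 99.0, 99.0, 40.0, 50.0]), (3, 1))
fixed = repair_vertical_scratch(frame, 2, 3)
```

On this ramp the interpolation restores the original values (20 and 30) exactly; real picture content is rarely so obliging, which is why wider scratches need motion interpolation or cloning instead.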

Another method is to repair the defect by rebuilding one frame of the sequence (e.g., by using cloning techniques) and then using this good frame as the source to clone onto all other frames. Again, this technique can create motion artifacts (primarily in slow-moving scenes), but the major problem is in the images’ grain (or noise) structure.

The problem arises because unlike original film material, cloned material doesn’t have a random grain structure. Instead, it has exactly the same grain structure as the source area. This isn’t usually noticeable (unless the image has been cloned so much that the grain structure is effectively eradicated) because the cloned regions themselves are somewhat randomly distributed. However, when cloning is used on a fixed area over a number of frames, the area stands out—because unlike the rest of the image, the grain pattern on the cloned area never changes. The result is that the cloned region looks like it was pasted on top of the moving images (which is essentially exactly what has happened). One solution to this problem is to use tools to strip all the grain from the original images prior to painting and then add grain back onto the image after cloning (see the sidebar “Texture Recovery,” later in this chapter).

For certain scenes, however, the scratch might be so complicated as to require a visual effects artist to completely rebuild the entire sequence, typically using wire and "rig-removal" techniques. This results in a highly accurate restoration of the scene but is a costly and time-consuming process. It's usually considered only as a last resort.

Physical Restoration

Sometimes the most effective method for removing dirt from film is simply to physically clean it prior to scanning. In fact, many digital intermediate facilities recommend that all reels of film submitted for scanning be run through an ultrasonic cleaner first. This removes much of the dirt, drastically reducing the time required to fix the digital scan files.

Additionally, “wet-gate” film scanners can be used to compensate for scratches on film. A film scratch consists of a depression on the surface of the film, which prevents light from being reflected or transmitted evenly across the film surface, distorting the image. Wet-gate scanners add a liquid layer (such as perchloroethylene, which has the same refractive properties as the film’s gelatine) to the film while scanning, which effectively fills in the depressions. However, they don’t necessarily remove dirt, which may instead float on the liquid layer while being scanned, causing long streaks to be recorded in the image. In addition, deep scratches that affect the film’s dye layer still have to be repaired separately.

A new technique currently being developed is to scan an infrared image for every frame scanned. This creates a matte that can be used to highlight defects on the surface of the film. Where no dust particles exist, the infrared light passes through (or is reflected back from) the film, whereas the presence of dust or other physical defects absorbs or scatters the light. These mattes might then be fed into an automated system that can quickly repair the highlighted areas.

9.2.5 Sharpening

A multitude of factors can contribute to the relative “softness” (or blurriness) of an image. For starters, the original footage might not have been well lit, properly exposed, or even properly focused. In addition, wear and tear of the material robs it of its former sharpness. Images on an old video tape may not have their former crispness, and film dyes might gradually spread or fade. Generation loss invariably reduces the sharpness of an image, as each subsequent copy introduces more defects and less accurate reproductions.

Digital processes can also reduce the sharpness of an image. In fact, with most digital images, softness is usually the result of a combination of destructive processes, such as spatial interpolation. This process is used when resizing (either increasing or reducing the image area) or rotating images, for example.12

Sharpening (also known as "aperture correction" or "unsharp masking") techniques work by re-creating hard edges from blurred regions. Each of the many different sharpening algorithms is suited to different circumstances. Basically, digital sharpening works by analyzing groups of pixels and then increasing the contrast between pixels it detects lying on an edge. The result is that all the edges in the image look more pronounced and are perceived to be sharper. Digital sharpening techniques are by no means perfect. It's simply not possible to reconstruct edges accurately from blurred images, because the original information has been destroyed. Even the best sharpening algorithms are based upon estimates of the edges' locations, and the process of sharpening actually degrades an image. Oversharpening an image can lead to "ringing," where the edges are so pronounced that they seem to pop out of the image. Some algorithms even modify pixels adjacent to edge pixels to increase the apparent sharpness of each edge.
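The core of unsharp masking can be sketched in a few lines (an illustrative Python/NumPy example using a simple 3x3 box blur as the low-pass filter; production sharpeners use more sophisticated kernels, and the function name is hypothetical):

```python
import numpy as np

def unsharp_mask(image, amount=1.0):
    """Sharpen by adding back the difference between the image and a
    blurred copy of itself, scaled by `amount` (a 3x3 box blur stands in
    for the low-pass filter)."""
    img = image.astype(np.float64)
    h, w = img.shape
    padded = np.pad(img, 1, mode='edge')
    blurred = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= 9.0
    return img + amount * (img - blurred)

# A hard step edge: dark on the left, bright on the right.
edge = np.zeros((4, 6))
edge[:, 3:] = 90.0
sharpened = unsharp_mask(edge)
```

The overshoot on either side of the step (values dipping below 0 and rising above 90) is exactly the "ringing" described above; modest amounts read as extra sharpness, larger amounts become visible halos.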

In the digital intermediate pipeline, sharpening techniques are best avoided wherever possible; when they're used, they should be applied to individual areas rather than entire images. Temporally, however, the same sharpening settings should be applied across entire shots rather than to individual frames (or at least, the sharpening effect should fade in and out over a range of frames) to ensure that the results are consistent and don't produce artifacts when viewed at speed. Also, sharpening should only be applied to the luminosity component of an image, because our eyes perceive detail in terms of luminance rather than chromaticity.

It’s important to note that sharpening can’t “refocus” an image. Focusing a camera during a shoot adjusts many properties of the recorded image. Objects within the focal plane are sharp, while objects farther from the focal plane become increasingly blurred. Simply put, refocusing a camera both sharpens certain objects and blurs others. This process is very difficult to reproduce digitally using recorded footage, primarily because no depth information is recorded with the image. However, the blurring aspect, at least, is something that digital systems can do very well.

images

Figure 9–11   Digital sharpening can increase an image’s perceptual sharpness, but it also increases the level of noise

9.2.6 Blurring and Defocusing

Blurring a digital image is done by averaging a group of pixels. As with sharpening, several different algorithms control exactly how the averaging is done, but the net result is that the more blurring that’s applied, the fewer the visible details in an image. As with sharpening, blurring should be applied to a whole frame range rather than individual frames, so the effect isn’t distracting.
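In code, the averaging idea is direct (a Python/NumPy sketch with a hypothetical function name; a larger radius averages a bigger neighbourhood and removes more detail):

```python
import numpy as np

def box_blur(image, radius=1):
    """Blur by replacing each pixel with the mean of its neighbourhood."""
    img = image.astype(np.float64)
    h, w = img.shape
    padded = np.pad(img, radius, mode='edge')
    size = 2 * radius + 1
    out = np.zeros_like(img)
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (size * size)

rng = np.random.default_rng(0)
noisy = rng.uniform(0.0, 255.0, (32, 32))
# The variance (a rough proxy for visible detail) falls as the radius grows.
```

Gaussian and other weighted kernels change how the averaging is distributed, but the net effect is the same: more blur, fewer visible details.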

Blurring is a predictable, well-behaved process, in the sense that it only destroys existing information rather than inventing detail. However, it doesn't accurately mimic the effects of defocusing a lens, which is a significantly more complex process than simply averaging the image area.

For example, defocusing a camera lens blurs the image but in such a way that bright regions “bloom” outward. In addition, the shape of this blooming is determined by the characteristics of the lens elements, creating, for example, circular or octagonal spots of light in the image.

Fortunately, several image-processing programs, such as GenArts' Sapphire RackDefocus (www.genarts.com), mimic these effects, allowing scenes to be selectively defocused as required. Again, because digital images contain no depth information, it's difficult to accurately replicate the exact effects of defocusing a lens, but for most applications, defocusing processes work well.13

9.2.7 Image Warping

Manipulating digital images provides a wealth of options. Images can be cropped, resized, rotated, and panned fairly easily. In addition, it's also possible to stretch and squash images—that is, resize them in a particular direction. When you resize a square image normally, the resulting image is still square. But if you stretch the image horizontally, the vertical dimension doesn't change, and the resulting image becomes rectangular. Of course, all the features within the image become distorted, but this result may be intended. For example, when working with panoramic images that have been anamorphically squeezed to fit within a narrow image area, it may be necessary to stretch the image so that it looks realistic.
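A horizontal stretch with linear interpolation can be sketched as follows (an illustrative Python/NumPy example with a hypothetical function name; the vertical dimension is untouched, as when unsqueezing anamorphic footage):

```python
import numpy as np

def stretch_horizontal(image, factor):
    """Stretch an image horizontally by `factor` using linear
    interpolation, leaving the vertical dimension unchanged."""
    h, w = image.shape
    new_w = int(round(w * factor))
    xs = np.linspace(0.0, w - 1, new_w)   # sample positions in the source
    x0 = np.floor(xs).astype(int)
    x1 = np.minimum(x0 + 1, w - 1)
    t = xs - x0
    return (1 - t) * image[:, x0] + t * image[:, x1]

img = np.arange(12.0).reshape(3, 4)       # a simple horizontal ramp
wide = stretch_horizontal(img, 2.0)       # now 3 x 8
```

Because the new pixels are interpolated rather than recovered, every such spatial operation is mildly destructive, which is the same reason warping (discussed below in this section) degrades images.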

images

Figure 9–12   Digital blurring doesn't produce the same results as manually defocusing a camera lens, although digital defocusing effects come pretty close (GenArts' Sapphire RackDefocus was used in this example). Note that manually defocusing a camera merely shifts the plane of focus, which may bring other elements into sharper focus

With a digital intermediate pipeline, it’s possible to take this idea even further. In addition to stretching entire images, it’s possible to stretch parts of the image. For example, a region within the image can be selected, and its contents stretched outward. Rather than replacing the surrounding image information, the rest of the image is squashed as required to accommodate the change, a process known as “warping.” Although this sounds like something of a gimmick, it has many practical uses. For example, it can correct lens distortions, which stretch and squash specific places in the recorded image. Warping distorted images can reduce these effects dramatically. Because the warping process uses interpolation algorithms, warping is a destructive process, and the more warping is used on an image, the more degraded the image becomes. The use of warping should be kept to a minimum.

Glamor Retouching

Photographic retouchers, particularly those working in the fashion industry, can spend weeks working on a single image, producing results that are effectively more perfect than reality. In addition to color grading and correcting dust, scratches, and other film damage, they routinely manipulate the subject of the image, for example, removing stray hairs or skin blemishes, or making more extreme adjustments to body shapes or facial features. This process adds a whole new dimension to fashion photography, effectively allowing additional makeup or plastic surgery to be applied conveniently (and safely) after the shoot has been completed.

Although the digital intermediate pipeline shares many of the tools used by the photographic retoucher, much progress still must be made before such cosmetic enhancements can be applied to entire films. Probably the most important factor is the time it requires to perform such operations. In addition to the time required to apply such extensive processes to each individual frame, time must also be spent making sure that no motion artifacts are produced as a result. For example, simply removing a stray hair from in front of an actor’s face on each frame of a sequence requires exactly the same methodology as removing a running scratch from a digital film sequence.

For now at least, certain shots, such as those without a lot of movement, might be viable candidates for such glamor-retouching processes, where the process parameters can be designed on one frame and then simply replicated across the sequence. Presumably the number of available options will grow in the future, as new technologies and hardware advances emerge.

images

Figure 9–13   Images can be selectively warped either to correct problems or to create a special effect

9.2.8 Texture Reduction and Addition

When you watch a film sequence, minute variations in the image structure are caused by the random distribution of grain across the frame. A “normal” amount of film grain isn’t considered to be distracting to the viewer and, in fact, can even enhance the image. (Many cinematographers choose film stock on the basis of the type of grain it produces, thereby using grain as a creative tool.)

The same is true of video footage, which suffers from noise distributed randomly across the image. A small amount of noise is considered an actual improvement to an image sequence, making it appear perceptually sharper and more detailed than an equivalent, noiseless sequence.

From a quality point of view, however, noise and grain (or other "image texture") should be considered a form of degradation. Image texture obscures or interferes with fine or subtle details, and the image degrades further as the noise or grain level increases.14 Beyond a certain level, noise or grain becomes more of a distraction than a pleasing addition.

Digital methods can be used to add or reduce the amount of noise and grain in an image, but for repair purposes, levels are more likely to be reduced than increased (although they're often increased as a creative device). Grain is by nature more complicated to add and remove than noise, mainly because grain tends to have a particular shape, whereas noise is typically distributed among individual pixels. However, certain noise-reduction techniques, usually with stronger parameter settings, can remove grain patterns as well.

The primary technique for reducing noise in an image sequence is to use a "median" filter. This simple process analyzes, for each pixel, the surrounding area of pixels, finds the median value (i.e., the middle value when the surrounding values are sorted), and then sets the central pixel to that value. In doing so, the impact of noise within the image is reduced (because small, random fluctuations are eliminated), but the process also destroys image information, particularly fine or subtle details. This technique has other variations, such as filters that process chromatic information separately from luminosity information, various proprietary filters, and filters targeting specific types of texture, such as the grain of a particular film stock.
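A basic median filter can be sketched as follows (an illustrative Python/NumPy example with hypothetical names; production filters operate on colour channels separately and handle motion):

```python
import numpy as np

def median_filter(image, radius=1):
    """Set each pixel to the median of its surrounding neighbourhood,
    suppressing small random fluctuations at the cost of fine detail."""
    img = image.astype(np.float64)
    h, w = img.shape
    padded = np.pad(img, radius, mode='edge')
    size = 2 * radius + 1
    windows = np.stack([padded[dy:dy + h, dx:dx + w]
                        for dy in range(size) for dx in range(size)])
    return np.median(windows, axis=0)

frame = np.full((5, 5), 80.0)
frame[2, 2] = 255.0             # a single "hot" noise pixel
cleaned = median_filter(frame)  # the outlier disappears entirely
```

Unlike a mean filter, the median discards outliers rather than diluting them, which is why it handles impulse noise so well; the trade-off is that genuine one-pixel details are discarded just as ruthlessly.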

Any type of texture-reduction process has trade-offs between the amount of texture removed and the resultant image quality. Each scene must be judged separately. In addition, because so many processes are affected by noise (while others may introduce noise), it becomes important to decide at which stage in the pipeline to implement noise reduction, with some digital intermediate facilities performing noise reduction early on, while others do it as the final stage prior to output.

A separate branch of processes is designed to add texture to images. These processes work best when the source image contains minimal texture and are usually used to add consistency between scenes, for example, increasing the level of film grain in one shot to match the level in the previous shot.

More creative effects are possible. Texture can be added to simulate a variety of materials. Film grain can be added to video footage to make it appear as though the footage originated on film. It’s even possible, for example, to texture images to look like paint on canvas—though these types of texture operations usually have to be used in conjunction with color-grading operations and possibly digital filters to complete the effect.
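A crude form of texture addition can be sketched like this (an illustrative Python/NumPy example; real grain emulators shape the noise per colour channel to match a chosen film stock, so plain Gaussian noise is only an approximation, and the function name is hypothetical):

```python
import numpy as np

def add_grain(image, strength=10.0, seed=None):
    """Add zero-mean Gaussian noise as a crude stand-in for film grain.

    `strength` is the noise standard deviation in 8-bit code values;
    results are clipped back into the displayable 0-255 range.
    """
    rng = np.random.default_rng(seed)
    grainy = image.astype(np.float64) + rng.normal(0.0, strength, image.shape)
    return np.clip(grainy, 0.0, 255.0)

flat = np.full((64, 64), 128.0)          # a featureless grey frame
textured = add_grain(flat, strength=10.0, seed=1)
```

Because the noise is zero-mean, the overall brightness is essentially unchanged; only the texture level rises.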

Texture Recovery

Almost all of the operations available in a digital intermediate pipeline can be applied to entire images or to selected areas. However, many of them affect an image's texture properties, either reducing elements such as noise and grain or exacerbating them. When applied to entire images, this can become a continuity issue (successive shots may have different levels of texture than others), which can distract the audience. Where the texture differs in images that have undergone selective processing, such as selective blurring, the changed areas may stand out from the rest of the image when viewed at speed. This can be compensated for with texture-recovery processes, which rebuild the texture in images to match other areas. One such method is to apply grain or noise to the selected areas, adjusting the parameters until visual continuity is achieved. Several commercial grain-emulation filters are designed to generate the correct grain structure once the operator selects a particular film stock.

Another option is to simply remove all texture from footage upon acquisition and then add back the desired level and type of texture prior to final output. This method may also make processes such as scratch removal easier, but it requires more processing time. A quality implication must also be considered because all texture removal operations degrade the image to some degree. A good compromise, therefore, is to limit this procedure to shots that require some form of processing that affect the texture of the image and then add texture to match the unmodified images.

9.2.9 Deinterlacing

The majority of video footage is interlaced (as opposed to footage that is progressively scanned or is “full frame”)—that is, each frame is divided into two fields, with each field occupying alternate lines of the frame.

Each field is typically recorded at a slightly different time from the other, meaning that the scene may have changed between fields.15 Thus, each frame might exhibit motion artifacts when viewed individually or on a progressive (noninterlaced) format (e.g., when projected on film). These artifacts are caused by the delay between fields and are particularly likely to occur on fast-moving subjects. Provided the final output is also an interlaced format, the interlacing artifacts may not present any significant problems, and the footage can be left as it is. However, when progressive output formats or spatial operations (such as resizing or rotating the images) are required, the interlacing becomes part of the image, and the artifacts remain in the picture.

Fortunately, interlacing happens in a predictable way and can be corrected (or deinterlaced) digitally, using a number of methods. The simplest method is to discard one of the fields and regenerate the missing lines by interpolating from the remaining field. However, this approach dramatically lowers the image quality and can result in artifacts, such as smeared lines on the image.
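The field-discard method described above can be illustrated with a minimal numpy sketch, assuming a grayscale frame whose even rows form one field and odd rows the other; the function is hypothetical and glosses over chroma handling and field-order details:

```python
import numpy as np

def deinterlace_single_field(frame, keep="even"):
    """Naive deinterlace: keep one field, rebuild the other by averaging.

    frame is a 2-D (height x width) array whose even rows belong to one
    field and odd rows to the other. The discarded field's lines are
    replaced with the average of the kept lines above and below, which
    is why this method softens the image and loses half the vertical
    resolution.
    """
    out = frame.astype(float).copy()
    h = frame.shape[0]
    start = 1 if keep == "even" else 0
    for y in range(start, h, 2):
        above = out[y - 1] if y > 0 else out[y + 1]
        below = out[y + 1] if y < h - 1 else out[y - 1]
        out[y] = (above + below) / 2.0
    return out
```

Motion-compensated deinterlacers improve on this by using both fields plus motion vectors, rather than throwing one field away.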

A more advanced method is to use dedicated motion analysis software, such as RE:Vision Effects’ FieldsKit (www.revisionfx.com), which calculates the motion of objects in a scene and uses that information to rebuild the image information. Methods such as these are slower but produce more accurate and higher-quality results.

For deinterlacing to work correctly, it must be applied before any other process that might alter the position or size of the fields. No deinterlacing method is perfect, and the ideal solution is always to shoot using a progressive format wherever possible.

images

Figure 9–14   Motion analysis and image interpolation can be used together to deinterlace images while still retaining much of the original quality

9.2.10 Motion Stabilization

Audiences like smooth camera movement, particularly when the footage is blown up to cinema screen size. Unsteady camera motion, such as handheld camcorder footage, can cause a cinema audience to suffer disorientation, vertigo, and even nausea in some cases. Most film and video productions therefore ensure that all camera movements are as smooth as possible. Even handheld shots are typically made using special rigs (such as “Steadicam” rigs) to produce smoother motion.

Unfortunately, not all footage is as smooth as possible. Sometimes the image contains little bumps or vibrations that aren’t noticed until after the end of shooting. In addition, lower-budget productions may not have access to the required equipment for smooth shots.

“Motion stabilization” (or “image stabilization”) is a process in which the motion of a sequence is analyzed and then smoothed, or “stabilized.” Typically, the operator selects a feature to be tracked (similar to the feature-tracking process covered in the previous chapter), and the tracked motion is then smoothed to the required level. The downside to this process is that the edges of the image must be cropped, depending upon how much extraneous motion is removed. This isn’t a problem if the active picture area isn’t cropped—for example, when a larger picture area was shot than is actually used (as with Academy aperture 35mm film) and the cropped area doesn’t intrude on it. Otherwise, the resultant image may have to be resized to ensure the active area isn’t cropped (which reduces the image quality), or the cropped region may have to be reconstructed, such as by using digital paint techniques. Stabilization also can’t compensate for the “motion blur” smearing produced by a fast-moving camera.
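The core of the smoothing step can be sketched as follows, assuming the feature tracker has already produced an (n, 2) array of per-frame positions; the moving-average filter here stands in for whatever smoothing curve a real stabilizer offers, and the function name is illustrative:

```python
import numpy as np

def stabilize(track, window=5):
    """Smooth a tracked feature path and return per-frame corrections.

    track is an (n, 2) array of the feature's (x, y) position in each
    frame; window is the (odd) moving-average width. The returned
    offsets are the shifts to apply to each frame so the feature
    follows the smoothed path. Applying those shifts is what forces
    the edge crop described in the text.
    """
    track = np.asarray(track, dtype=float)
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(track, ((pad, pad), (0, 0)), mode="edge")
    smooth = np.stack(
        [np.convolve(padded[:, i], kernel, mode="valid") for i in range(2)],
        axis=1,
    )
    return smooth - track  # shift to apply to each frame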

Similar to this is the concept of “reracking” a shot. Sometimes, misaligned scans or splices cause film footage to lose its vertical positioning. Visually, part of one film frame carries across to the next digital image (a similar effect is also sometimes seen when video signals lose their tracking information). Some digital restoration systems provide means to rerack the footage interactively, effectively resplicing images together. Often, though, additional restoration processes are needed to correct other problems that are a side effect of misaligned framing (such as color differences, warping, or destabilized images).

9.2.11 Temporal Interpolation

One of the unique and most powerful tools within the digital intermediate pipeline is the ability to analyze and interpolate the motion of objects in an image sequence. Motion (or “temporal”) interpolation can be used to increase or decrease the amount of motion blur within a scene (effectively emulating adjustment of the camera’s shutter-speed control, even after shooting), regenerate an existing frame from others (such as for replacing damaged frames), and generate new frames for a sequence at a different frame rate (for creating slow-motion, fast-motion, or variable-speed motion effects).

The mechanics of motion-interpolation systems vary by design, but in general they work by analyzing every pixel (or group of pixels) in each image and comparing it to the pixels in other frames of the sequence, constructing a “map” of vectors describing the motion of each pixel through the sequence. From this map, it’s possible to estimate the position of each pixel at a sub-frame level (i.e., between frames). By blending regions of fast motion together, it’s possible to simulate additional motion blur (the smearing produced when a fast-moving object passes in front of a camera with a slow shutter speed, the length of the blur indicating how far the object traveled while the shutter was open). It’s also possible to reverse this process to some degree, effectively reducing the amount of motion blur, but this ability depends entirely on the image content; in general, it’s much easier to add motion blur than to remove it.
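As a rough illustration of the vector-map idea, the sketch below forward-warps pixels to a sub-frame time using a supplied motion field. It is a toy example, not any vendor's algorithm: real systems estimate the vectors themselves and handle occlusions, holes, and sub-pixel filtering far more carefully.

```python
import numpy as np

def warp_by_vectors(frame, vectors, t):
    """Estimate a sub-frame image by pushing pixels along motion vectors.

    frame: 2-D luminance array; vectors: (h, w, 2) per-pixel (dy, dx)
    motion toward the next frame; t in [0, 1] is the sub-frame time.
    Each pixel is splatted t of the way along its vector, and
    overlapping contributions are averaged. Pixels nothing lands on
    remain as holes (zero here); production systems fill them from
    neighboring frames.
    """
    h, w = frame.shape
    out = np.zeros((h, w))
    weight = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            dy, dx = vectors[y, x]
            ny = int(round(y + t * dy))
            nx = int(round(x + t * dx))
            if 0 <= ny < h and 0 <= nx < w:
                out[ny, nx] += frame[y, x]
                weight[ny, nx] += 1.0
    filled = weight > 0
    out[filled] /= weight[filled]
    return out
```

Averaging several such warps at closely spaced values of t is one simple way to synthesize the motion-blur trails described above.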

Motion interpolation is accurate to a point. As a rule of thumb, the higher the frame rate relative to the speed (in pixels per second) of each moving object in the scene, the more accurate the results will be. The best results are produced by shooting at a high frame rate. Problems are also caused by, for example, lighting changes or objects that cross over each other, and the results may have to be adjusted with other processes—for example, to remove artifacts.

As with all forms of interpolation, motion-interpolated images are of lower quality than the originals. When using motion interpolation to generate frames to replace areas of other frames, it’s sometimes worthwhile to generate the new frame and then replace only the required sections rather than the entire frame for better results.

9.3 Summary

As we’ve seen, a lot of different tools are available to the digital intermediate pipeline. When utilized correctly, these tools can improve the perceptual quality of the image. However, many of them require a great deal of rendering time, and careless use can produce the opposite of the intended effect, with some restoration processes actually reducing image quality. As stated, dust-busting a digitized feature film is one of the most time-consuming and expensive parts of the process, yet in many ways it’s still considerably faster than attempting to apply, for example, digital sharpening and blurring techniques to the entire film. Also, some scenes might need complex retouching procedures, requiring a visual effects artist rather than a digital intermediate restoration operator.

The next chapter looks at the use of digital processes, as well as other computer-generated material (such as text), that can be applied to images to modify them for creative effects.

1 Some optical problems, such as lens flare, may actually enhance the aesthetics of an image and are sometimes even artificially added to images.

2 The same problem can be caused by mechanical failure during shooting or projection; even when the film is undamaged, if the projector or film camera can’t accurately line up the film for each frame, the same problem may be visible. When the error is caused by a faulty camera, the problem is permanently recorded on the film, whereas when the projector is the cause, changing (or repairing) the projector solves the problem. The same problem may also be caused in digitally scanned material if the scanner doesn’t use pin registration during scanning (i.e., locking each frame into place before it’s scanned).

3 This can also be an optical problem that occurs when lower-quality lenses are used—such lenses refract different light wavelengths by slightly different amounts.

4 This isn’t entirely true because continued exposure to light fades or bleaches developed film, but the effect takes a long time to build up and is analogous to leaving a magazine in the sun for a long time.

5 Many video camera problems are caused by built-in features that overcompensate for certain situations. For instance, many cameras produce oversharpened or oversaturated images by default, on the assumption that the consumer won’t focus perfectly or light a scene before shooting it. In most cases, these problems can be avoided by disabling the respective controls on the camera.

6 Interestingly, it’s sometimes quicker to just use another restoration method, such as digital paint, to correct render errors than to re-render the sequence.

7 But only the originals are resampled at a higher bit depth—simply converting an 8-bit image to a 10-bit image provides little benefit.

8 Doing so drastically increases the costs of associated lighting and film stock, however.

9 Note also that most QC checks are performed on HD material, even for film projects.

10 Many interframe compression techniques, such as MPEG, exploit this fact to dramatically reduce the amount of data stored for an image sequence.

11 Ironically, this process can often take as long as manually painting the dust!

12 Although digital images can be rotated in increments of exactly 90 degrees with no loss in quality.

13 Certain computer-generated 3D image formats have the capability to record depth information, which can be used to accurately simulate various focusing techniques.

14 It’s not really accurate to talk in terms of “increasing” or “decreasing” grain in an image, because the differences are due to the relative type and size of the film grains (and hence their frequency), which are in turn determined by the film stock and processing methods. The terms are used here for simplicity.

15 It’s entirely possible to have noninterlaced video material on an interlaced format and for it not to be interlaced. For example, one second of a PAL 50i tape might contain 25 progressive frames split into 50 fields. These fields can be used to perfectly recreate a 25p sequence without artifacts because the problem arises only when the scene is shot using an interlaced method. Throughout this book, it’s assumed, for the sake of simplicity, that progressive material is stored on a progressive format.
