12

Quality Control

The previous chapter described different methods for outputting a finished production to a variety of media. In many cases, the production then undergoes a technical quality control (or QC) check to ensure that it meets the required technical standard for broadcast or theatrical release. The QC process is necessary because of the many links in the production chain. Material is supplied from multiple sources and run through many separate processes, and a number of different companies may have handled it. The QC process is essential in ensuring that an acceptable level of quality is maintained, regardless of the processes used up to that point.

During the QC process, any number of problems may be spotted, ranging from barely noticeable minor issues to severe ones that threaten to make the program unwatchable. The problems may be technical, meaning a problem with the media (such as a faulty video signal); they may be visual, meaning a problem with the image content (such as an out-of-focus shot); or the problems may be editorial (such as a shot starting at the wrong timecode). QCs are usually performed at an external location, to prevent bias, and a QC report is generated (either by human inspection and testing, or by an automated system), with issues typically graded on a five-point scale that indicates the problem’s severity, with a grade 1 problem considered severe and a grade 5 problem considered imperceptible.1

QC requirements vary depending upon the output medium—for example, aliasing may not present significant problems for digital release (particularly when the source master’s resolution is higher than the final digital resolution) but will be noticeable on nondigital formats, or on digital formats with a higher resolution than the source master. Different distributors may also have specific requirements, such as designating particular timecodes for different elements on video masters. For example, the British Broadcasting Corporation (BBC) requires that program content start at 10:00:00:00 and be followed by a minute of black (among other requirements). In addition, various distributors may differ in their interpretation of the severity of a particular issue. Local news channels, for example, might have, out of necessity, a much lower standard for an acceptable level of quality than other channels.

Video formats (both analog and digital) have the most well-defined technical QC specifications, whereas film distribution focuses more on visual quality, and digital QC specifications range from the entirely subjective to the nonexistent, depending upon the specific output and distributor.2

12.1 Technical Video Considerations

Technical problems flagged in a video material’s QC report may be the most difficult to decipher but are usually the simplest to fix. Many are caused by a fault somewhere in the output path—such as a fault with the VCR used to record the footage or an incorrectly configured parameter—and are usually remedied by re-outputting the material. The facility originating the material can detect many problems before resorting to the external QC, particularly when the facility is equipped with the correct monitoring equipment, such as vectorscopes, waveform monitors, and graphics cages.

12.1.1 Video Standard

PAL, NTSC, and SECAM are different video systems with individual specifications. The systems are therefore mutually incompatible—for example, you can’t play back an NTSC tape in a PAL VCR.

Using the wrong video system isn’t strictly a flaw; it has more to do with the distributor’s delivery requirements.3 Video standards issues can be solved by simply outputting the material once again at the correct standard, which may require the use of a different VCR or a changed setting somewhere in the output path (although a QC still must be carried out on the new tape). Be warned, though: different video systems also differ in other properties, particularly the frame rate and image area, and even the color space. Many digital intermediate pipelines handle different output formats transparently, and the material can simply be re-output without any problems. In other situations, however, it may be necessary to re-edit, reframe, or regrade the production for different video standards. Otherwise, a faster option is to run the video through a “standards converter,” recording the output to a new tape at the correct standard, although the results may be inferior in quality to re-outputting directly from the digital source.

12.1.2 Illegal Colors

While the use of illegal colors probably won’t lead to an arrest, it may prevent the show from being broadcast. Different video standards have their own inherent color spaces, and colors that fall outside of that color space (such as those that might be created in an sRGB color space) are termed “illegal,” meaning that they won’t display properly when viewed on standard monitors. Many digital output systems automatically curb the output of illegal colors for specific video systems, but for those systems that don’t, it may be possible to apply a “video-safe-color” digital filter to the final material that renders the footage without any illegal colors. Otherwise, it may be necessary to regrade the sections with illegal colors and then re-output them.
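
For systems that don’t legalize automatically, the principle behind such a filter can be sketched in a few lines. The following is a minimal illustration only, assuming 8-bit RGB frames held as numpy arrays and hard-clipping to the nominal Rec. 601/709 legal range; real legalizers are gentler, typically desaturating offending pixels rather than clipping them:

```python
import numpy as np

def legalize_rgb(frame):
    """Crude video-safe-color filter: hard-clip 8-bit RGB values to
    the nominal legal range (16-235, per Rec. 601/709 conventions).
    Real legalizers usually desaturate out-of-gamut colors instead,
    which preserves more detail than clipping does."""
    return np.clip(frame, 16, 235).astype(np.uint8)
```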

Although video allows for headroom in terms of luminance ranges, specific black points (the “black level”) and white points (the “video level”) should be adhered to when outputting to specific video formats. The minimum pixel value (i.e., RGB 0,0,0) of a sequence should be output at the video’s black point, and the maximum pixel should be output at the video’s white point. Waveform monitors are typically used to detect regions where this may not be the case, by superimposing the luminance signal over a calibrated graph.
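
To make the mapping concrete, the sketch below rescales full-range 8-bit code values onto the nominal video black and white points (16 and 235 in 8-bit Rec. 601/709 terms); unlike the hard clip shown earlier, this preserves the relative spacing of all the values. It’s a simplified illustration, not any particular system’s implementation:

```python
import numpy as np

def full_to_legal(frame):
    """Scale full-range 8-bit values so that pixel value 0 lands on
    the video black level (16) and 255 on the white level (235)."""
    scaled = 16.0 + frame.astype(np.float32) * (235.0 - 16.0) / 255.0
    return np.round(scaled).astype(np.uint8)
```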

Scenes that don’t meet the required luminance ranges may have to be regraded, or re-output onto video. Since many VCRs have controls for correcting the black level and video level during a recording, it’s usually possible to adjust the levels for each recording rather than having to regrade entire sequences, especially if the graded version is correct in terms of luminance ranges. Likewise, if an entire video tape has consistently wrong luminance ranges, it may be possible to dub the tape onto a new one using the VCR’s controls, saving time and expense.

Figure 12–1   A waveform monitor can be used to inspect a video signal’s properties

Similarly, problems may arise with the chromaticity (or “chroma,” a video signal’s color component), where colors may be recorded incorrectly—for example, where blues display as greens. Incorrectly recorded colors can be hard to measure objectively except by checking a known reference (e.g., bars or another standard reference image) using a vectorscope; otherwise, an experienced operator may spot problems subjectively (e.g., with flesh tones) on a calibrated monitor. In general, properly color-graded productions should encounter chrominance problems only because of some fault in the output path, with regrading necessary only when the original grading was flawed. Again, the quickest option for correcting chromaticity issues on entire videos is to make use of the “chroma level” controls on the output VTR.

12.1.3 Aspect Ratio

The “aspect ratio” of a production denotes the shape of the picture area—quite simply, the ratio of the width of the picture to its height. Various formats have strictly defined aspect ratios—for example, most standard-definition videos (as well as “full aperture” 35mm film) have an aspect ratio of 4:3 (also referred to as “fullscreen”), whereas high-definition videos normally have a 16:9 aspect ratio (also referred to as “widescreen”). Other ratios may be used, depending upon the distributor’s requirements. For example, some broadcasters transmit a 14:9 aspect ratio image, which is cropped and letterboxed to fit a 4:3 screen.4

The aspect ratio is usually defined at the start of the production, and the output matches it. Sometimes the picture may have to be reformatted for a different aspect ratio, which can be done by cropping and resizing the images (a process called “center cut-out”—cropping the sides of a 16:9 image to produce a 4:3 image) or by using the “pan and scan” techniques discussed in Chapter 11, which require more interactive control. Alternatively, it’s possible to quickly reformat a video signal by passing it through an aspect ratio converter and recording it onto a new video tape. Simply resizing the picture area to fit a different aspect ratio results in a distorted picture, making the actors in a scene appear shorter and fatter, or taller and thinner.
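
The arithmetic behind a center cut-out is straightforward. The helper below is hypothetical, but it computes the largest centered window of the target shape that fits inside the source frame:

```python
def center_cutout(width, height, target_ratio=4 / 3):
    """Compute the crop window for a center cut-out: the largest
    centered region of the source with the target aspect ratio.
    Returns (x, y, crop_width, crop_height) in pixels."""
    if width / height > target_ratio:
        # Source is wider than the target (e.g., 16:9 to 4:3): crop the sides.
        crop_w, crop_h = int(round(height * target_ratio)), height
    else:
        # Source is taller than the target: crop the top and bottom.
        crop_w, crop_h = width, int(round(width / target_ratio))
    return (width - crop_w) // 2, (height - crop_h) // 2, crop_w, crop_h

# A 1920x1080 (16:9) frame yields a 1440x1080 window offset 240 pixels
# from the left edge; everything outside it is discarded.
print(center_cutout(1920, 1080))  # (240, 0, 1440, 1080)
```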

12.1.4 Safe Areas

Consumer-grade televisions don’t display the entire active picture area. Instead, they crop the edges of the picture by some amount—either electronically or by the casing of the set (which is also true, to some degree, of film projection, where small differences in the size of the film gate may crop the edges of the picture).5 For this reason, content at the edge of a frame is usually not visible, which means that it’s vital not to put important details close to the frame’s edge, whether action elements (events that occur in a given shot) or graphics elements (such as titles, logos, or “bugs”).

To ensure that the vast majority of viewers can see the relevant content, each format has established “safe” areas for both action and graphics. It’s generally accepted that any action appearing in the action-safe zone can be seen by the audience, and any graphics and text within the graphics-safe (or “title-safe”) region will be readable on most television sets. For 4:3 formats, the action-safe area represents the inner 81% of the total image area (or 90% of the width), while the graphics-safe area represents the inner 64% of the picture area (or 80% of the width). For 16:9 formats, the definition becomes a little complicated; refer to the Appendix for more information. Safe areas can be measured using special character generators (or “cages”) that overlay the boundaries on the video image. In addition, a “framing chart” image can be placed at the front of a video playout to help diagnose incorrect framing.
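
Because both safe areas are defined as a central fraction of each linear dimension, the rectangles are simple to compute. A minimal sketch using the 90% and 80% linear figures for 4:3 given above:

```python
def safe_area(width, height, fraction):
    """Return the (x, y, w, h) rectangle covering the central
    `fraction` of each linear dimension: 0.9 gives the action-safe
    area (81% of the picture area), 0.8 the graphics-safe area (64%)."""
    w, h = int(round(width * fraction)), int(round(height * fraction))
    return (width - w) // 2, (height - h) // 2, w, h

action_safe = safe_area(720, 576, 0.9)    # PAL SD frame, as an example
graphics_safe = safe_area(720, 576, 0.8)
```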

Figure 12–2   An example line-up chart, showing the safe areas

The reverse is also true: on every shot, the picture must extend beyond the safe areas all the way to the frame edge, so that displays capable of showing the entire picture area don’t reveal blank edges.6 When these boundaries are breached, especially in the case of text, it may be necessary to return to the original elements (i.e., the footage and text layers) and reposition and recompose them to fit the safe area.

12.1.5 Frame Rate

The frame rate is almost exclusively determined by the output standard. For film, this means a frame rate of 24 or 25fps. For NTSC video, it’s 29.97fps, and for PAL video it’s 25fps. Likewise, each high-definition format has a specific frame rate. For example, 1080p24 is 24fps (progressive frames) and 1080i50 is 25fps (interlaced, 50 fields per second). A comprehensive list of the frame rates for the most common formats can be found in the Appendix.

When a frame rate must be changed, it can be done digitally, using any of the temporal interpolation processes outlined in Chapter 10 or by passing the signal through a standards converter.

12.1.6 Fields

Many video formats (i.e., those that are interlaced) have two fields for every frame of picture. Because each field represents half the frame, one field carries picture content for lines 1, 3, 5, and so on (and is referred to as the “upper” field), while the other carries picture content for lines 2, 4, 6, and so on (and is referred to as the “lower” field). Unlike progressive formats, each field in interlaced video represents a different moment in time, having been photographed separately. If the field photographed first (i.e., the “dominant” field) is output after the second field (but within the same frame), movement in playback can look erratic (although no problem is apparent when looking at a single frame of the sequence). This can happen, for instance, when shooting onto a format (e.g., PAL Betacam) that stores the upper field first and then recording out onto a format (such as PAL DV) that stores the lower field first. To avoid this, either the frames can be deinterlaced completely during the digital intermediate process (see Chapter 9), making them progressive, or the field dominance can be reversed by removing the first field, either digitally or through a standards converter (although this can result in cut points occurring on a field rather than a frame). A second option is to reposition each frame vertically by one line, although doing so removes one line of the image at one edge and creates a blank line at the other (which is usually filled by duplicating the adjacent line).
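
To make the field structure concrete, the numpy sketch below separates frames into fields and reverses the field dominance of a sequence by dropping the first field and re-pairing the remainder, as described above. It assumes upper-field-first source frames of even height and is illustrative rather than production code:

```python
import numpy as np

def split_fields(frame):
    """Separate an interlaced frame (a numpy array of scanlines) into
    its upper field (rows 0, 2, 4...) and lower field (rows 1, 3, 5...)."""
    return frame[0::2], frame[1::2]

def reverse_field_dominance(frames):
    """Reverse field dominance by removing the first field and
    re-pairing every remaining field with its neighbor. The input is
    upper-field-first; the output is lower-field-first and one field
    shorter at each end of the sequence."""
    fields = []
    for frame in frames:
        upper, lower = split_fields(frame)
        fields.extend([upper, lower])   # temporal order: upper, then lower
    fields = fields[1:]                 # drop the very first (upper) field
    out = []
    for low, up in zip(fields[0::2], fields[1::2]):
        frame = np.empty((low.shape[0] + up.shape[0],) + low.shape[1:],
                         dtype=low.dtype)
        frame[0::2] = up                # upper lines come from the later field
        frame[1::2] = low               # lower lines are now displayed first
        out.append(frame)
    return out
```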

Similarly, errors can occur in which fields appear in the wrong place in a frame—that is, a lower field becomes an upper field, and vice versa. In this instance, playback looks erratic whether or not the scene contains movement. Solving this problem requires that the fields be swapped, which may in turn make it necessary to reverse the field dominance as well.

Progressive material may need to be supplied in an interlaced format. Normally this doesn’t require reformatting the footage and carries no associated quality loss. For example, a project mastered at 30fps with progressive frames may be output to 1080i60 (a high-definition interlaced video format at 60 fields per second), with each frame divided in half so that alternate lines are stored in each field. Each frame thus remains identical to the source material. In the (somewhat rare) event that progressive footage must be encoded so that each field represents a separate instant in time, this may be achieved by using a digital “reinterlacing” process on the footage (as described in Chapter 11), although it will likely result in some image degradation.

12.1.7 Timecodes

In most cases, the video timecode should run continuously without breaks. However, recording errors or incorrect settings on the output system or VCR can cause timecode problems, such as jumps in the timecode (e.g., when the timecode suddenly goes from 01:00:00:00 to 12:54:17:05), which may not be visible in the picture content. Fortunately, most timecode problems can be fixed by re-outputting the material onto tape (or onto a new tape if you suspect that the fault lies with the original), by dubbing the tape to a new one, or even by just re-recording the timecode track, leaving the picture and audio untouched.
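
Timecode continuity is also easy to verify automatically. A minimal sketch, assuming non-drop-frame timecode at a fixed frame rate (drop-frame NTSC timecode needs extra handling that this ignores):

```python
def tc_to_frames(tc, fps=25):
    """Convert a non-drop-frame 'HH:MM:SS:FF' timecode to a frame count."""
    h, m, s, f = (int(part) for part in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def find_timecode_breaks(timecodes, fps=25):
    """Return the positions where a per-frame timecode list doesn't
    advance by exactly one frame, e.g., a jump from 01:00:00:00
    straight to 12:54:17:05."""
    frames = [tc_to_frames(tc, fps) for tc in timecodes]
    return [i for i in range(1, len(frames)) if frames[i] - frames[i - 1] != 1]
```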

Many timecode problems are caused by lengthy or multiple recordings; therefore, most facilities make it a common practice to record to a “prestriped” tape (one with a black video signal and timecode track already recorded onto the tape), simply inserting the video and audio content without modifying the tape’s timecode.

12.1.8 Signal Faults

Any number of technical signal faults can occur during a video recording (or indeed, during playback). Problems such as line blanking, control track damage, flicker, and dropout are all related to the mechanics of the VCR and video tape and can usually be remedied by outputting a new tape, possibly using a different VCR. Where this isn’t possible, it may be possible to repair the damage digitally, by digitizing the faulty tape and using restoration techniques before outputting the corrected material. For example, video dropout can be treated in a similar way to dust and scratches on scanned film.

Recording Logs

Many guidelines also require all supplied video material to be accompanied by a “recording log.” This log is simply a listing of the contents of the tape, usually including the following elements for each recorded item:

  1. The timecode on the tape at which the item begins.

  2. The timecode on the tape at which the item ends.

  3. The title or description of the item.

  4. The recording date.

It may also be useful to note other pertinent data for reference—such information as the output system or VCR and the name of the operator.

It’s also good practice to record a “slate” (i.e., a still frame with shot information) before each item, to label each tape with details such as the production name and the type of output (such as “master” or “dub”), and to set the tape’s write-protect tab to prevent accidentally erasing the contents. Some digital intermediate facilities may also use barcode or RFID (radio frequency identification) tags to aid tracking.

Flashing and Repetitive Patterns

People with photosensitive epilepsy (or PSE) are vulnerable to strong light pulses at high frequencies, which can trigger seizures. Under certain conditions, these seizures can be triggered by watching video content that contains bright, flashing images or fast-moving patterns. Individuals with PSE can also be affected by fast cuts, bright spinning spiral patterns, strobe lighting, or other fast-moving, high-contrast content, particularly content with a high red color component. Each distributor offers its own interpretation of exactly what can and cannot be displayed safely, in terms of the maximum frequency (e.g., the BBC allows pulses at intervals of 360ms or longer), but in general, any flashing sequence shouldn’t last longer than five seconds. In certain circumstances, a warning may be placed at the start of a program that contains content that might be unsafe in this regard.

12.2 Film Quality Control

In the usual production pipeline, once the digital intermediate has been recorded to film negative and developed, an “answer print” is made, which may or may not include sound. The answer print serves as a preview of the “release prints” (i.e., the prints distributed to cinemas).

Producing film releases involves fewer technical issues than producing video, in part because far fewer variables are involved in film processing than in video recording. Film-processing laboratories work to a specific, standardized level of quality control and perform the necessary checks on developed material themselves, such as ensuring the proper development of each reel and checking that the sprockets are in the right place. This relieves much of the production and post-production teams’ burden, allowing them to focus instead on more subjective issues.

As with video formats, the aspect ratio is important, and the different 35mm film formats have explicitly defined dimensions for the image area (a list of them can be found in the Appendix), which must be adhered to. The film should be visually inspected for damage, such as scratches, and checked to ensure that it’s free from splices wherever possible. Film can be regraded optically, if necessary, to make overall changes to the prints’ color.

Labs provide “timing reports” to accompany each answer print. The timing reports contain details of the “printer light” settings used to generate the print. Every print is made from a negative using a combination of red, green, and blue light, and the amount of light used can indicate potential problems. A perfectly exposed negative, in a perfectly balanced chemical process, would be exposed with printer lights of 20-20-20. Because of variations between processing runs, most digital intermediate facilities also supply a standard test pattern as part of any filmed-out material, which is used to match-grade the processed film and ensure the correct result.

12.3 Digital Errors

Although a digital pipeline offers the filmmaker many wonderful creative possibilities, a multitude of issues can arise within the digital intermediate environment. Digital problems are fairly rare, but given the sheer volume of data and individual files being processed on a given production, the likelihood of a problem affecting some of the data increases. Many of these problems can be detected visually, while some can be detected by automated processes.

12.3.1 Render Errors

For various reasons, renders of digital images can sometimes fail, resulting in a corrupt file, signs of corruption within an image (typically seen as randomly colored pixels in uncompressed images), images with no content, or images with missing or faulty compositing (e.g., in images with text composited over a background, the text may disappear for no apparent reason partway through a shot). Anything that compromises the actual file structure of the images (namely, file corruption) may be detected fairly easily using a number of automated file processes. Such problems may also be introduced simply by transferring the files, such as across a network, and can be resolved by recopying the file in question from a good copy (or re-rendering it when no such copy is available).
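
At the file level, the automated checks can be as simple as the sketch below, which flags missing, empty, or unexpectedly sized files in a frame sequence. The fixed-size assumption only holds for uncompressed formats (such as DPX), and the function is illustrative rather than any standard tool:

```python
import os

def scan_frame_sequence(paths, expected_size=None):
    """Cheap integrity checks over a rendered frame sequence: flag
    files that are missing, empty, or (optionally) not the expected
    byte size, as candidates for recopying or re-rendering."""
    suspect = []
    for path in paths:
        if not os.path.exists(path):
            suspect.append((path, "missing"))
        elif os.path.getsize(path) == 0:
            suspect.append((path, "empty"))
        elif expected_size and os.path.getsize(path) != expected_size:
            suspect.append((path, "unexpected size"))
    return suspect
```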

All other problems may be detected only by visually inspecting each frame, which can be a tedious process. For this reason, the majority of digital errors (particularly the subtle ones) are detected during the video tape QC (assuming one is performed) or, ideally, during a digital QC prior to output. Render errors can usually be solved by simply re-rendering the shot in question. Note that the entire shot (cut to cut) should be re-rendered to avoid introducing flash frames.

12.3.2 Flash Frames

A “flash frame” (or “timing shift”) is a generic term that can be applied to any sudden change in the content of a shot. It might be a wrong frame inserted into the middle of a sequence, or an abrupt stop of a digital effect, such as when a particular grade applied to a scene accidentally carries over into the first few frames of the next scene.

The causes of flash frames may be editorial, such as when a cut point is set incorrectly on one of the systems or when data is accessed from the wrong shot, or a flash frame may be the fault of the rendering system or output path. In the latter case, the problem can be resolved by simply re-rendering or re-outputting the scene; otherwise, the underlying cause must be corrected first, such as by adjusting the shot list on the relevant systems.

Similar problems can arise with duplicate (repeated) or dropped (missing) frames appearing on the output, which can occur through rendering, output, or editorial problems, necessitating re-rendering, re-outputting, or reconforming, respectively. If the original material is damaged or unavailable, the restoration techniques outlined in Chapter 9 may be used to rebuild frames.

Figure 12–3   In this sequence, the wrong grading parameters were applied in the last frame. When played in real time, the sequence appears to flash at the end
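
Flash frames and duplicates of this kind can also be flagged automatically by differencing consecutive frames. The sketch below uses arbitrary thresholds that would need tuning per production; note that a legitimate cut produces one large difference, whereas a single flash frame produces two in a row:

```python
import numpy as np

def flag_temporal_anomalies(frames, dup_thresh=0.5, flash_thresh=30.0):
    """Flag likely duplicate frames (near-zero difference from the
    previous frame) and flash frames (a frame that differs sharply
    from both neighbors) in a list of numpy image arrays."""
    flags = []
    diffs = [np.mean(np.abs(b.astype(np.float32) - a.astype(np.float32)))
             for a, b in zip(frames, frames[1:])]
    for i, d in enumerate(diffs, start=1):
        if d < dup_thresh:
            flags.append((i, "possible duplicate frame"))
    for i in range(1, len(diffs)):
        if diffs[i - 1] > flash_thresh and diffs[i] > flash_thresh:
            flags.append((i, "possible flash frame"))
    return flags
```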

12.4 Digital Artifacts

By definition, many digital processes are destructive to the image. Digital grading may stretch the colors too far for a particular output, resulting in “posterization,” “clipped,” or “crushed” colors, while resizing images may leave undesirable “aliasing” artifacts (both issues are covered later in this chapter). Each digital process carries the risk of introducing more artifacts, whether it’s oversharpening the image (which causes “ringing”) or using lossy compression (which carries a whole host of potential artifacts), and so the entire digital intermediate process must strike a delicate balance between minimizing the visual impact of such artifacts and using those same processes to improve the image’s subjective quality and creative content. Clearly, maximizing the image quality throughout the pipeline is one of the best ways to do so, but there is always going to be a limit to the effectiveness of such preventative measures.

Anytime artifacts are found, they can usually be traced back to a particular problem, and a decision can be made as to the best method for reducing their impact. For example, when trying to correctly balance a scene’s luminance, you might introduce a high level of noise into the shot by using digital grading on underexposed footage. If too much noise is present, a decision should be made either to reduce the amount of grading applied to the original shot (which may impair the scene’s subjective quality) or to use digital noise-reduction techniques (which may cause the scene to lose sharpness). The Appendix lists the most common destructive digital operations and the types of artifacts they can introduce.

12.4.1 Spatial Artifacts

Spatial artifacts are caused by the “space” of an image—the shape, size, and distribution of pixels. The most common artifact of this type is “aliasing,” which is a jagged appearance of an image’s curved or diagonal edges. Because pixels are just squares, it’s easy to accurately draw horizontal or vertical straight lines just by stacking them next to each other, but when you try to represent a curved or diagonal line or edge with too few pixels, the corners of the pixels stick out, making the curve appear jagged. One way to reduce this effect is to redigitize the image at a higher resolution, so that the edge can be represented with more pixels.

Figure 12–4   Aliasing can be seen along diagonal or curved lines

Figure 12–5   With a higher-resolution image, the effects of aliasing are less pronounced

Redigitizing the image isn’t always an option, particularly by the time you get to the QC stage. Many practical or economic reasons may account for the necessity of working with lower-resolution images in the first place. However, aliasing can be reduced in part by employing “anti-aliasing” mechanisms. These work by averaging regions with edges, altering the aliased pixels to give the impression of a smoother line, but possibly resulting in a loss of sharpness. Many interpolation methods use similar techniques (as discussed in Chapter 10), but for the purpose of fixing QC problems, reducing the impact of aliasing may be possible by using a combination of blurring and sharpening effects, or even by using sophisticated interpolation methods to artificially increase the resolution.
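
As an illustration of the blur-and-sharpen approach, the sketch below uses the Pillow library to apply a slight Gaussian blur (averaging the stair-stepped edge pixels) followed by an unsharp mask (restoring some apparent sharpness). The filter settings are arbitrary starting points, not recommendations:

```python
from PIL import Image, ImageFilter

def soften_aliasing(path_in, path_out):
    """Crudely reduce the visibility of aliased edges: blur slightly
    to average the jagged pixels, then unsharp-mask to recover some
    of the sharpness lost to the blur."""
    img = Image.open(path_in)
    img = img.filter(ImageFilter.GaussianBlur(radius=0.7))
    img = img.filter(ImageFilter.UnsharpMask(radius=2, percent=80, threshold=2))
    img.save(path_out)
```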

Another effect prevalent in digital images, particularly those originating from CCD-based devices, is “blooming,” where a point of light “spills” into an adjacent pixel, which can blur parts of the image. Blooming typically occurs when shooting a brightly lit source. Blooming problems are typically resolved by “edge enhancement” or “sharpening” techniques (see Chapter 9), which increase the contrast along detected edges.

12.4.2 Chromatic Artifacts

When an image contains too few colors to fully represent the subject (as when an image’s bit depth is too low), chromatic artifacts can occur. You can see this problem primarily in regions of gradual color change—for instance, a shot of a sunset. With insufficient color information, regions of graduated color appear to have “steps” where one color abruptly changes into another.

Figure 12–6   If all else fails, it may be possible to reduce the effects of aliasing

This stepping occurs when the difference between two adjacent shades (or channel values) is too great. The most common cause of banding is stretching colors too far during the digital-grading process. Working at as high a bit depth as possible can prevent banding from occurring; otherwise, it may be necessary to regrade the images. A combination of blurring, sharpening, and noise-addition techniques can sometimes mask the problem, but in most cases it only degrades the image further.
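
Both the cause and one mitigation are easy to demonstrate. In the sketch below, a smooth gradient quantized to 16 levels shows obvious banding, while adding noise of about half a quantization step before quantizing (a simple form of dithering) trades the visible steps for fine grain:

```python
import numpy as np

def quantize(img, levels):
    """Quantize a float image (values 0.0-1.0) to a number of levels."""
    return np.round(img * (levels - 1)) / (levels - 1)

# A horizontal gradient quantized to 16 levels exhibits clear banding.
gradient = np.tile(np.linspace(0.0, 1.0, 1024), (256, 1))
banded = quantize(gradient, 16)

# Adding noise of roughly half a quantization step before quantizing
# (dithering) breaks the bands up into far less objectionable grain.
noise = (np.random.random(gradient.shape) - 0.5) / 15
dithered = quantize(gradient + noise, 16)
```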

12.4.3 Noise

“Noise” in an image refers to random errors in the accuracy of the information in a digital image. Such errors aren’t unique to digital images; they’re inherent in most imaging methods (except CG imagery). For instance, in shooting a perfectly gray surface, you would expect an image in which every pixel is the same shade of gray. In reality, however, some pixels might be slightly too dark, others too light. These errors are likely to be distributed fairly randomly across the image and are caused by errors within the capture device itself. Some digital image operations, such as sharpening or color grading, can add to or increase the level of noise in an image.

Figure 12–7   In this image, banding produces discrete delineations between areas of different tones

Several methods are available for reducing the noise in an image. The best way is to capture the same image several times and then create an “average” of all the images. This can be time-consuming (or otherwise impractical), however, and doesn’t account for less random, more localized noise effects (e.g., those caused by a faulty CCD element). Another method of reducing the noise is to use a noise-reduction algorithm, which works by performing a statistical analysis of the image and calculating probable areas of noise. Some common noise-reduction techniques are covered in Chapter 9.
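
The averaging approach itself is trivial to express. A minimal sketch, assuming the repeated captures are perfectly registered:

```python
import numpy as np

def average_exposures(frames):
    """Average repeated captures of a static scene. Random noise falls
    off roughly with the square root of the number of frames (four
    captures halve it), while the underlying image is reinforced."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    return stack.mean(axis=0)
```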

Noise can be insignificant in still images, where a highly detailed scene might contain a small amount of noise that is undetectable when viewed. However, noise becomes a larger problem when present in a sequence of frames, where it’s visible as tiny fluctuations in color.

12.4.4 Temporal Artifacts

Just as with color and space, motion is segmented into evenly spaced discrete units (i.e., frames). Digital moving pictures typically work the same way as other moving-picture formats—that is, as a running sequence of still images. The major problem arises when this rate of display (i.e., the “frame rate”) is too low: the viewer becomes aware of the individual images, and the illusion of motion is lost. The sequence is said to “strobe.” This strobing effect isn’t unique to digital media, and in fact, the film and television industries’ own solutions are equally applicable here. The human eye is unable to detect most changes that are faster than one-fifteenth of a second in duration (peripheral vision can detect changes in motion faster than the center of vision can). Therefore, cinema and television standards dictate frame rates high enough that each frame is displayed for less than one-fifteenth of a second, so that strobing effects won’t be visible. In practice, cinema standards assume a frame rate of 24–25fps, while broadcasters typically use a rate of 25–30fps (depending on the regional standard). Digital media can be played back at any frame rate, but for the sake of simplicity (and to avoid affecting the audio sync), the same frame rate as the output format is typically used. It’s also logical that the playback frame rate should match the rate at which the imagery was acquired; otherwise, the motion may look unnaturally fast or slow (unless that effect is desired, of course).

Figure 12–8   A fast-moving object exhibits a lot of motion blur with a slow shutter speed

“Motion blur” is an effect caused by an object’s movement during the length of a single frame. In an extreme example, a ball being photographed might move across the whole screen in the space of one frame, causing it to be imaged along the entire path, and resulting in a long “smear” or blurred image of the ball.

Motion blur is controlled, to some degree, by the length of time the imaging device records a frame. Most imaging devices have moving parts and, therefore, can’t record continuously. For example, video cameras have a shutter that closes (i.e., blocking all light from the scene) when advancing from one frame to the next. (Film cameras run the shutter at a fixed speed, instead varying its angle, but the principle is the same.) Thus, any movement that occurs during the brief period that the shutter is closed won’t be recorded. Conversely, all motion that occurs while the shutter is open is recorded onto a single frame. The two ways of reducing the amount of motion blur are either to increase the frame rate (which isn’t always practical because it could cause playback or compatibility issues later on) or to use a faster shutter speed (i.e., a shorter exposure time, or a narrower shutter angle).
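
The relationship is simple to quantify. As a sketch of the standard camera arithmetic (not taken from this chapter): a rotary shutter exposes each frame for (angle / 360) x (1 / fps) seconds, and the length of the blur smear is the object’s on-screen speed multiplied by that time:

```python
def exposure_time(fps, shutter_angle=180.0):
    """Exposure time per frame for a rotary film-style shutter:
    t = (shutter_angle / 360) * (1 / fps)."""
    return (shutter_angle / 360.0) / fps

# At 24fps with a standard 180-degree shutter, each frame is exposed
# for 1/48 second, so an object crossing a 2000-pixel-wide frame in
# one second smears across roughly 2000 / 48 = 42 pixels.
print(exposure_time(24))  # ~0.0208 seconds
```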

Interestingly, although motion blur can be considered an artifact, many people prefer watching footage that contains some amount of motion blur, making it more of a creative issue than a quality one. For this reason, the degree of motion blur in a scene is normally controlled at the time of shooting, although it’s much easier to add simulated motion blur later on than to remove motion blur that has been recorded in-camera.7

Many other motion-related artifacts can occur. All of them are caused by changes to some aspect of the images between frames (e.g., mis-registration or color flashes), but all are a product of the equipment or methods used during shooting and aren’t a product of digital media.

Figure 12–9   Less motion blur is seen with a faster shutter speed

12.5 Visual Considerations

Maintaining high-quality footage isn’t just about preventing processing defects (whether digital or otherwise). In addition, subjective factors affect the visual quality of an image.

12.5.1 Sharpness

Audiences respond well to sharper images, with visibly strong edges and with important features (particularly human faces) within the focal plane. The sharpest images are obtained in the digital intermediate environment by acquiring source material at the highest possible resolution (usually the source format’s native resolution), avoiding any processes that involve interpolation or smoothing algorithms, and outputting at the same resolution at which the source was captured.

In reality, it’s difficult to avoid processes that affect an image’s sharpness in some way. However, in most cases, the overall loss of sharpness may be negligible compared to the benefits achieved, as when reducing the level of noise on a particular shot. By far the most significant way to ensure sharp images is to achieve the maximum possible sharpness when shooting, which means using high-quality, carefully focused lenses.

When image sharpness is a concern during the QC process, it may be possible to use digital-sharpening techniques outlined in Chapter 9, although these techniques may present undesirable side effects.

12.5.2 Exposure

Broadly speaking, properly exposed footage has bright highlights, dark shadows, and easily discernible pertinent features.8 The color-grading process (if adopted) of the digital intermediate pipeline should ensure that the output footage is balanced as desired, and the color balance shouldn’t change much during the rest of the digital intermediate process. The ability of color grading to change the exposure of a given shot, particularly without introducing significant artifacts into the footage, is somewhat limited: the general rule is that digital grading can change the recorded exposure by about 20 printer lights (about one stop) without introducing noticeable artifacts. By far the best guarantee of correct exposure is to expose the material correctly when shooting, which means using sufficient illumination in the scene, in conjunction with the correct filters and camera settings (and, for film shoots, the right film stock, not to mention an appropriate development process). In cases where the source footage’s exposure can’t be guaranteed (e.g., archival footage), digital grading may be used to correct the exposure (possibly in combination with a number of other processes for extreme exposure differences).

12.5.3 Color Rendition

To maximize audience comprehension (as well as audience empathy), colors in a given scene should be accurate renditions of their real-life counterparts. A tomato is much easier to recognize when it’s red than when it’s orange (it may get mistaken for an orange in the latter case). This is particularly true of flesh tones, as people can differentiate between realistic flesh tones and those that are off color.

During a shoot, color balance is controlled through careful lighting and camera settings, and with filters and recording media, but the fine details of color in the digital intermediate pipeline are entirely controlled during the color-grading stage, typically with the digital colorist working in conjunction with the cinematographer. For this reason, the best way to ensure accurate color rendition is by using an experienced cinematographer, a competent colorist, and correctly calibrated equipment.

12.5.4 Continuity

The term “continuity” covers many aspects of a production. It encompasses scene continuity (for example, a person sunbathing in one shot normally won’t be drenched with rain in the next cut), story continuity (actors in a production set in the Middle Ages shouldn’t be wearing wristwatches, and television sets shouldn’t be part of any shot), and visual continuity (a scene of darkness and uncertainty won’t normally include a bright and sunny shot). Visual continuity can also be disrupted by varying degrees of certain digital processes, such as inserting a shot with a strong glow effect into a scene with more subtle glow effects (or none at all). Visual continuity issues can sometimes be corrected by adjusting the properties of the rogue shot using the processes at the production team’s disposal, such as darkening an overly bright shot. In other cases (and this is true of other continuity issues), a visual effect or reshoot may be necessary.

12.5.5 Smooth Motion

One factor that distinguishes amateur productions (e.g., home videos) from those of broadcast or film quality is the unsteady motion visible in the former. By contrast, professional productions, for the most part, feature smooth camera movements, even in frantic handheld scenes, through a combination of specific equipment geared for this purpose and the skill of the camera operator. Besides being difficult to watch, unsteady motion is largely avoided because it can cause motion sickness. In situations where the camera motion isn’t as smooth as desired, whether because of the way the shot was filmed or because of faulty camera equipment, you can use digital stabilization techniques (covered in Chapter 9) to smooth it out.

Source Material QCs

There’s a lot to be said for performing the same standard of QC checks on the source material acquired into the digital intermediate process as on the output material. In the long run, problems such as improperly fixed film stock can be treated much more easily (and much more cheaply) if they’re caught early, rather than corrected digitally later down the line. During shooting especially, material found to be faulty can be reshot, which isn’t usually an option by the time the material is conformed and can be viewed. In addition, if other problems are noticed early on, such as excessive dirt and scratches on a reel of film, you might be able to treat them first, saving time (and possibly money).

12.6 Editorial Issues

Editorial issues include such problems as countdown leaders starting on the wrong frame, synchronization issues, and bad cuts.

Each distributor generally has requirements for every output element—for example, a requirement for the starting timecode of the production (i.e., the “first frame of picture”) or for the location of textless elements. If such elements are in the wrong place, it may be necessary to re-output the elements.

Sync problems can occur for a variety of reasons. Perhaps the production has been conformed incorrectly, or it contains duplicated or dropped frames somewhere; an outdated EDL might have been used, or any number of other problems might be encountered. Similarly, “bad edits” (such as a cut in the middle of a shot, or the wrong shot inserted into the middle of a sequence) may exhibit symptoms similar to sync problems. Sync problems are usually detected by running the audio track with the finished production (audio sync problems generally manifest as a delay between when speech is heard and when the speaker’s mouth moves). Alternatively, running a copy of the offline edit, if available, alongside the finished output version normally reveals both bad edits and sync problems, as well as the starting location of the problem and hence its cause.

Depending upon their specific cause, these problems are usually resolved by correcting the edit and re-outputting the sequence. In the event that the sequence “drifts,” becoming increasingly out of sync, it’s likely that a frame-rate mismatch has occurred, caused either by the way the material was conformed or by an incorrect setting in the output path. When the production has been conformed at the wrong frame rate, correcting it can involve a lot of work.

12.7 The Digital QC Process

It goes without saying that throughout a production, every digital image should be checked for problems. However, a thorough QC check should be made prior to final output (from the digital source master, if available). Performing the QC requires watching the material at the correct speed, as well as using slower speeds to check details. It may be possible to use automated processes to check for some problems, such as illegal colors or damaged frames. Analytical devices, such as waveform monitors or digital histograms, may also provide a more accurate examination during the QC process. Ideally, material should be checked on a digital system that can quickly fix problems, without having to switch back to other systems. Systems such as Assimilate’s Scratch (www.assimilateinc.com) enable the QCer to play back and compare images and provide tools to correct problems.
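
Some of these automated checks are easy to script. The sketch below flags two of the simpler problems on a per-frame basis, (near-)black frames and out-of-range values, with purely illustrative thresholds; it’s a supplement to, not a substitute for, watching the material at speed:

```python
import numpy as np

def qc_frame(rgb8, black_thresh=2.0):
    """Simple automated per-frame checks: flag frames that are
    essentially black, and frames where a meaningful proportion of
    8-bit values fall outside the nominal legal range (16-235)."""
    issues = []
    if rgb8.mean() < black_thresh:
        issues.append("frame is (nearly) black")
    out_of_range = np.mean((rgb8 < 16) | (rgb8 > 235))
    if out_of_range > 0.01:  # more than 1% of samples out of range
        issues.append(f"{out_of_range:.1%} of values outside 16-235")
    return issues
```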

12.7.1 Comparing Digital Images

Many systems allow the simultaneous display of multiple digital images, which is useful for making detailed comparisons. For example, it may be possible to sync the digital playback system with a video playback system, to cross-reference the edit in an offline reference video with the final output. Such systems can also be used for visual comparisons of different versions of an image—for example, comparing a graded image to an ungraded one. To make this process easier, you can use split screens or some sort of interactive wipe process to compare images. The ability to create “difference mattes” can also be important. A difference matte is an image mathematically generated by comparing two images: wherever pixels are identical, the resulting pixel is black, and the greater the difference, the brighter the pixel becomes. This approach can be very useful in ensuring that changes made to an image don’t significantly affect the image content.
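
A difference matte amounts to a per-pixel absolute difference, as in the minimal sketch below; small differences are often multiplied up so they’re visible on a monitor:

```python
import numpy as np

def difference_matte(a, b, gain=1):
    """Black wherever two 8-bit images match, brighter where they
    differ; the gain factor amplifies subtle differences."""
    diff = np.abs(a.astype(np.int16) - b.astype(np.int16)) * gain
    return np.clip(diff, 0, 255).astype(np.uint8)
```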

Figure 12–10   Systems such as Assimilate’s Scratch provide a digital playback system suitable for the QC process

Creative Decisions

There’s a caveat to this entire discussion of visual QC issues: in many cases, certain shots were intended to look the way they do. That’s not to suggest that people can’t be bothered to ensure a high level of quality, but that sometimes they’re trying to create a visual style for the production that deviates from convention. For example, a deliberately out-of-focus human face might intentionally prevent the audience from reading the actor’s expression. Similarly, the color rendition of an object or person might be made unrealistic in order to stylize it. The digital intermediate process provides an unprecedented degree of stylization, giving rise to a host of creative potential. For this reason, it may be safe to ignore QC report notes that point out visual issues (not to mention certain editorial issues, such as single-frame edits), provided those effects were created intentionally.

Figure 12–11   Although two images may look the same, a difference matte can be created to highlight the differences

12.8 Summary

A number of technical considerations must be made to ensure that material is output at an acceptable level of quality and that it will be viewed in the manner intended. Fortunately, the digital intermediate pipeline offers a number of tools for checking for potential problems and for correcting problems that are found.

With the successful completion of the QC process, the digital intermediate process comes to an end. The following two chapters look at ways the digital intermediate process might develop, starting with a look forward to new technology and developments.

1 This scale is taken from CCIR Recommendation 500.

2 This undoubtedly will change as the digital cinema industry gains more prominence.

3 Some distributors’ guidelines also require that a particular video format be used in conjunction with a video system, such as a 625 Digital Betacam or a 1080p24 HDD5.

4 Although they may still require a 16:9 deliverable.

5 The reasons are largely historical. A degree of “overscan”—that is, picture area cropped from the display—was created so that during power fluctuations, which used to shrink the picture onscreen, the resulting, shrunken picture would still fit the screen. Today, television manufacturers use overscan to compensate for productions designed around this limitation, which in turn gives rise to the need to compensate for the overscan.

6 Sometimes material is output to video with the assumption that a certain crop will be applied (i.e., the entire picture is output but is framed for a particular image area—anything outside of this area should be ignored). However, in these situations, it’s recommended that material always be output already cropped to avoid potential confusion.

7 This can be controlled by shooting at a very high frame rate and then combining a proportion of frames to produce a desired frame rate. The method used to combine the frames affects the amount of motion blur and has the added benefit of reducing random noise (when used in conjunction with image-averaging methods). This topic is covered in greater detail in Chapter 14.

8 This description is somewhat oversimplified; entire books are devoted to defining proper exposure.
