7

Conforming

In the previous chapter, we saw how a well-executed asset management system ensures that all the data for the digital intermediate—the digital images and the files supporting them—are intact and accounted for. However, the data doesn’t yet have any structure, coherence, purpose, or meaning. It must be organized to form the fully edited final piece. This is achieved by assembling the required sequences, shots, and frames, as determined by the production’s editor, in a process known as “conforming.”

In this chapter, we examine the conforming process’s capability to recreate a program that matches the production editor’s cut. We’ll discuss the various methods for conforming a program digitally, and the digital intermediate pipeline’s features that make this process a smooth one. But first, we’ll take a look at the way conforming is conventionally done for video and film productions.

7.1 Pulling it All Together

In most productions, especially large-scale productions, the editor rarely creates the finished product directly. What usually happens is that the editor (or editing team) works with a copy of all the footage to make a final edit. Because editors don’t normally have to work with footage at maximum quality to edit the program, they tend to work with lower-resolution copies, which greatly reduces the cost and time incurred by working with maximum-quality images.

Almost all editing systems in use today are “nonlinear,” meaning that projects don’t have to be edited in chronological order. With linear-editing systems, edits must be made in the order that they will be shown; with a nonlinear system, you can cut a scene at the end of the show and then go back and work on the beginning. In addition, most editing systems are video-based or digital video-based, even when shooting and finishing on film, because greater creative and productive effort is possible. Video is easier to manipulate than film, and it’s much cheaper to run off copies. The downside is that the quality is significantly lower, but by conforming, you can simply take the edit made to the video version and apply it to the original film, which results in a maximum-quality version that matches the video edit frame for frame.

7.1.1 Conforming from Video

It’s perfectly feasible to edit original video material directly and produce a video master without including a separate conforming stage. The requirement to conform video arose because certain editing systems used internal compression to increase storage space and make editing video a smoother, more interactive experience. This compression significantly reduced the quality of the footage within the editing system; when the edit was completed, the editing system’s output wasn’t of sufficient quality to be suitable for the creation of the video master. A separate editing process was born, that of the online edit. The purpose of the online edit is to match the final cut produced by the editor during the offline edit, the process of cutting the show from the raw footage, to the original, high-quality source tapes, resulting in a maximum-quality, conformed version that perfectly matches the offline edit. Additional processes can then be applied, such as color grading, titling, and adding effects—processes that can’t be applied to the lower-quality offline version or can’t easily be translated between the two editing stages.

For the online edit to accurately match the offline edit, we have to know several things about the offline edit. For each frame in the edit, we need to know which tape it originated from (i.e., the source tape), where it’s located on that tape (i.e., the source position), and where in the final program it’s going (the record position). For the purpose of the offline edit, we assume that each program is to be output onto a single tape.1 This may seem a very simple way to describe a final edit, especially when it may take several months to complete the offline-editing process, but it’s usually sufficient to re-create the entire edit from the source tapes. After all, the ultimate purpose of the offline edit, in terms of the visual content, is to determine the footage’s order and timing.

The process of going through the source material and matching it to the offline edit is actually very simple, provided that the correct procedures were followed by the production crew in creating the source material. Every source tape should be given a unique identifier, typically a reel number, such as #1033. Adding the number can be done after the footage is recorded onto the tape or before the editor captures the footage in the editing system. The offline-editing system itself must have a way of identifying each tape upon capturing from it. This is normally achieved by asking the editor for a reel number each time a new tape is inserted. The offline-editing system is thus able to track, for each frame of footage, the tape it originated from.

When tracking each frame to a precise location within a tape, editing systems rely on the video timecode recorded along with the picture. The timecode is essentially a system for counting frames, but it usually divides the frames into hours, minutes, and seconds, as determined by the format’s frame rate. For instance, a frame that is 250 frames from the start of a PAL tape (at 25 frames per second) occurs 10 seconds in. The timecode for this frame would be recorded as 00:00:10:00.
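The frame-counting arithmetic is simple enough to sketch in a few lines. The helper below is illustrative and assumes a whole-number frame rate such as PAL’s 25fps; NTSC drop-frame timecode complicates the conversion and isn’t handled here.

```python
# Convert an absolute frame count into an HH:MM:SS:FF timecode string.
# Assumes a whole-number frame rate (e.g., PAL at 25fps); NTSC drop-frame
# counting is deliberately not handled in this sketch.

def frames_to_timecode(frame, fps=25):
    ff = frame % fps
    total_seconds = frame // fps
    hh, rem = divmod(total_seconds, 3600)
    mm, ss = divmod(rem, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"
```

So the 250th frame of a PAL tape yields `frames_to_timecode(250)`, which is `"00:00:10:00"`, matching the example above.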

When recording a tape, the timecode can be arbitrarily assigned. That is, the timecode can be set to start at 00:00:00:00 or 12:00:00:00. It can be synchronized to another source, such as a timecode from a music track or even the time of day. The only requirements are that the timecode must run continuously (in a regular, linear fashion) and that it must be unique within that tape (so that no given timecode occurs twice on the same tape). Video productions also tend to shoot each new tape with hourly increments to the timecode, so that the first tape starts at 01:00:00:00, the second at 02:00:00:00, and so on. Timecodes with hourly increments are useful in tracking and checking, because they complement the reel number assigned to the tape (which is much easier to change or misread).

The editing system also tracks the timecode for captured footage, as it does with the reel number. Most offline-editing systems do this automatically by reading the recorded timecode directly from the tape. The online-editing system actively “seeks” a desired timecode on a tape by reading the timecodes as it spools through the tape. Together, the timecode and unique reel number information attached to each frame make every single frame in the production individually accountable.

With the steadily increasing power and capacity of modern editing systems, many video productions eschew the need for separate online and offline-editing systems and can edit throughout at a maximum (or at least acceptable) level of quality. Many higher-end productions, such as music promos or commercials, have an online-editing stage purely to take advantage of the sophisticated effects and titling tools available. At the present time, however, the need to separate the offline and online edits is growing as more productions turn to high-definition video creation. HD editing requires a much higher level of resources than SD video editing, so many productions are choosing to offline edit the HD material at a much lower resolution, typically at SD video quality, before online editing from the original HD source material.

Offline Edits From Dubbed Tapes

It’s likely that the offline edit wasn’t created from the original tapes but instead from copies (or dubs), which may also have gone through a down-conversion or format-conversion process. It’s vital that these dubbed tapes retain all the timecode information in the originals, and that they’re given the same reel numbers as the originals. This ensures that the footage is correctly tracked during the offline edit.

In addition, all tapes used for both the online and offline edits should be free from timecode “breaks,” where footage loses timecode information or the timecode goes out of sync with the picture (but the picture content remains intact) and should have sufficient run-up time before (known as “pre-roll”) and after (“post-roll”) each recording. Problems with either timecode breaks or insufficient pre-roll or post-roll can usually be corrected by dubbing the footage onto a new tape.
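The continuity requirement can be checked mechanically: a break is any point where the timecode read from tape fails to advance by exactly one frame. A minimal sketch (function names are illustrative; real capture hardware typically reports breaks itself):

```python
# Flag timecode "breaks": positions where the timecode read from tape does
# not advance by exactly one frame from the previous reading.

def timecode_to_frames(tc, fps=25):
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def find_breaks(timecodes, fps=25):
    breaks = []
    for i in range(1, len(timecodes)):
        step = (timecode_to_frames(timecodes[i], fps)
                - timecode_to_frames(timecodes[i - 1], fps))
        if step != 1:                  # timecode must run continuously
            breaks.append(i)
    return breaks
```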

7.1.2 Conforming from Film

Film material is almost never edited directly. The overriding reason, out of several possible ones, is that handling an original negative tends to degrade it (the more you handle film, the more you subject it to damage such as scratches) and places it at risk. Original video tapes can also be damaged by excessive handling, but the risk is much lower than with film. Because of this, editing film always involves two separate processes: creating the desired cut and applying that cut to the original negative.

Before the rise of video-editing systems, film editors first made copies of the original negative. Duplicating film is usually done optically, exposing a new piece of film by shining light through the original. Exposing film usually inverts the picture (black becomes white and vice versa)—hence, the term “negative.”2 Therefore, copying a negative produces a positive (i.e., a print), and copying that positive produces another negative. Due to generation loss, the new negative is of lower quality than the original. A limited number of positives (usually just one) are created from the original negative to avoid handling the original more than necessary. All other prints used for editing and viewing purposes (i.e., work prints) are usually created from a negative that is created from this positive, just as an offline-video edit normally produces a lower-quality picture than the original source video.

A film editor makes cuts using scissors and glue, either with prints (which can be viewed directly on a projector or a desk-based viewing device, such as a Steenbeck) or negatives (in which case a print is created from the cut negative). As with offline video-editing, once the final cut is complete, the edits are matched back to the original negative, which is cut in the same way to produce the finished movie. For this process to work correctly, there must be a way to relate the cuts made to the copies back to the original, without having to rely on an editing machine to track changes. Fortunately, motion picture film provides an easy way to do this.

Every frame of motion picture film created for use in cameras has an associated serial number called a “key number.” The key number is simply a number that runs along the outside edge of the film that denotes the film stock and batch number, but most importantly, it assigns a completely unique code to every frame of film ever produced. Unlike video tapes, where two different tapes can share the same timecode, key numbers give each frame a value that distinguishes it from every other frame. You could compare all the film footage shot for the entire “James Bond” series and not run into a duplicate key number. And because key numbers are exposed onto the film (just like the actual image), they survive being copied onto new reels of film.

So, for a film editor to accurately match the final edit to the original negative, the editor creates a list of key number values for the cut. The same cuts can then be made to the original negative, ready for processing and duplication for cinema release. However, cutting film in this way, although simple, isn’t particularly fast or efficient. It’s a nonlinear process (i.e., you can make changes to previously made edits), but it requires careful organization. When you want to find a particular shot, you either have to know exactly where it is or have to wade through large bins of film reels. And thus, the film-editing paradigm is merged with the video-editing paradigm to some degree. Rather than generating new reels of film for editing, the original negative is telecined to a video format suitable for editing, usually as soon as it has been processed.3 Then the negative is safely stored away, and the editor can use a video-based editing system to cut the movie together.

Modern motion picture film normally includes machine-readable key numbers stamped next to the printed key numbers, so suitably equipped telecine machines can read them and “burn” them into the picture, permanently displaying them over the footage. In addition, a log file can be created to marry the video tape’s timecode with the film’s key numbers. The editing system reads in the log file when capturing the corresponding video footage and ties the footage to the key number internally. Once editing is complete, the editing system produces a list of cuts with corresponding key numbers. The original negative can then be cut together using this list to produce an identical cut at the highest possible quality. Clearly many parallels exist between using this system and using the offline/online video-editing process.
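The log file’s job can be pictured as offset arithmetic: given the key-number position at the start of a transfer, any later frame maps to a key-number position by counting frames. The sketch below assumes 4-perf 35mm film (16 frames to the foot) and a continuous, one-to-one transfer; real telecine logs and real key numbers carry considerably more detail (stock prefix, roll, and so on).

```python
# Advance a 35mm key-number position, expressed as feet+frames, by a number
# of frames. Assumes 4-perf 35mm film, which runs 16 frames to the foot.

FRAMES_PER_FOOT = 16

def advance_key_number(feet, frames, offset):
    """Key-number position `offset` frames beyond feet+frames."""
    total = feet * FRAMES_PER_FOOT + frames + offset
    return divmod(total, FRAMES_PER_FOOT)
```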

However, there are a few issues with this method. First of all, key numbers aren’t necessarily accurate to exact frames. This is because key numbers are only marked every four frames, and it may be impossible to determine the precise frame for small edits. The biggest problem concerns the difference in frame rates between film and video.

7.1.3 Pulldowns

Film runs at 24 frames per second, whereas NTSC video runs at 29.97fps. Played frame for frame, the video version would run roughly 25% faster than the film version, which causes problems, especially when trying to sync sound to picture. So that the picture plays back at the same speed on video as on film, you must make a frame-rate conversion. The most common method for doing so is the 2–3 pulldown. Using this method, five frames of video are used for every four frames of film, so that the first frame of film is recorded onto one frame (i.e., two fields) of video, and the next frame of film is recorded onto three fields of video; this process is repeated for subsequent frames.4 The order of this cycle is known as the “cadence.” The net result is that the video’s effective frame rate becomes 23.976 frames per second.5 Although this process may not seem too complicated at first, it can be difficult to work out exactly which frame of video corresponds to the same frame on film by the time the final edit has been completed. That’s because it’s perfectly possible to cut on a frame that actually corresponds to two frames of film or the previous or next frame from the one expected, depending on the cadence. The process of working out how a video frame corresponds back to the film is known as an “inverse telecine” and is normally handled automatically by the editing system. The output of the editing system may, however, require some tweaking by the editor. It’s likely that this problem will be greatly reduced by new 24p video-editing systems, which run at the same speed as the film (and without any interlacing).
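The bookkeeping of a 2–3 pulldown can be made concrete. In a cadence starting on an “A-frame,” ten video fields carry four film frames, and two of the five resulting video frames mix fields from adjacent film frames. A sketch of the inverse mapping, assuming the cadence starts at the head of the sequence:

```python
# For each position in the five-frame 2-3 pulldown cycle, the film frame(s)
# whose fields make up that video frame. Fields run A A B B B C C D D D,
# so successive video frames contain (A,A) (B,B) (B,C) (C,D) (D,D).
CADENCE = [(0, 0), (1, 1), (1, 2), (2, 3), (3, 3)]

def video_to_film(video_frame):
    """Return the film frame(s) contained in a given video frame."""
    cycle, pos = divmod(video_frame, 5)
    first, second = CADENCE[pos]
    base = cycle * 4                 # four film frames per five video frames
    return (base + first, base + second)
```

A video frame whose two entries differ is a “mixed” frame, and cutting on it is precisely what makes the inverse telecine ambiguous.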

Figure 7–1   Film material must go through a pulldown to allow editing on video and an inverse telecine to match the edited video back to the original negative

7.1.4 EDLs and Cut Lists

Both video and film final cuts can be described as a simple list of “events” (i.e., individual, uninterrupted elements of footage, such as a shot that runs for five seconds before another one is cut to). For video, these events are recorded in edit decision lists (EDLs), which contain source reel, source timecode, and record timecode information for each event. For film, a cut list contains the key numbers for the first and last frame of each edit. It may contain additional information, such as the camera reel number, the scene and take numbers, and the equivalent video timecode. Each list is designed to be machine readable as well as human readable, with each edit described in plain text on one line. As with many components of post-production, dozens of different formats are designed for use with different systems. A system usually is able to interpret only specific formats, which can potentially lead to compatibility issues. Fortunately though, most EDL formats are structured fairly simply, so it’s usually a trivial matter to convert between formats, even if this must be done manually.
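Because each event sits on one plain-text line, parsing an EDL is straightforward. The sketch below handles only a simplified CMX-3600-style cut event; real EDLs add comment lines, motion-effect lines, and per-system quirks (the Appendix contains an actual example).

```python
# Split one simplified CMX-3600-style event line into its named fields.
# This is a sketch for a plain cut event, not a full CMX 3600 parser.

def parse_event(line):
    fields = line.split()
    return {
        "event": int(fields[0]),
        "reel": fields[1],           # source reel number
        "track": fields[2],          # e.g., V for video
        "transition": fields[3],     # C = cut, D = dissolve, W### = wipe
        "src_in": fields[4],
        "src_out": fields[5],
        "rec_in": fields[6],
        "rec_out": fields[7],
    }
```

Given the line `"001  TAPE01  V  C  01:00:10:00 01:00:15:00 00:00:00:00 00:00:05:00"`, this yields event 1: five seconds from reel TAPE01 placed at the head of the program.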

The entire feature’s final cut can be expressed in a thousand lines or so (depending on the number of edits), which makes the EDL perhaps the smallest piece of data in the whole filmmaking process, as well as one of the most important pieces.

The Sony CMX 3600 EDL Format

Since EDLs are so crucial to post-production, it’s important to be familiar with them. Although many different formats are available, the Sony CMX 3600 is probably the most widely used. The Appendix contains an example of a CMX 3600-formatted EDL, as well as a breakdown of its structure.

7.1.5 Transitions

Transitions are used to bridge two different shots together. The simplest and most common type of transition is a cut. A cut involves the first shot ending on one frame, followed by the new shot starting on the next frame. Skillful editing hides this somewhat harsh transition. The audience accepts the drastic change in the picture without even realizing it.

At times, a cut doesn’t work or may ruin the pacing of the picture. Editors have at their disposal several alternatives to using a cut, whether working on a film-based project or a video-based one. A fade-in (or fade-up) involves gradually raising the level of the shot over the length of the transition. At the start of the shot, the picture is completely black, and the level increases until the complete picture is visible, which occurs at the end of the transition. At that point, the shot continues playing normally. The opposite occurs in a fade-out: as the shot nears its end, the image level decreases, reaching black on the last frame. An extension to this process and the most common type of transition after a cut is a dissolve.

A dissolve gradually drops the level of the outgoing shot while raising the level of the incoming shot. The result is that the first shot seems to blend into the second. The duration of the dissolve can be varied, providing that there is enough footage for the overlapping area. The longer the dissolve, the more footage is required at the end of the outgoing shot and at the beginning of the incoming shot.6
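Arithmetically, a dissolve is a per-frame weighted average of the overlapping footage. In the sketch below, “pixels” are single numbers for clarity; a real system applies the same weighting to every pixel of every channel.

```python
# Dissolve between two overlapping shots: the outgoing shot's weight falls
# from 1 to 0 across the transition while the incoming shot's rises from 0 to 1.

def dissolve(outgoing, incoming, length):
    """Blend two equal-length frame sequences over `length` frames."""
    frames = []
    for i in range(length):
        t = i / (length - 1) if length > 1 else 1.0  # 0.0 -> 1.0 across the dissolve
        frames.append(outgoing[i] * (1 - t) + incoming[i] * t)
    return frames
```

A fade-out is the same calculation with a black (all-zero) incoming shot, and a fade-in the reverse.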

Wipe effects (or wipes) are similar to dissolves in that two adjacent shots overlap. Whereas a dissolve blends together two complete images, a wipe superimposes one shot over the other using an animated pattern. Over the course of the wipe, the incoming shot increasingly dominates the overall picture. For instance, using one of the simplest wipe effects results in the new image appearing to “slide in” over the previous shot.

Most transitions can be encoded into an EDL and should carry over from the offline edit to the online edit. In addition, the online-editing system invariably has methods for fine-tuning the transition. For example, dissolves may use different methods to blend together two shots, and you can adjust the rate at which the images dissolve. Wipes are usually specified by SMPTE-standard codes to determine the pattern used, but they too can normally be manipulated during the online edit to create more specific patterns.

Figure 7–2   A dissolve effect smoothly blends two shots

Figure 7–3   A wipe effect gradually slides in one image over another

The reason dissolves and wipes are normally fine-tuned during the online edit rather than the offline edit is that such detailed settings aren’t carried over into the EDL. The EDL records only the transition type and duration (and the base pattern number in the case of wipe effects).

7.1.6 Motion Effects

In addition to changing transitions, editors also have the capability to adjust the speed of a shot; this process is often referred to as “speed changes.” A shot can be set to run slower or faster than its original recording speed. For instance, film footage that was recorded at 24fps can instead be played back at 48fps (200% speed), making the footage appear to move twice as fast. Or the film can be played back at 12fps (50% speed), making the footage appear to be moving in slow motion. It’s also possible to completely reverse a shot. The actual mechanics of this process vary according to the online-editing system. The simplest method for performing a speed-change operation is to repeat or drop, as required, a proportional number of frames from the shot. So, a shot at 24fps that is speed-changed to 200% might drop every other frame, while the same shot at 50% speed might repeat each frame once.
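The drop-or-repeat method reduces to picking, for each output frame, the source frame at index ⌊i × speed⌋. A minimal sketch:

```python
# Constant speed change by dropping or repeating frames: output frame i is
# taken from source frame floor(i * speed). speed=2.0 drops every other
# frame; speed=0.5 repeats each frame once, as described above.

def retime(frames, speed):
    out = []
    position = 0.0
    while int(position) < len(frames):
        out.append(frames[int(position)])
        position += speed
    return out
```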

Changing the speed of a shot need not be linear. Rather than just applying a constant speed change to a sequence of frames, it’s also possible to apply acceleration or deceleration to footage. This process, known as “time-warping,” gives the editor greater control over the way a shot progresses. Using time-warping software, it’s possible to start a shot in slow motion, gradually build up its speed until it’s playing faster than originally recorded, and then even reverse it.

When performing a speed change, the resulting motion might not look smooth, especially when the scene contains a lot of fast motion. A more suitable option under these circumstances is to use motion interpolation software to generate new frames. Motion interpolation can be performed using several different options. The simplest involves blending frames. The amount of blending can be controlled to “weight” the motion and in doing so, can approximate motion between frames. A more advanced method involves the use of a motion analysis solution, such as RE:Vision Effects’ Twixtor software (www.revisionfx.com), which can provide an even greater level of accuracy by breaking down and analyzing the movement of objects within a frame.
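Frame blending refines the drop-or-repeat approach by sampling the shot at non-integer positions and weighting the two nearest frames by the fractional part, roughly approximating the in-between motion. (Again, single numbers stand in for whole images; this is a sketch of the blending idea, not of motion-analysis software such as Twixtor.)

```python
# Retime with frame blending: sample at fractional source positions and
# blend the two neighboring frames, weighted by how far between them the
# sample falls.

def retime_blend(frames, speed):
    out, position = [], 0.0
    while position <= len(frames) - 1:
        lo = int(position)
        hi = min(lo + 1, len(frames) - 1)
        t = position - lo
        out.append(frames[lo] * (1 - t) + frames[hi] * t)
        position += speed
    return out
```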

Another simple motion effect is the freeze frame (or frame hold). This effect is created by repeating a single frame for a predetermined length of time.

It’s possible to combine motion effects with different transitions—for example, a freeze frame might be combined with a fade-in, or a speed change with a dissolve. Again, the EDL contains only a limited amount of information about motion effects: typically just the length of the effect and the new speed. It’s also worth noting that it may not be possible to create certain motion effects optically with film. Specifically, it isn’t possible to repeat frames (or even entire shots, as in flashback sequences) without first duplicating the required frames. Video (and data), on the other hand, can be duplicated by the online-editing system.

Figure 7–4   RE:Vision Fx’s Twixtor Pro Software (shown here with Autodesk’s Combustion Interface) enables you to retime digital footage in a nonlinear manner—in this case, compensating for interlacing and motion blur

7.1.7 Handles

In creating a production, it’s inevitable that more footage will be shot than is used in the final edit. Typically the ratio of the total length of all footage shot to the length of the final cut (i.e., the shooting ratio) can be 10:1 or higher. This is mainly due to the number of takes and multiple camera angles used within a single scene. But even within a single take, it’s rare that the entire duration of the filmed footage will be used. For every shot in a film, footage filmed before and after usually is edited out.

During post-production, editors like to allow themselves a margin of error—that is, if at some point in the future they need to lengthen a shot by a few frames, they make sure that the extra frames are available on the editing system. These extra frames are called “handles,” and for every event cut into the final picture, an extra number of these handle frames usually is available before and after. Editors normally have 4–15 handles per event during the final stages of editing, depending on the requirements (and budget) of the production.
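Handle bookkeeping is a small calculation: extend each event’s source range by the handle count, clamped to what actually exists on the reel. (Frame numbers here are illustrative.)

```python
# Extend an event's source range by a number of handle frames on each side,
# without running off the ends of the source reel.

def range_with_handles(src_in, src_out, handles, reel_first, reel_last):
    return (max(reel_first, src_in - handles),
            min(reel_last, src_out + handles))
```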

7.1.8 B-Rolls

Before the days of nonlinear video-editing, editors (both for film and video) had to put all material needed for effects such as wipes and dissolves onto a separate tape (or film reel), known as the “B-roll.” The first part of the effect (i.e., the outgoing shot) is cut into the program as usual (the A-roll). Both rolls also contain the extra frames for the overlapping section. When the time comes to create the effect, the A-roll and B-roll are combined onto a separate tape, which then has the completed dissolve or wipe. B-rolls are also used to hold the original-length shot prior to a motion effect. Though B-rolls are not necessary for modern nonlinear-editing systems, they’re still a necessary stage for editing film, and as we shall see later, they may also be a requirement for digital-conforming systems.

7.2 Digital Conforming

Digital-conforming systems, particularly those used in the digital intermediate process, share many features with nonlinear video online-editing systems, while allowing film-based projects to take advantage of many of the benefits of video-based projects.

7.2.1 Digital Conform Systems

Although many manufacturers of digital-intermediate-conforming systems strive to include a high number of features, the actual requirements for a viable data-conforming system are very limited. The fundamental requirement of a digital-conforming system is that it must be able to reorder a set of digital image files according to the final offline edit (although even this process may not be necessary in certain situations) and output them without altering the content in any way. A simple command-line computer program is sufficient to satisfy these requirements, and in fact, this capability might be integrated into the data management system used by a digital intermediate facility.7 In practice though, conform systems are feature rich, incorporating an iconic or other graphical representation of the conformed data, as well as providing many other options available to video-based online-editing systems.
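That fundamental requirement (reorder the image files, alter nothing) really can be met by a few lines of code. A sketch in which each event supplies its frame files and a record-in position; the file names are hypothetical, and a real system would copy or link the files via the asset management layer rather than shuffle names in memory:

```python
# A minimal conform: place each event's frame files at its record position
# and emit the frames in record order. The image data itself is untouched.

def conform(events):
    """events: iterable of (frame_files, rec_in) pairs -> record-ordered files."""
    timeline = {}
    for frame_files, rec_in in events:
        for offset, name in enumerate(frame_files):
            timeline[rec_in + offset] = name   # later events overwrite earlier ones
    return [timeline[i] for i in sorted(timeline)]
```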

In practice, the digital conform tends to be an ongoing process. Throughout the digital intermediate pipeline, shots within the program are constantly being updated or replaced, and new material is constantly being acquired into the pipeline.

Digital-conforming systems can be separated into two broad categories: the “modular” system and the “integrated” system. A modular digital-conforming system relies on other components of the digital intermediate pipeline to perform various tasks. In a modular system, data is input into the system (usually through the data management system), which then conforms it and outputs the conformed data (or specific shots and frames) to a number of other systems. For instance, an external system might apply effects, such as a dissolve or a filter, to footage supplied to it by the conforming system (possibly via the asset management system), and then supply the newly generated footage, with the desired effect applied, back to the conforming system. Similarly, the conforming system might constantly bounce shots back and forth between the color-grading system (assuming color grading is being used on the production) and any restoration systems in use, continually replacing the original acquired material with color-graded finals. Perhaps the conforming module is the final stage in the pipeline, in which case, once everything has been completed, the conforming system sends the final, conformed data to an output module.

This method provides some distinct advantages, particularly the ability to incorporate collaboration into the pipeline. One operator can be dedicated to a particular function, such as conforming the data, while another is responsible for the color grading, and so on. This also means that operators with particularly strong skills in certain areas can be more effectively utilized. There are a few drawbacks to using this method. First, it can be a slow process, because constantly transmitting data between different modules requires time and computer resources, including additional disk space. By extension, this method is also more expensive to implement, because each subsystem is usually installed on a separate workstation. Finally, it may be difficult for the system to enforce priorities, so that the modules that require full, unrestricted access to the latest data always have it.

The integrated system takes the opposite approach to conforming data. Rather than assigning different tasks to specialized subsystems, the conforming station also provides the required controls for all the necessary tasks. For example, in addition to being able to assemble frames and shots in the correct order, the same system can also provide tools to color-correct and apply effects and other processes. It’s still possible to incorporate networking and multioperator paradigms when using this method; it just means that each operator has all the required tools for performing any task. Successfully utilizing this approach usually requires incorporating strict access-control procedures, to ensure that the right operators have access to the right data at the right time, and that one operator’s access privileges don’t conflict with another’s. This approach tends to be best suited to smaller-scale digital intermediate productions, because each workstation normally incurs considerable expense, due to the abundance of software-based tools and the high-performance hardware that might be needed to run them. But using such a system, even a relatively small team can produce results just as good as those produced by a larger, module-based team, possibly in less time.

Figure 7–5   With a modular-conforming system, tasks such as grading are handled by separate systems

An extension of this paradigm is having a single all-in-one solution. For small or low-budget productions, it may be possible to complete the digital intermediate on a single system. As an extreme example, an editor who has completed the editing of a DV production has, in effect, conformed the data at the maximum resolution already (most DV-editing systems capture DV at full quality). The editor is usually able to use the same (offline) editing system to perform similar (if not as fully featured) operations on the footage, such as rudimentary color correction and titling. Using this single system, an entire production can be edited from scratch, finished digitally, and even output to a variety of formats.

Figure 7–6   With multiple integrated-conforming systems, each system has a full set of tools for working with images

Figure 7–7   With an all-in-one system, a single workstation is used to finish a production

There are good reasons this approach isn’t applied to larger-scale productions: for one thing, it takes a lot longer and requires a highly skilled operator with a multitude of capabilities. But it demonstrates what is actually possible. In practice, the conforming system inevitably is some combination of the two paradigms, which may in turn depend upon the production’s requirements.

A popular compromise is to combine the conforming system with the color-correction system, so that both systems take precedence over all other components in terms of resources. In addition, an operator is provided with the features of both and the ability to work with the most up-to-date versions of each shot.8 For example, Autodesk’s Lustre system (www.discreet.com) combines data conforming, color grading, and effects capabilities with restoration and output. In addition, it has the capability to directly acquire video into the system. Such systems typically can send data to external modules if required.

In addition to the specifics of the conforming system, consideration must also be given to how the data is to be conformed.

images

Figure 7–8   Autodesk’s Lustre can be used as an end-to-end digital intermediate system

7.3 Digital Conform Paradigms

Each of the many different methods for conforming data is typically best suited to a certain scale of production. Most are driven by EDLs or cut lists, although simpler conforming systems may require manually created edits, which isn’t recommended for lengthy productions containing numerous events.

7.3.1 Fine-Cut Conforming

The fine-cut paradigm is the simplest to implement. It assumes that all the source material fed in is already in the correct, final-cut order. When this is the case, all the data is supplied without any handles, with each output reel matching exactly what has been supplied, cut for cut. Conforming a reel of this type merely involves loading the supplied data into the conforming system. Because the supplied data is already in the correct order, no additional operations are required. The conforming system effectively treats the entire cut as one long shot. Therefore, it may be necessary to divide the conformed data back into its original edits, particularly when more work has to be done on individual shots. When this is the case, the conforming system is responsible for distributing individual shots to other systems—for example, for color grading. Segmenting data back into original cuts may be done either manually, by inputting the cut points directly into the conforming system, or automatically, with the use of a production-supplied EDL of the final cut or the use of a cut-detection algorithm, which detects sudden changes in the contents of the picture and marks them as cut points.
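A cut-detection algorithm of the kind mentioned above can be sketched in a few lines of Python. The frame representation (flat lists of normalized pixel values), the function name, and the threshold are all hypothetical; real systems use far more robust difference measures:

```python
def detect_cuts(frames, threshold=0.3):
    """Flag a cut wherever the mean absolute pixel difference between
    consecutive frames exceeds the threshold."""
    cuts = []
    for i in range(1, len(frames)):
        prev, cur = frames[i - 1], frames[i]
        diff = sum(abs(a - b) for a, b in zip(prev, cur)) / len(prev)
        if diff > threshold:
            cuts.append(i)  # index of the first frame of the new shot
    return cuts
```

A sudden, large difference between two adjacent frames is taken as a cut point; gradual transitions such as dissolves defeat this simple version, which is one reason a production-supplied EDL is the more reliable route.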

For dissolves and other transition and motion effects, the effects may have already been rendered and included within the supplied data, in which case no additional work is required. However, any shot supplied with dissolves or wipes baked in can’t be separated into its constituent elements—meaning that the dissolve parameters (e.g., the duration or profile) can’t be altered, and the two adjacent shots must be treated as a single shot. Similarly, motion effects can’t be undone; they can be retimed, but doing so normally results in a reduction of temporal quality.

Alternatively, the fine-cut data may be supplied as an A-roll, with a separate B-roll containing the additional elements, in which case, the conforming system must either create the desired effect itself or include an effect completed by some other system—for example, a visual effects workstation. The inclusion of such effects can be managed by the use of EDLs, or it may be more convenient to make the inclusions manually.

Material is usually supplied as fine cut for film-based projects where the original negative has already been consolidated, or for video-based projects where the video has already been through an online stage (including situations where the offline-editing system works at maximum quality and is therefore able to output the final cut piece at maximum quality). The production may have been previously considered finished, but for some reason, it currently requires going through the digital intermediate process. A good example of this case is productions that are digitally remastered, meaning that the cut remains the same as the original but then undergoes the color-grading and restoration process digitally, before being output to the required formats.

However, the extremely limited amount of provided footage means that it isn’t possible to add any extra footage (such as for a recut) without providing separate material (and hence incurring additional expense and requiring more time).

images

Figure 7–9   When conforming from fine-cut material, little or no editing is necessary

7.3.2 Conforming from Original Source Reels

An alternative approach is to conform a program from all the original footage that was shot. The original source reels (also referred to as “dailies” or “rushes”) can be acquired into the digital intermediate system so that the material available to the conforming system matches the material available to the offline-editing system. Assuming the same naming conventions (reel numbers and timecodes) were used on both the offline-editing system and within the digital intermediate pipeline (as they should be), the EDL generated by the offline-editing system can be used by the conforming system to automatically and accurately select the correct footage from the source material and cut it together within the digital intermediate. One of the benefits of this approach is that the acquisition stage can begin as soon as the footage is created. As soon as any material is shot (and processed, if necessary), it can be acquired into the digital intermediate system, ready to be conformed once the editing is complete. Doing so also facilitates the creation of so-called “digital dailies”—that is, digital copies of the filmed material for viewing purposes (e.g., output to video tape or transmitted via the Internet). However, it may be far too expensive to store so much footage, especially if it won’t be accessed for a long time (the gap between shooting the footage and completing the edit is typically several months). Not all of it need be acquired straight away, of course, but by the time the edit is nearing completion, most of the original footage (around 90% of it) is redundant.

To solve this problem, it’s possible to acquire only the needed footage, but this solution can still be time-consuming, because many reels of footage must be gone through, with each reel containing a small amount of relevant material. The act of physically changing the reels, combined with the potentially slow acquisition speed of the system (particularly in the case of film-based productions), largely contribute to the time requirements. On top of that, many issues are associated with physically storing the original footage at the digital intermediate facility doing the acquisition—issues such as insurance expenses to cover theft or damage.

7.3.3 Conforming from Master Source Reels

A good compromise between both of these situations is to create “master source reels,” which are basically reels that contain a compilation of all the footage required for the final cut (and usually include some handle frames as well). Creating master source reels greatly reduces the total amount of footage to be acquired and managed, while retaining enough flexibility to allow some degree of recutting.

images

Figure 7–10   When conforming from rushes, the material must be edited to separate final shots from unused material

The difficulty in implementing this scenario successfully is that additional EDLs must be generated that reference the new master source reels as if they were the original material. When a new master source reel is created, the footage is given a new timecode (and reel number) to reflect the new location on a new tape (otherwise, a jumble of different timecodes would be on each new tape, and you would have no way of referencing the original reel numbers). This isn’t necessarily true for film-based projects that are conformed using key numbers, because the key numbers remain intact even after being transferred to another reel. But most film-based projects are actually conformed using a timecode reference, and therefore they’re subject to the same issues as video-based projects.

The new EDLs must be generated by taking the final (offline) EDL and replacing the source reel numbers and timecodes with the new reel numbers and timecodes as they appear on the new reels. This is a huge task to undertake if done manually; however, certain systems exist (in particular, those that can create the master source reels in the first place) to manage the EDL conversions automatically.
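The conversion amounts to remapping each event’s source reel and timecode into the coordinates of the master reel segment that now holds the footage. A minimal Python sketch, with timecodes expressed as frame counts and all field names hypothetical:

```python
def remap_event(event, segments):
    """Rewrite one EDL event so its reel and source timecodes refer to
    the master source reel segment that contains the footage."""
    for seg in segments:
        if (event["reel"] == seg["orig_reel"]
                and seg["orig_in"] <= event["src_in"]
                and event["src_out"] <= seg["orig_out"]):
            # Preserve the event's position within the copied segment
            offset = event["src_in"] - seg["orig_in"]
            length = event["src_out"] - event["src_in"]
            return {"reel": seg["new_reel"],
                    "src_in": seg["new_in"] + offset,
                    "src_out": seg["new_in"] + offset + length}
    raise LookupError("event not found on any master reel segment")
```

Running every event of the offline EDL through such a remapping yields the new EDL that references the master source reels directly.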

images

Figure 7–11   Conforming from master source reels requires scanning less material than working directly from rushes, and it affords more flexibility than fine-cut material

7.3.4 Conforming Data

As discussed previously, the preferred way to conform data is using an EDL. All the data within the digital intermediate system is indexed to distinguish each frame. A few methods can perform this process (they’re covered in Chapter 6). The most common method is to assign each frame a source reel number and timecode. This way, if done correctly, the data can be referenced in exactly the same way as the offline-editing system references its own source material. Thus, the generated EDL is as valid on the digital system as on the offline-editing system. Therefore, the conforming system can process the data, locate the equivalent digital frames, and sequence them correctly.
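In outline, the lookup might work as follows, assuming one folder per source reel, zero-padded frame-numbered filenames, and a 24 fps non-drop-frame timecode (all of these are illustrative assumptions, not any particular system’s conventions):

```python
FPS = 24  # assumed frame rate

def tc_to_frame(tc, fps=FPS):
    """Convert a non-drop-frame timecode string to an absolute frame number."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def frames_for_event(reel, src_in, src_out, fps=FPS):
    """Resolve one EDL event to the digital frame files it references
    (src_out is exclusive, as in an EDL)."""
    start, end = tc_to_frame(src_in, fps), tc_to_frame(src_out, fps)
    return [f"{reel}/frame_{n:07d}.dpx" for n in range(start, end)]
```

Because the offline edit used the same reel numbers and timecodes, each EDL event resolves unambiguously to a run of digital frames.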

Another method for conforming data is to organize the data on the disk according to the events in the EDL. Each EDL event corresponds to a different folder (which can follow the same numbering system of events as the EDL follows). The frames within each folder have to be ordered correctly, however, and managing handle frames might prove problematic. In addition, this method doesn’t allow for program recuts or replacement EDLs to be supplied at a later date.

The other alternative is to manually assemble the shots. The benefits of such a system are that you don’t need to maintain strict, consistent naming conventions, and in fact, this method can prove very fast in situations with only a few events (e.g., where fine-cut material has been supplied), provided the conformist is familiar with the production. Unfortunately, this method is prone to inaccuracies and tracking problems, and it becomes far too tedious for programs with many events, making it unsuitable for large-scale productions.

images

Figure 7–12   Data can be conformed by using an EDL to match the digital material to the offline edit by timecode and reel number

images

Figure 7–13   Data can be conformed by matching digitized shots to individual events in the EDL

Other options may be available. Systems that are specifically configured for it might be able to transfer data directly from the offline-editing system. If the data in the offline system is already at the maximum quality level, it may be possible to output the image data as a single file, “packaged” within an EDL of sorts. Loading this file into the conforming system automatically positions the images in the correct order, optionally including handle frames and keeping dissolve elements separate. The main reason this approach isn’t used very often is that there are many compatibility issues. In addition, it can take longer to transport the large amount of data than to simply transport an EDL and acquire the image data separately. However, software such as Automatic Duck’s Pro Import line (www.automaticduck.com) allows timelines created in offline-editing systems to be translated to finishing systems, rebuilding effects, transitions, and layers more seamlessly than EDLs can.

images

Figure 7–14   Data can be conformed by manually editing material together

7.3.5 Referenced Data

Because facilities tend to use different systems for each stage in a digital intermediate pipeline, particularly for large productions, a great deal of data transport and tracking is required, which can be slow and at times difficult to manage. The usual workflow is to designate a sequence within the conforming system that must be processed on a separate system, send it to that system, complete the effect, and bring it back into the conforming system, replacing the sequence that was there before. If further changes have to be made, the entire process must be repeated.

A much more powerful paradigm is to use referenced data. Using this approach, data isn’t copied back and forth between separate systems; instead, the conforming system uses pointers to the location of the desired footage. For example, let’s say you have a separate system for creating dissolves, outside of the conforming system. With data referencing, you send the two parts of the dissolve to the external system, which then applies the dissolve. Rather than conforming the new data back into the conforming system, a reference is created, pointing to the location of the completed dissolve. During playback, the dissolve appears in the correct place, and if any alterations are made to the dissolve, the changes propagate back into the conformed program without any additional work.

This concept can be expanded even further. It’s possible for every system to rely purely on references. In the previously mentioned example, the dissolve system can obtain footage directly from the conform system as a reference, so if changes are made to the original footage, these changes are automatically reflected within the dissolve system and then passed on directly back to the conform system as a completed dissolve.

Though such a system can potentially reduce the overall disk space requirements of the pipeline, in reality, many of the files have to be cached. Therefore, no storage space benefit is obtained. Without a caching mechanism, the process could end up being very slow, especially when many levels of referencing are used. In addition, it requires careful management, because it is all too easy to delete footage that seems to serve no purpose, only to find out later that it was referenced by a different process.

images

Figure 7–15   When using referenced data, complex shots can have many levels

Referencing systems tend only to exist in pipelines that have a common infrastructure, when facilities have designed their own digital intermediate systems themselves, or when facilities have bought a number of different systems from the same manufacturer who has included this functionality. Because no real industry standard for creating data references has been established, this capability is almost nonexistent among systems designed by different manufacturers; although occasionally, a third party steps in to supply software to bridge the gap between two systems.

7.3.6 Conforming Video Using Timecodes

Conforming video footage using timecodes is a fairly straightforward process. When an EDL is supplied to a capable conforming system, the system cross-references the reel numbers and timecodes in the EDL to the image data saved on the system. The precise mechanics of this process depend on the specifics of the conforming system and the data management paradigm. The most popular method involves storing footage within folders on the disk that correspond to the video reel the images were sourced from. Within the folder, each frame is assigned a frame number, from which a timecode can be derived using a simple formula (see Chapter 6). Because the offline edit is created using exactly the same timecode, the EDL produced can be input directly into the conforming system. (The EDL can’t be directly input into the conforming system when master source reels have been created as an intermediate step, in which case, EDLs must be generated specifically for this purpose.)
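The “simple formula” for deriving a timecode from a frame number is straightforward integer arithmetic. A sketch for non-drop-frame timecode at an assumed 24 fps:

```python
def frame_to_tc(frame, fps=24):
    """Derive a non-drop-frame timecode from an absolute frame number."""
    f = frame % fps                      # frames within the second
    total_seconds = frame // fps
    s = total_seconds % 60
    m = (total_seconds // 60) % 60
    h = total_seconds // 3600
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"
```

Drop-frame timecode (used with 29.97 fps NTSC material) complicates this formula, which is one reason the offline system’s frame rate must be known and accounted for.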

Scene Extraction

Some digital video systems have the capability of performing scene extraction, which enables the system to somewhat automatically divide a video reel into cuts. The way that this process works is by comparing the recorded time of day against the timecode. Where a break appears in the time of day (or a break in the timecode), such as when the recorded time of day jumps from 12:13:43:21 to 12:17:12:10, but the timecode remains continuous (e.g., going from 01:00:13:12 to 01:00:13:13), the assumption is that recording was stopped (or paused) to provide an opportunity to set up a new shot (or new take), and the conforming system adds a cut point to the footage. This simple, yet highly effective technique will presumably continue to be used across a number of different imaging systems and provides an efficient way to quickly separate shots in a reel. Unfortunately, no equivalent system exists for film reels (at least, not yet).
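The comparison described above can be sketched as follows. Each record pairs a time-of-day value with a record-run timecode, both expressed here as frame counts; the data layout and function name are hypothetical:

```python
def scene_breaks(records):
    """records: one (time_of_day, timecode) pair per frame, both as
    frame counts. A cut is flagged where the time of day jumps while
    the record-run timecode remains continuous."""
    cuts = []
    for i in range(1, len(records)):
        tod_step = records[i][0] - records[i - 1][0]
        tc_step = records[i][1] - records[i - 1][1]
        if tc_step == 1 and tod_step != 1:
            cuts.append(i)  # first frame of the new take
    return cuts
```

A jump in the time of day with no corresponding jump in the record-run timecode indicates that recording was stopped and restarted, so a cut point is added there.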

7.3.7 Conforming Scanned Film Using Key Numbers

When conforming scanned film, it may be more convenient to conform the image data automatically by the film’s key numbers, rather than another, more arbitrary parameter. Because each key number is unique, there is less chance that the wrong frame is loaded into the conforming system.

For this to work correctly, the digital-conforming system must be able to interpret an editor’s formatted cut list (which lists the edits in terms of key numbers rather than timecode). The system must also be able to extract key number information from the digital images to be loaded in. Key numbers are either embedded in each frame’s “header” or encoded within the filename.

At the present time, most film-based projects aren’t conformed automatically using key numbers due largely to the limited ability for encoding key numbers into scanned images, coupled with the limited support for conforming key numbers within most digital-conforming systems. In addition, the accuracy of key numbers can’t be guaranteed, particularly when the film is supplied as master source reels rather than an uncut, original camera negative.

7.3.8 Assigning Timecodes to Scanned Film

An effective, accurate method for digitally conforming film involves assigning a reel number and timecode value to each frame of film, as if it were a video tape. This process is usually as simple as assigning a reel number to each reel of film. Within each reel, a single “marker” frame is used as a reference point. This marker frame can be any frame on the reel, but for convenience, it’s normally located just before the start of the picture. The frame is marked physically in some way, normally by punching a hole through it. This punch-hole frame is then assigned a specific timecode (for simplicity, it’s usually assigned a timecode such as 01:00:00:00). The timecodes for every other frame on the reel can then be derived by counting forward (or backward) from the punch hole. The frame rate of the offline video-editing system must be taken into account so that both the offline and the conforming systems use identical timecodes.9 Then, it’s simply a matter of inputting the EDL produced by the offline system into the conforming system, in exactly the same way a video-based project is conformed.
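Deriving a timecode from the punch-hole reference is simple arithmetic: count the frames between the frame in question and the marker, and add that (signed) count to the marker’s timecode. A sketch at an assumed 24 fps, with a hypothetical function name:

```python
def film_frame_tc(frame_index, punch_index, punch_tc="01:00:00:00", fps=24):
    """Timecode for any frame on a film reel, counted forward or
    backward from the punched marker frame."""
    h, m, s, f = (int(x) for x in punch_tc.split(":"))
    punch_frames = ((h * 60 + m) * 60 + s) * fps + f
    n = punch_frames + (frame_index - punch_index)
    return "{:02d}:{:02d}:{:02d}:{:02d}".format(
        n // (3600 * fps), n // (60 * fps) % 60, n // fps % 60, n % fps)
```

Starting the punch frame at 01:00:00:00 rather than 00:00:00:00 leaves room to count backward into leader frames without producing negative timecodes.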

The downside to this approach is that the timecode system must be implemented before a telecine of the film is made for offline-editing purposes. Where this isn’t possible, or where master source reels are used for scanning (rather than the original, uncut camera negative), the offline EDL must undergo a conversion process to correlate the telecine source timecodes to the new timecodes generated by the master source reels. This can be a complex procedure. However, the negative-cutting facility responsible for compiling the master source reels is often suitably equipped to automatically create the new EDL.

Ultimately though, this system works only when the timecodes used for scanning the film correspond directly to the offline EDL supplied by the editor (even if timecode conversions must be done first).

Acquisition EDLs

EDLs can be used for many operations in the digital intermediate pipeline, in addition to accurately conforming a program. EDLs are commonly used to designate the specific frames required for acquisition. For instance, a “capture EDL” for video, or a “scan EDL” for film can be input into the system, listing only the timecodes for the footage that is required.

Most video capture systems have a “batch capture” mode. Using this mode allows multiple shots across multiple tapes to be compiled into an EDL. When ready, all the shots listed in the EDL are captured into the system at once. This process can be performed unattended, with the operator only needing to change reels as required.

From within the conforming system, EDLs can be used to pick out specific shots or frames. Most commonly, an EDL lists frames that require restoration. The conforming or data management system processes the EDL and extracts the frames listed in the EDL (optionally including handle frames). After the frames have been corrected, they can be reintegrated into the conforming system.
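Expanding such a restoration EDL into the set of frames to pull, padded with handles, might look like this (timecodes expressed as frame counts, with src_out exclusive; names hypothetical):

```python
def frames_to_restore(events, handles=2):
    """Collect the frame numbers referenced by a restoration EDL,
    padding each event with handle frames on either side."""
    wanted = set()
    for ev in events:
        wanted.update(range(ev["src_in"] - handles, ev["src_out"] + handles))
    return sorted(wanted)  # the set removes overlap between nearby events
```

The corrected frames can then be reintegrated by writing them back to the same reel and frame-number locations they came from.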

EDL Mathematics

On many occasions, an EDL has to be adjusted to solve a particular problem in the digital intermediate process, or to enable compatibility between separate systems. The simplest such operation might be replacing the reel number for all instances of the reel (e.g., changing all edits with reel 033 to 034). Other operations might include adding an offset to particular timecodes, removing events, or even converting between frame rates. Another useful function is the ability to merge EDLs.

Many of these operations can be done by manually editing the EDLs, but some with lengthy and/or complex calculations might require using a computer system. This kind of functionality is often built into the conforming system, or it may be obtained through separate software solutions.
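The reel-renumbering and offset operations described above reduce to a simple transformation over the event list. A sketch, with timecodes as frame counts and all names hypothetical:

```python
def adjust_edl(events, reel_map=None, offset=0):
    """Return a copy of an EDL with reels renumbered and/or a constant
    offset added to every source timecode."""
    out = []
    for ev in events:
        ev = dict(ev)  # leave the original EDL untouched
        if reel_map and ev["reel"] in reel_map:
            ev["reel"] = reel_map[ev["reel"]]
        ev["src_in"] += offset
        ev["src_out"] += offset
        out.append(ev)
    return out
```

Merging EDLs, removing events, and frame-rate conversion follow the same pattern: parse the events, transform them, and write a new list.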

7.4 Playback

The conforming system is generally considered to be responsible for the playback of a finished production. This convention exists more out of convenience than necessity; the conforming system is the first place that the entire, full-quality finished film comes together, so isolating problems at this stage makes more sense than waiting until the data has been distributed to various other components. Similarly, the color-correction system is often combined with the conforming system, and the capability to view color-corrected changes to the production at the maximum level of detail, and at speed, should be considered essential.

Digital image playback systems are notoriously difficult to implement, especially at film resolutions (of 2k and higher). The sheer volume of data that has to be processed is enormous—three minutes of film data is equivalent to an entire DVD of data. The footage has to be viewed in real time at the same level of quality as the final output medium, so that you can see it as the audience eventually will see it, which means that three minutes of film data has to be played back in exactly three minutes—no faster, no slower, and without using any tricks such as dropping frames to maintain the frame rate.

With DV-based productions, inexpensive computer systems can process the associated data very reliably, and many high-end systems can play back HD video-quality data without any problems. Uncompressed 2k data playback, however, often requires a series of dedicated hardware and software components. Systems such as Thomson’s Specter FS (www.thomsongrassvalley.com) can conform and play back 2k data from networked or local storage in real time.

images

Figure 7–16   Thomson’s Specter FS is a hardware-based system for real-time playback of up to 2k image data

Some conforming systems work by displaying proxies (i.e., downsized copies of the original images) rather than the full-size originals to increase the system’s speed and responsiveness; such systems normally also have a method for switching to view the full-size images. Assuming this process has been properly implemented, one of the great strengths of a digital intermediate pipeline is its capability to instantly seek (i.e., go to) any frame in the entire production. Rather than having to search through stacks of tapes or reels of film and then spool through them to find the desired frame, this can be accomplished in a digital system simply with the click of a mouse. In addition, most digital-conforming systems provide many other features to increase productivity, such as timelines, resolution independence, vertical editing, and split-screening.

7.5 Digital-Conforming Interfaces

One of the main functions of a digital-conforming system is to allow manipulation of conformed material and provide feedback. Digital conforming can theoretically be achieved with very limited user input and without any graphical representation of the system’s contents, but most systems provide at least some rudimentary display of their contents and current status.

The data that has been conformed can be represented in many ways, and different software solutions inevitably use different approaches. Perhaps the simplest type of interface is a storyboard display. With this method, each event is represented by a thumbnail image showing a still of the event’s contents, with the thumbnails appearing in the order that the events occur in the finished production. This is useful in ascertaining the production’s overall progression, but it doesn’t tell you whether a single event actually comprises several elements (e.g., in the case of an elaborate crane shot) or indeed the length of each shot.

A more useful option, therefore, is a timeline. Timelines are common in most nonlinear-editing systems. Basically, a horizontal timeline runs along the interface, representing the entire program. Each event is positioned on the timeline according to when it occurs, and the event is normally displayed as a small rectangle, its size indicating the length of the event. Thumbnails are sometimes drawn on top of the events in the timeline, but they’re normally omitted to increase the display speed. A position indicator usually shows the current frame being displayed.

Timelines are generally considered the most flexible and useful method of representation for a conforming system, and anyone who is familiar with a nonlinear-editing system (which most conforming system operators are likely to be) will find using them intuitive. This system, though fairly simple, can be embellished further still. In applications such as Quantel’s Qedit (www.quantel.com), it’s possible to zoom the timeline’s display, to get an overall sense of the program’s progression, or to focus on a specific detail (e.g., the timecode when a particular event occurred). It may also be possible to highlight events in a different color to distinguish certain features—for example, showing all events with attached motion effects, or even for project management purposes, providing the capability of highlighting sequences that haven’t yet been approved by the production. Other options, such as the ability to attach notes or comments to specific events or timecodes, expand this interface further. The only disadvantage to this type of display is that it can be difficult to obtain detailed information about a specific event, such as the shot’s source timecode.

A more advanced method is to display the raw data for the conform—for example, in an EDL-formatted list displaying a position indicator next to the event currently being viewed. A multitude of information can be derived from this type of interface, from the duration of each event, to information about the digital image format. The problem with this type of interface is that it isn’t particularly intuitive for quickly browsing through a scene.

The different interface types can, of course, be combined together, so that an EDL-type interface might have thumbnails (thus expanding upon the storyboard interface), or it may be possible to “call up” detailed information for a specific event in a timeline environment. Alternatively, the conforming system may be designed to enable the operator to switch between different interface types.

images

Figure 7–17   Quantel’s Qedit offers a detailed timeline for displaying information about conformed sequences

7.5.1 Resolution Independence

Digital-conforming systems borrow a lot of functionality and ideas from nonlinear video-editing systems, but conforming data has several advantages over conforming video. One significant advantage is resolution independence. Unlike video formats, which are highly standardized, digital images can be of any dimension, which means that they can always (theoretically) be acquired at the optimum resolution. So, film images can be 4096 × 3112 or 2048 × 1556 pixels (or any other arbitrary value), while NTSC video frames can be captured at 720 × 486 pixels. With a resolution-independent system, these differently sized images can be mixed together without having to resize any of them (thereby retaining each frame’s maximum quality level), until the frames are output, at which point the frames are resized (from the original image rather than one that has already undergone multiple resizing operations) to the desired output resolution. Conforming systems may choose to display differently sized images proportionally or show each image the same size.

images

Figure 7–18   Resolution-independent conforming systems don’t require images to be supplied at a fixed resolution

It should be noted that not all conforming systems support resolution independence and may force the operator to nominate a “working resolution,” to which all incoming data will be resized (or rejected) as necessary.

An extension of this method is the notion of format independence. As mentioned previously, digital images may be encoded in a variety of different image formats. Format-independent systems allow the combining of different format types with no ill effects. Other systems may require that images be converted to a common format. The same is also true of color spaces—for instance, it may or may not be possible to seamlessly mix images encoded in RGB color space with images in film color spaces.

7.5.2 Multiple Timelines

Rather than having a single timeline, containing the program’s final cut, it can also be useful to have secondary timelines that are synchronized to the primary one. One of the possible applications for this option might be to store different cuts for a program. A common use for additional timelines is for reference material, perhaps scans of storyboards. Even more uses are possible, such as adding alternative camera angles to be included in a DVD release.

7.5.3 Vertical Editing

The concept of vertical editing is both simple and elusive. The basic idea is that multiple timelines are stacked on top of each other, rather than running in parallel. When you’re looking at any particular position in a sequence, the frame displayed is the one on the highest timeline (or track). In theory, this approach seems somewhat pointless, but in practice, it can be a very useful feature, especially for incorporating data from multiple sources.

Another option is to store different EDLs on different timelines. Corrections to a program are often made using a changes EDL, which is simply an EDL that contains just the elements that are different from the original offline EDL. Such EDLs can also be used for loading extra material, such as visual effects shots or footage that has gone through a restoration process. Loading each EDL onto a separate track can help to identify potential problems, as well as determine the origin of different material.

Vertical-editing systems allow rudimentary version control; newer shots are placed above the shots they supersede, which remain available for reference purposes. Many vertical-editing systems also allow individual tracks to be “turned off,” meaning that the contents of that track aren’t displayed, or that they’re ignored at the output stage (or both), as if the track weren’t there. This can be useful in situations where an EDL is loaded into the system, but the material isn’t yet available for viewing; you can still load the reference to the events without affecting the picture. Another useful function is the ability to collapse several tracks, effectively merging their contents into a single track.

Conforming systems such as Thomson’s Bones (www.thomsongrassvalley.com) enable you to simultaneously view multiple shots, using any of several different methods. The most common and intuitive way to view two (or more) tracks at the same time is to use a split screen. Using a split screen involves dividing the picture into two (or more) sections. Each part of the image displays the contents of a particular timeline. Generally, the overall picture corresponds to the whole image, but it may also be possible to display a complete image in each section. An alternative to this approach is to display a picture-in-picture, where one timeline dominates the display, while another timeline is displayed in a smaller window within the image.

Many conforming systems, in particular those that are combined with other systems (such as color grading), use tracks to add operations that affect the tracks below them. For instance, a track containing footage might lie underneath a track with a color grade setting, which in turn might lie under a track with a reposition operation. When you view the footage, you see it color graded and repositioned; however, by switching individual tracks off, you can view the footage without color-grading or repositioning or with any desired combination of operations.

images

Figure 7–19   Systems such as Thomson’s Bones enable you to place shots on multiple tracks and to view separate shots at the same time

Another function of vertical editing is to allow layering of footage.

7.5.4 Layers

In addition to stacking footage for organizational purposes, placing different shots on top of each other can be useful for creative purposes. The most common application of this approach is to superimpose one image over another. Certain shots use layering, similar to using a wipe or a dissolve, to give a split-screen or picture-in-picture effect to a particular sequence. Likewise, each layer can have transparency and other compositing parameters to enable the shots to be blended in a variety of different ways. Images that are in an appropriate format might already have individual layers stored within the file—in this case, the layers might automatically be separated onto different tracks. Similarly, image files containing alpha channel information can use it to automatically grant the images the desired level of transparency.

In addition to layering images, it’s also possible to include text or 3D objects. Furthermore, each layer can comprise an element in 2D or 3D space, the latter being used to provide perspective to the layers. And this is only the beginning. Some conforming systems can automatically generate a number of effects based upon layers, such as adding drop shadows or reflections.

7.5.5 Placeholders

It’s rare for all the required footage to be available to the conforming system when the final cut EDL is delivered. Therefore, early versions of the conformed timeline will invariably include gaps where footage is missing. However, rather than just showing a black screen for missing material, it’s more useful to display a placeholder of some sort, typically a static image displaying the words “missing material” or some other indicator, such as a large “X”. This approach differentiates missing footage from footage that is meant to be completely black (e.g., for fade effects) and ensures that the fact that the material is missing doesn’t go unnoticed. Sometimes the footage might be available but not conformed correctly, and so a warning display is a useful indicator that something isn’t right.

Certain shots tend to be delivered later than the majority of the footage. This is particularly true of visual effects shots, which can take many months to complete. To try to minimize the impact of the delayed delivery of such shots, many digital intermediate pipelines substitute missing footage with similar footage. In the case of visual effects material, the common procedure is to substitute the absent footage with “background plates”—that is, footage that has been shot (as part of the production shoot) to serve as the starting point of the visual effect. This footage might be of a literal background scene (as in the case of chroma-key effects, which combine a blue-screen element with a background element) or some other available element.

Because such background plates are already available, substituting them temporarily for the finished shot can provide a sense of fluidity to the sequence and, in addition, can provide the color grader with some preliminary footage to start with.

Black Frames

During editing, black frames are used for a variety of reasons. They’re occasionally used for creative purposes—such as for fade effects, or where there isn’t supposed to be any picture—but more often they’re used as “spacers,” to separate different elements, especially with vertical editing. EDLs can list edits explicitly as black (BK or BLK), and video-editing systems are usually able to generate a black signal internally.

For vertical-editing-conforming systems, black frames might have two different uses. A particular event might be designated black because it’s supposed to be black, or because it’s supposed to be empty (i.e., used as a spacer). The distinction is subtle but important. If the top track in a vertical-editing timeline contains black, it could be empty, meaning that the contents of a lower track are played instead, or a black track might actually display black frames rather than the footage below it.

Most conforming systems take the approach that the presence of black in a timeline indicates the event is empty, and when no footage is available on a lower track, the material is assumed to be missing. If black frames are actually required, an operator usually has to insert them into the timeline manually.

In some systems, black frames can be generated on the fly, as needed, but in other systems, they can’t, requiring actual footage to exist in the form of black frames, which are then cut into the program in the same way as any other footage. Other conforming systems, such as those lacking support for vertical editing, might also require the insertion of black footage to create spacers between shots. Worse, some conforming systems might not correctly load shots with discontinuous timecode numbering (as discussed in Chapter 6) and require black frames to be used to fill in the gaps in the timecode. Under these circumstances, a large number of black frames might have to be generated that ultimately serve no practical purpose (but consume an inordinate amount of storage space). There are several solutions to this problem, however.

Format- and/or resolution-independent conforming systems can take advantage of their capability to mix images, creating black frames as very low-resolution (even 1-pixel) images, or the systems can use a highly compressed format, greatly reducing disk space requirements (and in this instance, not compromising quality, because black frames inherently have no spatial or chromatic detail at all). Another option is to create a single black frame and use the symbolic-linking feature of the operating system (if available) to reference this frame for every required black frame. Because the link files use substantially less space than uncompressed black images, the disk space requirements are reduced substantially.
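The symbolic-link approach can be sketched in a few lines. This hypothetical Python helper assumes a POSIX-style filesystem with symlink support and an example DPX naming scheme; every “black” frame slot becomes a tiny link file pointing at one real rendered frame.

```python
import os

def link_black_frames(black_frame, out_dir, start, end, pad=7):
    """Fill a range of frame slots with symlinks to one real black frame.

    black_frame: path to a single rendered black frame (hypothetical name).
    start/end: inclusive frame-number range to fill.
    pad: zero-padding width for the frame number in the filename.
    """
    os.makedirs(out_dir, exist_ok=True)
    target = os.path.abspath(black_frame)
    for n in range(start, end + 1):
        link = os.path.join(out_dir, f"black.{n:0{pad}d}.dpx")
        if not os.path.exists(link):
            os.symlink(target, link)  # each link costs bytes, not a full frame
```

A run over a few thousand spacer frames produces only one full-size image plus a directory of link files, instead of thousands of identical uncompressed frames.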

7.6 Conforming Digital Media

Conforming digital material (most notably computer-generated visual effects) should be a simple matter. Unfortunately, it can be more complicated than conforming scanned film. Digital media doesn’t have a reference point other than a filename. It has no inherent timecode, as video does (or timecode that can be assigned to it, as with scanned film), and it has no system, such as key numbers, to make each frame unique. The problem is that the offline editor can cut a digital shot, such as a completed visual effect, into the final cut but has no convenient way of supplying the specifics of the edit to the conforming system. The edit shows up as an event in the EDL, but the timecodes accompanying it are largely meaningless, and the process certainly isn’t automatable. Very often, each digital shot has to be manually inserted into the conforming system, which can be problematic for effects-laden productions, and it complicates the version-tracking process.

A viable solution can be borrowed from the way scanned film is conformed and involves assigning digital footage a reel number and timecode, just like all other footage in the pipeline. For this solution to work, a new tape (or several new tapes) is created and given a reel number. Next, the footage in question is output to this tape, and the timecode for each shot is noted. This tape can be given to the offline editor, who cuts it into the program and produces an EDL for each shot. Meanwhile, the footage on the conforming system is assigned the same timecode and reel number as the tape created. This way, the EDL will then correctly reference the data.
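The bookkeeping behind such a virtual reel is simple timecode arithmetic. The sketch below (with a hypothetical layout and names) lays a list of digital shots end to end on one reel, deriving non-drop timecodes from frame counts at a given frame rate; the resulting table is what both the offline and conforming systems would share.

```python
def frames_to_timecode(frame, fps=24):
    """Derive a non-drop-frame timecode string from an absolute frame count."""
    ss, ff = divmod(frame, fps)
    mm, ss = divmod(ss, 60)
    hh, mm = divmod(mm, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def assign_virtual_reel(shots, reel="VFX001", fps=24, start_frame=0):
    """Give each digital shot a reel number and contiguous source timecodes.

    shots: list of (shot_name, frame_count) pairs; the reel name and layout
    are illustrative assumptions, not any particular facility's convention.
    Returns {shot_name: (reel, tc_in, tc_out_exclusive)}.
    """
    table, cursor = {}, start_frame
    for name, count in shots:
        table[name] = (reel,
                       frames_to_timecode(cursor, fps),
                       frames_to_timecode(cursor + count, fps))
        cursor += count
    return table
```

Once the tape output for the offline editor uses the same assignments, the EDL events it generates reference the digital frames unambiguously.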

The specifics of the procedure aren’t important, just as long as there is an accurate way to refer to the same shots in the same way on both the offline and the conforming systems. At some point in the future, the reliance on EDLs to correctly conform footage may be replaced by some other, more robust method for synchronizing the footage on both systems. Perhaps each frame of data might be given a unique number, and the same data (albeit at lower resolution) might be copied to the offline-editing system for editing, eliminating the need to use video tapes at all. At present, however, EDLs are a popular and reliable method for ensuring that data is conformed automatically.

7.6.1 Re-Editing

On some occasions, shots must be adjusted or re-edited even after the offline editing is complete. A member of the production team often sees something that requires a quick adjustment but doesn’t necessarily warrant issuing a new EDL. In addition, at times the conformed timeline, for whatever reason, doesn’t match the offline edit and must be adjusted manually. Most conforming systems provide basic tools for re-editing the footage.

images

Figure 7–20   Digital media can be conformed by recreating it in the offline edit and generating an EDL

The simplest editing method is to take a source (e.g., a scanned reel of film), mark an “in-point” (the starting point of the edited shot) and an “out-point” (the finishing point of the shot), and then do the same on the program’s (the “record’s”) timeline, marking points for the footage’s location. It’s actually only necessary to mark three out of the four points; the editing system automatically calculates the position of the fourth point based on the timing of the other three. Some editing systems automatically create a motion effect when the length of the source shot doesn’t match the length of the record shot, speeding or slowing it as needed to make it fit. In some systems, you can also set a sync point, whereby a single frame in the source footage can be matched to a single frame in the recorded program.
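The missing-point calculation is just duration matching: both the source span and the record span must be the same length. A sketch, using frame numbers rather than timecodes for simplicity:

```python
def fourth_point(src_in=None, src_out=None, rec_in=None, rec_out=None):
    """Three-point editing: given any three of the four points (as frame
    numbers), derive the fourth so that both spans have equal duration."""
    points = [src_in, src_out, rec_in, rec_out]
    if sum(p is None for p in points) != 1:
        raise ValueError("exactly three points must be given")
    if src_out is None:
        return src_in + (rec_out - rec_in)
    if src_in is None:
        return src_out - (rec_out - rec_in)
    if rec_out is None:
        return rec_in + (src_out - src_in)
    return rec_out - (src_out - src_in)  # rec_in is the missing point
```

For example, a 48-frame source span dropped at record frame 2000 must end at record frame 2048; mark any three of those points and the fourth follows.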

New shots can be added to a sequence, either by being spliced into the timeline at some point or by overwriting existing shots. Splicing in a shot adds the new shot without removing any of the existing footage. Operations of this type are known as “ripple” edits, meaning that the overall length of the program is changed; performing them can cause sync problems with audio or with other timelines and tracks. On the other hand, overwriting a shot replaces the original footage with the new shot, which doesn’t alter the program’s length at all.

Similarly, events can be removed. “Lifting” a shot removes it, leaving behind an empty space (without altering the program’s overall length). Extracting is another type of ripple edit; it removes the shot but doesn’t leave a gap, instead making the previous shot cut to the following shot. This type of edit reduces the sequence’s overall length. It’s also possible to extract or lift just a few frames from a sequence, rather than removing entire shots.

In many cases, it can be necessary to divide a single event into two or more events, particularly when an effect is required for only a part of the shot. In this instance, a cut can be performed and is usually made at the currently displayed frame. Alternatively, a join edit can merge two adjacent events into a single one. However, this method usually works only when the shot’s source footage is continuous (i.e., you can’t join together two different shots).

Individual shots can also be slipped. Slipping a shot changes which source frames are used without changing the position or length of any of the events in the sequence (and without altering the sequence’s overall length). Transition points can be adjusted, too. Sliding a shot repositions it in the timeline so that it starts earlier or later; adjacent shots increase or decrease in length to accommodate it, so the overall program length remains unchanged. Finally, trimming a shot makes it possible to increase or decrease the shot length, with the adjacent shot lengthened or shortened accordingly so that the overall length remains the same. A ripple trim alters the length of an event without adjusting other shots, meaning that the program’s length changes.

Making most of these types of edits usually requires additional footage. This extra footage may be covered by handle frames in the case of small adjustments; otherwise, more material may have to be acquired.

Re-Editing Locked Pictures

Given that the editing of most features is considered complete (or “locked”) by the time it’s conformed in the digital intermediate pipeline, it’s remarkable how many last-minute changes seem to be necessary. In some ways, it’s understandable—certain subtleties in the timing and picture can’t be seen in the low-quality offline footage that editors work with. These subtleties become apparent only by watching the full-quality image on a big screen. But more than that, the production team tends to view the conformed pictures with a somewhat fresher perspective, and so an element of perfectionism sometimes comes into play. For the most part, re-editing a few frames here and there isn’t much of a problem, particularly when the changes are covered by the handle frames. The real issue is that no paper trail of edited corrections is usually made on the spot. If anything were to happen to the data in the conforming system, it usually is a trivial matter to reload the offline EDLs back into the system and conform everything again. But small tweaks that were made might be forgotten and therefore lost.

Trying to manually edit the offline EDLs to match changes that were made after the fact can also prove difficult or, at the least, tedious. Therefore, the most suitable solution is for the conforming system to output its own EDLs to reflect its contents at regular intervals. That way, if anything does go wrong, you’d still be able to revert to an up-to-date, accurate EDL.

7.6.2 Frame Patching

Many times, changes are made to individual frames outside of the conforming system, and such changes then must be included along with the rest of the conformed data. The most common reason for making these changes is “QC fixes.” Throughout the digital intermediate process, facilities run “quality control” checks on the data to be conformed.

These checks inevitably reveal problems that have to be fixed—the problems usually are in the form of film damage or render errors. Because fixes are needed only for individual frames, as opposed to entire reels, just the frames in question are separated from the rest of the conform and output to the restoration system. Normally all the required fixes are completed together, and the frames output back to the conforming system. Rather than destroy the original frames by replacing them with the fixed version, the fixed frames are instead incorporated into a new “virtual reel,” meaning that they’re assigned a new reel number and source timecode, which don’t actually exist in any physical form. The benefit of this method is that the original, unfixed frames still exist in the event that problems with the fixes are later revealed.

Within the conforming system, these fixed frames can be used to patch the original, conformed frames. In a vertical-editing environment, this process can be achieved by simply placing the fixed frames in the correct position on a new track within the timeline, above the previous versions. Alternatively, it may be possible to replace the original, unfixed frames with fixed ones in the timeline, without actually affecting the image data.

images

Figure 7–21   When using multiple tracks, it’s possible to patch specific frames with repaired or otherwise corrected frames, by loading the fixed frames individually onto a higher track

7.6.3 Reconforming

One of the biggest logistical headaches in the entire digital intermediate pipeline is having to reconform material. Reconforming is necessary anytime there is a significant change to the cut of the program. Any changes that require the program’s soundtrack to be redone can be considered “significant.” Inevitably, new offline EDLs must be supplied, either in the form of a changes EDL (for vertical-editing-capable conforming systems) where possible, or otherwise as completely new EDLs.

The reason reconforming becomes such a problem within the pipeline is that most systems and operations don’t receive information directly from the conforming system, but instead from the actual data. For example, the color-grading system might receive frames from the conforming system in the correct order, but it doesn’t “know” what each frame represents. It just “sees” a sequence of frames, which have previously been divided into individual shots, and each shot has separate color-grading settings applied. The crucial point is that whenever the cut changes, the color-correction parameters still apply to the original cut, meaning that events may be in the wrong order, apply to the wrong shot, or more frequently, end on the wrong frame, creating “grading flashes” (i.e., a situation where the color shifts drastically in the middle of a shot). This applies to other systems as well. The most notorious issue concerns patched frames that are now out of sync with the new cut. These patches often have to be manually repositioned to match the changes to the conform, but sometimes it may be easier just to remove them and do the fixes again.

Similarly, when vertical editing is used, the track replaced by the new EDL (typically the lowest one) is now out of sync with all the other tracks, which have to be adjusted to match the changes.

Fortunately, some conforming systems include a reconform option to manage such changes more intelligently. The reconform option works by examining every position in the timeline and assigning the reel number and source timecode (from the original EDL) for that position to every frame (on all tracks). When the new EDL is input into the system, all the material already in the system is reassigned to the correct position, based upon the source timecodes. Any source material that doesn’t recur in the new EDL is assumed to have been removed from the program and is removed from the timeline. New material not already present is then conformed in the usual way.
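In outline, such a reconform pass amounts to indexing every per-frame payload (a grade setting, a patched-frame reference, and so on) by its source address, and then rebuilding the timeline from the new EDL. The event layout below is a simplified stand-in for illustration, not any particular system’s format.

```python
def index_by_source(events):
    """Map every source frame in a conformed timeline to its payload.

    events: dicts like {"reel": "004", "src_in": 100, "rec_in": 0,
    "length": 3, "payload": ...} — a simplified stand-in for an EDL event
    with per-shot settings attached.
    """
    index = {}
    for ev in events:
        for offset in range(ev["length"]):
            index[(ev["reel"], ev["src_in"] + offset)] = ev["payload"]
    return index

def reconform(old_events, new_events):
    """Carry payloads from the old cut to the new one by source address.

    Source frames absent from the index are new material, left as None
    so they can be conformed (and graded) in the usual way.
    """
    index = index_by_source(old_events)
    timeline = {}
    for ev in new_events:
        for offset in range(ev["length"]):
            key = (ev["reel"], ev["src_in"] + offset)
            timeline[ev["rec_in"] + offset] = index.get(key)
    return timeline
```

Because the lookup is keyed on reel and source timecode rather than record position, material that merely moves within the program keeps its settings, and grading flashes at shifted cut points are avoided.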

Other systems also must be able to perform a similar procedure to correctly incorporate the new cut. Many color-correction systems, for example, have a function to input old and new EDLs, and then a similar process is run to put the color-correction modifications in the right place.

If all the systems in the pipeline are able to automatically reconform data as needed, then the potential problems are reduced drastically. Still, regardless of the effectiveness of the reconforming software, the newly cut program will have to be checked thoroughly for other problems, such as missing footage or bad edits.

Rescanning Film

Even though one of the proclaimed benefits of the digital intermediate system is that film has to be handled only once—scanned and then stored away forever—in practice, film often has to be scanned more than once. Most reels of film have to be scanned only once, but rescanning is essential in several instances.

If image data is damaged somehow, either through data corruption or an operation having gone wrong, it’s often quicker and more convenient to simply rescan the image rather than try to repair the problem. The other time a rescan is necessary is when insufficient footage is available. If a reconform or re-edit requires footage that wasn’t scanned in the first place, it’s necessary to rescan the entire shot to include the missing footage. However, this presents a problem if the footage was supplied as master source reels, because it may mean that the additional footage isn’t even available to be scanned. In this event, the remainder of the negative must be sent over separately, scanned, and then conformed in the correct place. At that point, the problem is that a difference between the original footage and the extended parts may be visible, because they were scanned under different conditions. These discrepancies can show up as differences in color and image position, both of which can be exacting and time-consuming to resolve.

These differences can also be visible with rescanned single shots. If the original shots have been patched, the patched frames may stand out from the rescanned footage. The most sensible solution in this case is to remove the patches from these shots and refix the footage.

Error Reporting

One of the benefits of computer-based systems over other types of systems is that they can be designed to include a level of “intelligence.” If you put a video in a player, or a reel of film on a projector, you can obtain a limited amount of information about that tape or reel of film. Load some digital images into a conforming system, however, and a wealth of information is potentially available. A conforming system can spout statistics on a multitude of factors, from the number of frames in the program, to the most frequently viewed image. This information need not be purely cosmetic, however, and much of the information that conforming systems provide can save time and frustration.

Probably the most important thing the conforming system can tell you is which, if any, frames or shots are missing, without your having to play through the entire timeline to find out. A conforming system “knows” whether a shot is in a loaded EDL but the footage for it isn’t available for playback. Dupe detection is used to determine whether the same source frame occurs twice in a production—another possible indication of something having gone wrong (except when flashback sequences are used).
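Dupe detection itself reduces to counting how many events touch each source frame. A minimal sketch, with events reduced to tuples for illustration:

```python
from collections import defaultdict

def find_dupes(events):
    """Report source frames that are used by more than one event.

    events: (event_id, reel, src_in, length) tuples — a simplified view of
    a conformed timeline. Returns {(reel, frame): [event_ids]} for every
    source frame used twice or more.
    """
    uses = defaultdict(list)
    for event_id, reel, src_in, length in events:
        for frame in range(src_in, src_in + length):
            uses[(reel, frame)].append(event_id)
    return {key: ids for key, ids in uses.items() if len(ids) > 1}
```

Any hits can then be reviewed by an operator, since a duplicate may be a legitimate flashback rather than a conforming error.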

Even more advanced options are possible. A conforming system can use motion-analysis techniques to determine, for example, whether any flash frames (i.e., single frames that appear at the end of a cut) are included or to look for tell-tale signs of render errors or video dropout.

images

Figure 7–22   When significant changes are made to the conform EDL, reconforming must be performed to preserve settings, such as those generated during grading

Finally, it may be possible to use the same checksum files generated by the data management system to ensure that all the conformed frames are intact.
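Such a check amounts to re-hashing each conformed frame and comparing it against the stored manifest. The sketch below assumes MD5 checksums (a common choice, though the actual digest depends on the data management system) and takes a file-reading function as a parameter so that the logic stays self-contained:

```python
import hashlib

def verify_frames(manifest, read_bytes):
    """Check conformed frames against checksums from the data manager.

    manifest: {filename: expected_md5_hex}.
    read_bytes: function returning a file's contents as bytes (injected
    here so the sketch is testable without touching disk).
    Returns the list of files whose checksum doesn't match.
    """
    bad = []
    for name, expected in manifest.items():
        digest = hashlib.md5(read_bytes(name)).hexdigest()
        if digest != expected:
            bad.append(name)
    return bad
```

In practice, `read_bytes` would simply open each frame file, and any mismatches would be flagged for rescanning or re-rendering.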

7.6.4 Offline Reference

One benefit of the video-based offline system (offline systems usually are video-based) is that it can quickly play out footage to video tape. Therefore, every offline EDL can be accompanied by an offline reference video. These videos can be acquired into the digital intermediate system and used as a reference for confidence checking.

Once all the data is properly conformed, the offline playout can be conformed onto a separate timeline or track (if available) and viewed in parallel with the conformed data. This method is one of the most effective ways to spot conforming problems, and some conforming systems even automate this confidence checking by comparing the picture content of each conformed frame with the equivalent offline playout frame, flagging any that appear different so they can be checked. Other conforming systems have provisions for syncing the playback of a video tape (such as the offline playout) to the playback of the conform, so it may not be necessary to capture the tape into the system (thus saving time and disk space).
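The automated comparison can be as simple as thresholding the average pixel difference between a conformed frame and its offline counterpart. A toy sketch, operating on flat lists of 8-bit luma samples rather than real decoded images, with an arbitrary example threshold:

```python
def frames_differ(conformed, reference, tolerance=12.0):
    """Flag a conformed frame whose picture content strays from the
    offline reference.

    conformed/reference: flat lists of 8-bit luma samples (a stand-in
    for real decoded images). The tolerance is an illustrative value;
    a real system would account for the offline copy's lower quality.
    """
    if len(conformed) != len(reference):
        return True  # mismatched dimensions: definitely worth checking
    mad = sum(abs(a - b) for a, b in zip(conformed, reference)) / len(conformed)
    return mad > tolerance
```

A generous tolerance matters here because the offline playout is a lower-quality copy, so small differences are expected even when the conform is correct.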

In addition, some conforming systems allow a digital audio track to be synchronized with the conformed picture, which can act as yet another test confirming that the footage is properly conformed.

Eye Matching

When all else fails, and EDLs don’t conform frames in the correct place, shots can always be eye matched, which simply means positioning the shot until it “looks right.” This eye-matched shot can be cross-referenced against an offline reference, or alternatively, the offline editor can supervise or approve this procedure. Eye matching can be a surprisingly accurate way of conforming footage, especially for shots containing a lot of movement, although it’s a much slower process than using an EDL, especially for a large number of shots.

7.6.5 Back Tracking

As well as working out where a particular shot should go, a conforming system should also be able to work out where a shot came from. Very often, the only way of doing so is by looking at the reel number and source timecode of the footage in question. However, as the data goes through multiple incarnations and processes, it’s possible that this information can be lost (especially in conforming systems that don’t use a vertical-editing paradigm), in which case, it becomes a data management issue. Back-tracking data is often necessary, particularly for sequences that have problems and must be reacquired.

7.6.6 Consolidation

The process of consolidation involves systematically replacing the conformed sequence with a new, single-layered (or flattened), unreferenced sequence, with all effects permanently rendered and all references to EDLs removed. After consolidation, what remains is a single string of individual frames. To all intents and purposes, the consolidated sequence looks exactly the same when played back as it did prior to consolidation. All the image data is normally moved to a single location, and all edits and effects are now committed and can’t be undone. Consolidation is usually performed as a final stage prior to output, but it’s sometimes applied to selected sequences to commit effects and prevent their being altered (e.g., for sequences that have been approved and signed off).

7.7 Summary

In the digital intermediate environment, the conforming process is the step that marries the full-quality images to an offline edit that was done elsewhere, re-creating the production frame for frame. You can do this in a number of ways, but the most common is to use an EDL or cut list supplied by the editor, which enables the digital images to be matched to the footage the editors were working with, typically including the use of transition and motion effects.

A few things can go wrong during this process, because it isn’t completely accurate, but most conforming systems offer the capability of re-editing conformed material to match the reference edit or to enable last-minute changes to be made. If significant changes are made to the edit, it may have to be reconformed to retain any work already started on the previous edit. With many systems, this process is automated, analyzing the original and revised EDLs.

Once material is conformed, it can be viewed to provide a sense of context, or any number of processes can be applied to it, either from within the conforming system or by being sent to other dedicated systems.

The following chapters look at the various creative operations that can be applied to the conformed images before they’re output, starting with one of the most important: color grading.

1 In actual fact, long-form productions are often divided into reels of approximately 20 minutes each; however, for the purposes of the offline edit, each output reel is treated as a separate program

2 With the exception of slide film and Polaroid film.

3 The same video is sometimes used to watch the dailies of the developed film, when the production team doesn’t require the dailies to be viewed on film

4 The alternative is to do a 3–2 pulldown, where the first frame of film equates to three fields, the second frame to two fields, and so on.

5 Working with PAL video systems makes this process easier, because the video format runs at 25fps, which is only a 4% difference. To compensate, the sound is simply run slightly faster during editing

6 A “soft cut” is one where a very short dissolve, usually a single frame, is used in place of a cut. In theory soft cuts are more pleasing to the eye than regular cuts, but in practice, they’re rarely used.

7 A command-line computer program is one where a set of instructions is input directly into the command window of the operating system and doesn’t necessarily provide any visual feedback.

8 In fact, the conforming system is often a subset of the color-grading system, rather than the other way around.

9 In fact, digital files are normally stored internally as frame numbers, which are independent of a frame rate, but the frame rate has to be supplied to derive the correct timecode from the frame numbers.
