Chapter Seven
Editing Terms, Topics, and Techniques

 

Timecode

Montage

Parallel Editing

Multi-Camera Editing

Composite Editing

Rendering

Chromakey

Video Resolution

Sound Editing

Color Correction/Color Grading

Importing Still Images

Digital Workflow

Technology vs. Creativity

These days, much is expected of a video editor. Depending on the job size, budget, and purpose, many editors are tasked with performing more than just straight cuts. The increased sophistication and capabilities of the editing software have, in some ways, coerced the editor into expanding her or his skill set. Graphics creation, motion effects, music selection, and audio mixing may all be part of the modern editor’s responsibilities.

Having covered many topics regarding shot type, shot quality, transition types, and edit categories, you are in a good place to go out and start editing. No one source of information will be able to tell you everything about the art, craft, and job requirements of an editor, but, by now, you should have a solid grasp of basic approaches to thinking about how to handle general editorial decisions.

We have addressed many of the what, where, and why questions. Information on how to edit can be found in many other sources. Each video-editing software application has its own unique “engines,” media workflow, tool names and functions, etc. There are specific classes you may take, online training and tutorials to work through, many books to read with practice media, etc., in order to learn how to edit in those software titles. As this is an introductory book on the concepts and general practices behind editorial decisions, such precise technical information is well beyond our scope.

In this chapter, we will augment our list of topics by addressing some additional terms, concepts, and techniques. Learning from a book is an excellent way to start wrapping your brain around the editing process, but there is no replacement for on-the-job training. The fun and satisfaction are to be found in the editing process itself, so get ready to go to work.

Additional Editing Terms

Timecode

A film frame is the unique space of a single recorded image along perforations or between sprocket holes (depending on the gauge or size of the film strip). Emulsion film, being long strips of thin plastic, uses feet and frames to count lengths and/or durations and therefore time. Even though emulsion film acquisition and exhibition have decreased dramatically in recent years, we still refer to our video recordings as “footage.” Videotapes, having no perforations, use a special track along the unbroken length of tape inside the plastic cassette that counts time according to an electronic pulse noting the hours, minutes, seconds, and frames (appearing as HR:MN:SC:FR, where, for example, the one-hour-and-37-second mark on a tape would read as 01:00:37:00). Digital media files have similar metadata embedded inside the computer file that keeps track of a great deal of information, including the hours, minutes, seconds, and frames, but does so digitally (Figure 7.1). This notation is called timecode.

01:00:37:00

Hours: Minutes: Seconds: Frames

or

1 Hour and 37 Seconds

 

FIGURE 7.1 The counting scheme of video is known as timecode. Digital video source files, videotapes, and edited sequences can each have unique time durations, but they share the same rate of time progression of hours, minutes, seconds, and frames.

The frame rate for a particular format is also involved in this counting scheme. Emulsion film recording and projection have a standardized frame rate of 24 frames per second (24fps). Old-school, twentieth-century standard-definition NTSC video in North America has a frame rate of roughly 30fps (29.97), while European PAL video has a 25fps rate. So for archival PAL projects on your editing software, you would be able to watch 25 separate frames in one second, and for NTSC SD projects, you would be able to see 30. Of course, these frames would go by your eye so quickly in one second that you really do not get to distinguish one frame from another. You would have to step through one frame at a time to see the 30 individual frames of that second of video.

Standard-definition NTSC and PAL broadcasting is all but gone, and with the advent of increasingly capable digital video cameras, additional frame rates are available for video production (and playback). The old interlaced fields of SD video frames were briefly used in high definition as well with 1080i (or 30fps, 1080-line, interlaced) video. Now, most cameras generate a progressively scanned stream of digital images, meaning that for each frame there is just one full image (just like emulsion film cameras have done for over 100 years). Without getting too technical, the more commonly used progressively scanned (p) frame rates, such as 24p, 25p, and 30p, may eventually be phased out as newer high frame rates (HFR), such as 48p and 72p, make headway in mainstream filmmaking. This means that editors will, increasingly, need to pay extra attention to how their video source material was recorded time-wise.

Timecode (TC) is the counting scheme, or the clock, that video-editing software uses to keep time for frame rate playback and for keeping sync. The picture information and the audio information (when they come from the same associated media files or tape source) will have the same matching timecode frame for frame. Depending on the recording camera and the medium (hard drive, memory card, or videotape), the timecode data may be set in different ways. Digital video recorders that generate media files typically begin each unique file with 00:00:00:00 and count up to its individual duration.
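To make the counting scheme concrete, here is a small Python sketch of timecode arithmetic. It is not taken from any particular editing application, and it assumes a whole-number frame rate (the drop-frame counting used for 29.97fps NTSC is ignored for simplicity).

```python
# A minimal sketch of timecode arithmetic for whole-number frame rates.
# Drop-frame counting (used for 29.97fps NTSC) is ignored for simplicity.

def frames_to_timecode(total_frames, fps):
    """Convert a running frame count into an HR:MN:SC:FR string."""
    frames = total_frames % fps
    total_seconds = total_frames // fps
    seconds = total_seconds % 60
    minutes = (total_seconds // 60) % 60
    hours = total_seconds // 3600
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

def timecode_to_frames(tc, fps):
    """Convert an HR:MN:SC:FR string back into a running frame count."""
    hours, minutes, seconds, frames = (int(part) for part in tc.split(":"))
    return (hours * 3600 + minutes * 60 + seconds) * fps + frames

# One hour and 37 seconds at 25fps (PAL) -- matches the example in Figure 7.1.
print(frames_to_timecode(timecode_to_frames("01:00:37:00", 25), 25))  # 01:00:37:00
print(frames_to_timecode(90, 30))  # 90 frames at 30fps -> 00:00:03:00
```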

Videotapes typically have a continuous timecode across the entire tape (either pre-striped or laid down as the camera records each frame). Unique shots start and stop along the stream of continuous time. As such, you may encounter tape sources that have unique tape names or numbering schemes (the first physical tape has 01 hour TC and the second physical tape has 02 hour TC, or the first hour of shooting was assigned 01 hour TC by the production team and the second hour has 02 hour TC, etc.).

Video-editing software keeps track of all TC references and shows this data in the bins and folders, in the playback windows, and in your sequence timeline. The project settings often dictate the counting scheme for your timecode on edited material, and it is usually best if the sequence frame rate matches that of your source footage, although most software can now mix and play back different frame rates in the same timeline; if not, it can certainly convert from one frame rate to another.

Montage

The term “montage” has several meanings when used in relation to motion picture editing. For the French, the word simply describes the act of assembling the film, which is the job of the editor. For many involved in Soviet silent cinema of the 1920s, it emerged as the montage theory of editing, which is a belief that two unrelated images can be edited together to generate a new thought, idea, or emotion in the mind of the viewer. An example of this concept edit was presented earlier in this book. A young couple announcing their wedding engagement in Shot A is then followed by an image of a prisoner with a ball and chain around his ankle in Shot B. A viewer might get the idea that the filmmakers are equating marriage with a prison term.

The more widely applied meaning of montage today refers to the montage sequence. This involves a series of quick clips, usually accompanied by music, that show a condensed version of actions, activities, or events over time. In a teen comedy, it could be a series of shots showing the friends getting ready for the prom; in an action movie, it could be a series of shots showing the elite fighting team going through tough training; in a romance, it could be a series of quick clips showing a young couple going out on multiple dates and falling more and more in love with one another. A montage sequence serves a very useful purpose by condensing important plot points and developments that might otherwise unfold across a day, several weeks, or even years, into a shorter, more manageable duration. The audience do not have to watch every aspect of these events to understand their results. Think of it like a highlight reel of important plot events that, if shown in their entirety, would take up way too much screen time.

fig7_2.jpg

FIGURE 7.2 Montage sequences typically show a series of quick shots that are related to a larger scene, segment, or topic of a show or movie that condense time and provide a lot of visual information. They are often accompanied by fun and fast music.

Parallel Editing

Used primarily in fictional narrative filmmaking, parallel editing (also known as cross-cutting) calls for a special construction where two plot lines of the story’s action are intercut with one another. In other words, a portion of one plot line is shown, then the sequence shifts over to showing the other plot line which, in the film world, is supposed to be happening simultaneously.

This technique proves especially effective during an action sequence – often a race against time. The pace of the sequence may also get more “frantic” as the two storylines unfold, building the suspense and getting closer to the dramatic climax. This can be achieved by making the shots in the sequence increasingly shorter and shorter. The frenetic energy of the cuts carries over to the audience, who are feeling the urgency of this pacing and the race against time.

Multi-Camera Editing

Most fictional narrative filmmaking is accomplished with just one camera. The shot types described earlier in this book are typically composed, lit, and blocked for the one camera that is used to record the coverage for that scene. It can take a significant time to accomplish all of the shots needed to edit the scene for the movie. There is another practice, however, where multiple cameras are used on set to shoot different angles of the same action, getting differing shots of coverage while the actors perform the action one time on one take. Camera 1 records one character’s close-up and Camera 2 records the other character’s close-up at the same time. Provided both performances are good, the production saves time and money shooting coverage in this manner. Using multiple cameras is very popular when recording studio-based situation comedies for television, soap operas, reality programming, news, talk shows, sporting and competition programming, live theater, musical concerts and, with certain directors, very “performance”-heavy scenes or stunt work in fictional narratives.

fig7_3.jpg

FIGURE 7.3 We cross-cut back and forth between the actions of two separate characters as they progress toward a common goal. This is an example of a type of parallel editing done within the same scene. (Photo credits: Anthony Martel)

The beauty of multi-camera editing is that all of your source footage for each take from each camera matches. A cut at any point will have a corresponding matching frame from another camera angle to cut to in perfect sync. Not only is the action seen from all coverage angles, but the cameras may also share common or identical timecode, so at 00:24:17:00 (24 min., 17 sec.) a cut from one camera will have a matching next frame of image and time on every camera recording the event. Most professional video-editing software has a built-in process for matching up all of your camera source footage. As a result, you have the option of cutting to any camera angle at any point in time, much the same as a television studio director in the control room has the option of switching from Camera 1 to Camera 3 to Camera 2, etc. (Figure 7.4). Because the audio was also recorded as one unbroken track during the performance, all of the camera images should match up perfectly while playing over the one sync source of audio.
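As a hypothetical illustration of how shared timecode keeps every angle in step, the short Python sketch below finds the matching frame in each camera’s clip for a chosen cut point. The camera names, start times, and frame rate are invented for the example.

```python
# A hypothetical sketch of how shared timecode lets an editor jump to the
# matching frame in every camera angle. Camera names, start times, and the
# frame rate are invented for illustration only.

FPS = 30

def tc_to_frames(tc, fps=FPS):
    h, m, s, f = (int(p) for p in tc.split(":"))
    return (h * 3600 + m * 60 + s) * fps + f

# Each camera recorded the same performance and shares the same clock.
camera_start_tc = {
    "Camera 1 (CU, character A)": "00:22:05:00",
    "Camera 2 (CU, character B)": "00:22:05:00",
    "Camera 3 (WS, master)":      "00:22:05:00",
}

cut_point = "00:24:17:00"  # the 24 min., 17 sec. mark from the example above

for name, start in camera_start_tc.items():
    offset = tc_to_frames(cut_point) - tc_to_frames(start)
    print(f"{name}: the cut lands {offset} frames into that camera's clip")
```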

fig7_4.jpg

FIGURE 7.4 Recording scenes with multiple cameras is common in TV studio production, but is also becoming more prevalent in location shooting of other motion media genres. WS and CU shot coverage is completed during a single performance.

Composite Editing

Think of composite editing as multi-layer editing where more than one image is seen on screen at one time. This will necessitate the use of more than one video track in your timeline and the use of visual effects or filters on the clips on the upper tracks. Your video project, or sequence, will have a set frame size based on the image resolution of the video format that you are editing (see later in this chapter for more details). Most clips will have the same resolution or size, and if you stack one on top of the other, you will only see the top video track because it will block your view of anything underneath it. A visual effect or filter, such as a superimposition or a picture-in-picture (PIP), will be needed to treat the upper track so that the lower track will at least be partially visible.

Split-screen effects are created using this method. You could make two separate shots of people talking over the telephone fit into one screen by compositing the two clips on Video Track 1 (V1) and Video Track 2 (V2) of your timeline and applying some crop and reposition effects to each. A PIP inset is done in a similar fashion, whereby you reduce and reposition the video clip on V2 (Figure 7.5). This technique is used frequently on TV news broadcasts.
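The cropping and repositioning behind a PIP inset is really just a little frame geometry. Here is a minimal Python sketch of that math, assuming a full HD sequence; the scale, margin, and corner parameters are illustrative, not taken from any specific application.

```python
# A minimal sketch of the scale-and-reposition math behind a picture-in-picture
# (PIP) inset on an upper video track. Frame size and parameters are examples.

FRAME_W, FRAME_H = 1920, 1080   # full HD sequence frame size

def pip_geometry(scale=0.25, margin=40, corner="top-right"):
    """Return (width, height, x, y) of a PIP inset placed near one corner."""
    w, h = int(FRAME_W * scale), int(FRAME_H * scale)
    x = FRAME_W - w - margin if "right" in corner else margin
    y = margin if "top" in corner else FRAME_H - h - margin
    return w, h, x, y

print(pip_geometry())                                    # quarter-size inset, top-right
print(pip_geometry(scale=0.5, margin=0, corner="top-left"))  # half-size inset, top-left
```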

fig7_5.jpg

FIGURE 7.5 Composite edits are made out of multiple video layers. These examples show a split-screen phone conversation and a picture-in-picture over a background video track.

Each video track in use counts as another data stream that needs to be processed. The more compositing of video tracks you do in your sequence, the more difficulty the computer system will have playing back the multiple media files; this is especially true for high-definition media. Most straightforward fictional narrative motion pictures will not need very much composite editing, but any projects that could benefit from multiple images or composited elements will need this approach. Music videos, sci-fi or fantasy films, commercials, and even corporate promotional and wedding videos may call on split-screen effects, PIPs, or composited visual effects (VFX). A render (see next section) may be required in order for the more complex stack of video clips to play back in sync on your system. Otherwise, you may experience sluggish, stuttering playback or dropped frames.

Rendering

Regardless of the video-editing software that you use, some video or audio elements in the timeline will eventually need to be rendered during an editing session. Rendering is the creation of new, simplified media files that are based on the merged result of numerous affected clip references in your timeline. Rendering will allow the system to play back complex effects or composites more easily. Typically, if you have created a complicated visual effects composite of six HD media streams (V1 up to V6), the system may have difficulty playing it in real time. The software is trying to heavily manipulate each pixel of each frame of each layer in the composite all at the same time. If you render the affected clips, the system creates a brand new single “merged” media file that shows the result of all effects in the composite.

These new and independent rendered media files do get referenced during playback of your sequence, but they have not deleted or replaced the original clip references that your system was struggling to play back. Those original clips still live in your bins. Be aware that rendered clips are, in essence, “physical” data and they do fill up space on your media drives. Also, if you make a change to any content in a rendered clip or effect composite, it “unrenders” or reverts to the original complex media file references. Rendering times vary depending on the power of your computer system’s processors and amount of RAM, the complexity of manipulation by the effects, and the complexity of the original source media. Don’t be surprised if it takes a while.

Chromakey

When you watch a meteorologist deliver the weather report in front of a large radar image showing swirling patterns of cloud movement, you are seeing the result of a chromakey. You may be more familiar with the terms “green screen” or “blue screen.” These names refer to the same process, whereby a certain color (chroma) is “keyed out” or made invisible and removed from a video image. Post-production software is used to select that particular color (most often green or blue) and turn it invisible while the remaining pixel data in the image is untouched. This layer of “keyed” video becomes the foreground element (the weather person clip on V2) or the top layer in a composite. Then, some other video image (clouds on the radar clip on V1) is placed on the layer below to become the visible background of the new composited video image.

Although you could key out any color in your video clips, the colors green and blue are most often used because their ranges of hue are not normally present in the skin or hair of human beings. Be advised that people with green or blue eyes will have their irises disappear if a similar green or blue chroma-screen color was used during production. Another type of key, known as a luma-key, uses a low black video voltage (sometimes called “super black”) to make this transparency. You may find this used on graphic elements where the super-black pixels can be turned invisible by your editing software while “normal” video black pixels remain unaffected. You could also create these still graphics with the alpha channel instead of creating super-black luma-keys.
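Conceptually, a keyer builds a transparency mask wherever the chosen chroma dominates a pixel. The simplified Python/NumPy sketch below shows the idea for a green screen; real keyers add soft thresholds, spill suppression, and edge blending, and the dominance threshold here is just an illustrative value.

```python
import numpy as np

# A simplified chromakey, assuming 8-bit RGB frames stored as NumPy arrays.
# Real keyers use softer thresholds, spill suppression, and edge blending.

def green_screen_key(foreground, background, dominance=40):
    """Replace strongly green pixels in the foreground with the background."""
    fg = foreground.astype(np.int16)
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    # A pixel is "keyed out" when green clearly dominates both red and blue.
    key_mask = (g - np.maximum(r, b)) > dominance
    out = foreground.copy()
    out[key_mask] = background[key_mask]
    return out

# Tiny synthetic example: a 2x2 foreground with one pure-green pixel.
fg = np.array([[[200, 40, 40], [0, 255, 0]],
               [[40, 40, 200], [90, 90, 90]]], dtype=np.uint8)
bg = np.full_like(fg, 128)              # flat gray background layer
print(green_screen_key(fg, bg)[0, 1])   # the green pixel becomes [128 128 128]
```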

fig7_6.jpg

FIGURE 7.6 Most video-editing software applications have some form of chromakey effect. Here, a green sheet was keyed out to create the composite of the puppet over the old photograph background layer.

Video Resolution

A video image is made up of a grid of tiny boxes, each filled with particular brightness and color data. These tiny boxes are called pixels, which is shorthand for “picture elements.” The more pixels of data you have in your image, the higher the resolution it will have. A high resolution means a greater ability to show more precise detail. High-definition (HD) video, in today’s technology market, is a moderately high-resolution capture and playback format for digital motion imaging on television and the web. 4K and 8K video push these pixel counts even higher.

Video resolution is customarily represented by two numbers (#A x #B). The first number describes the quantity of pixel units arranged horizontally in a single row across the screen’s grid, left to right. The second number indicates the quantity of these rows stacked up from the bottom of the screen to the top (i.e., pixels in a vertical column). Full HDTV is represented by the resolution 1920 x 1080. This means that there are 1920 tiny pixel boxes arranged horizontally from the far left of the image to the far right of the image in a single row. Then there are 1080 rows stacked up from the bottom of the frame to the top. Simple math tells us that a total of 2,073,600 pixels are used to make up the entire image.

In comparison, old-school standard-definition digital video has an image resolution of 720 x 480 (North American NTSC-DV) or 720 x 576 (European PAL-DV), with overall pixel counts around one-fifth to one-sixth of that of full HDTV. As a quick technical note, SD video uses rectangular, or non-square, pixels, while HD, graphics-editing software, and computer monitors all use square pixels for the creation and display of images.

Today, ultra-high-definition television uses a screen resolution of 3840 x 2160 (approximately four times the image data of full HD). Soon, 8K UHDTVs will display even more information. When will more ever be enough?
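The pixel arithmetic above is easy to check for yourself. A tiny Python sketch, using the resolutions mentioned in this chapter, tallies the per-frame pixel counts and compares them with full HD.

```python
# Simple pixel-count arithmetic for the resolutions discussed in this chapter.

formats = {
    "NTSC-DV (SD)": (720, 480),
    "PAL-DV (SD)":  (720, 576),
    "Full HD":      (1920, 1080),
    "UHD (4K)":     (3840, 2160),
    "UHD (8K)":     (7680, 4320),
}

full_hd_pixels = 1920 * 1080  # 2,073,600 pixels per frame

for name, (w, h) in formats.items():
    pixels = w * h
    print(f"{name}: {w} x {h} = {pixels:,} pixels "
          f"({pixels / full_hd_pixels:.2f}x full HD)")
```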

More pixels may mean more detail, but it also means more data per frame to be analyzed, processed, and displayed by your video-editing computer system. Even a robust system can handle only so much information in the processing pipeline (hard drives, processor, RAM, graphics card), so editing with full-resolution video can slow things down. Most editing software allows you the option to convert your original files to a more compressed version. The pixel count will remain the same, but the amount of data represented in each pixel “region” is filtered and averaged down so that the system does not have to think as hard. For playback as streaming web videos, the compression may also involve converting to a lower frame resolution, so that the final video media file has a smaller overall size in megabytes. An accompanying loss of visual quality may also be detected in the smaller video file.
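To get a feel for why those extra pixels tax the processing pipeline, here is a rough, back-of-the-envelope Python calculation of uncompressed data rates. It assumes 8-bit RGB frames, which real camera codecs compress dramatically, and the 960 x 540 proxy size is just an illustrative choice; the numbers only suggest the scale involved.

```python
# Back-of-the-envelope data rates for uncompressed 8-bit RGB video.
# Real codecs reduce these figures dramatically; this only shows the scale.

def uncompressed_mb_per_second(width, height, fps, bytes_per_pixel=3):
    return width * height * bytes_per_pixel * fps / (1024 * 1024)

print(f"Full HD 30p       : {uncompressed_mb_per_second(1920, 1080, 30):.0f} MB/s")
print(f"UHD 4K 30p        : {uncompressed_mb_per_second(3840, 2160, 30):.0f} MB/s")
print(f"Proxy 960x540 30p : {uncompressed_mb_per_second(960, 540, 30):.0f} MB/s")
```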

TABLE 7.1 The Aspect Ratio and Pixel Resolution of Video Formats

table

Additional Editing Topics

Sound Editing

Earlier, in Chapter Five, we discussed the teakettle whistle as a motivator for the edit. Beyond being a source of motivation, the audio track is a powerful component within the storytelling architecture. It can help to underscore the surface reality of the scene. Consider the following example from a headache relief medicine commercial. A woman is seen sitting in a noisy business office in a wide shot. There is a cut to her in an MCU. The audience would expect the noisy business office ambience to continue to be heard under the new MCU of the woman. If they are not presented with that continuity of audio, they could be pulled out of the viewing experience, wondering what happened to the sounds. The cut would draw attention to itself in a negative way.

What if the same scenario occurs, except that this time there is no office noise after the cut, only dreamy, ethereal music is heard? This new audio information seems to match the calm look on the woman’s face as the medicine takes effect. This peaceful music is her internal soundtrack: it is representational. The audience are given new information about the internal mental or emotional state of this woman. Within all of the office craziness, the medicine is now helping her to stay calm and collected in a near-meditative state. The editor has used sound to draw positive attention to the transition through providing the audience, and the woman, with a break from the office noises (Figure 7.7).

fig7_7.jpg

FIGURE 7.7 The noisy office sounds that caused the woman’s headache drop away at the cut to the MCU and mix into ethereal music. This changeover on the audio track reflects her inner peace after she’s taken the medicine.

Sounds can also make statements that go against the visuals being presented to the viewer. Consider the following example. You have an interior medium two-shot of a man telling his friend that he is “going to hunt for a job.” The roar of a lion is heard on the audio track and a picture transition takes us to a wide shot of a crowded, bustling city street during commuter rush hour. Animal noises are mixed in the audio tracks along with the busy street ambience. The character from the previous shot, the job hunter, is now out in the wild on the hunt for a new job. Normally, the audience might wonder about animal sounds playing over a busy city street but, because they follow the context of the job hunt, the otherwise out-of-context animal sounds actually become story-enhancing sounds (Figure 7.8). They underscore the metaphorical big-city “jungle” theme being explored in the story.

fig7_8.jpg

FIGURE 7.8 The lion’s roar bridges Shot A to Shot B, which continues the animal sound metaphor. The third image shows what the clips of this sound bridge might look like in your sequence timeline.

As with our train whistle example in Chapter Five, the lion’s roar from the sound metaphor above presents another sound bridge. The roar takes us from one shot into another. In these examples, the sound of the next shot is heard before the picture of the next shot. We call this “sound leading picture.” The opposite holds true as well. You may have the sound of Shot A carry on under the newly visible picture of Shot B. We call this picture leading sound. Perhaps you have a wide shot of a man dropping a bowling ball on his foot. As he yelps in surprise, you cut to an extreme long shot of treetops in the forest. As the sound of his yelp continues under the new picture of treetops, flocks of birds fly up and away as if startled into flight by the man’s cry carried so far across the countryside (Figure 7.9).

The editing practice of having either picture or sound start early or end late is known as creating a split edit, an L-cut, a J-cut, or lapping. Picture and sound tracks are really separate media streams when they live inside your editing software. In most scenarios, they will play in sync and be placed in the timeline together. It is easy to see how you then might end and begin both picture and sound track(s) for two shots at the same moment in time. This is called a straight cut or a butt-cut.
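One way to picture what a split edit does under the hood is to think of the picture and sound tracks as carrying separate cut points for the same clip. The hypothetical Python sketch below represents a straight cut, an L-cut, and a J-cut this way; the clip names, frame numbers, and 15-frame overlap are invented for illustration.

```python
# A hypothetical sketch of how a timeline might represent a straight cut
# versus an L-cut or J-cut. Clip names and frame numbers are illustrative.

from dataclasses import dataclass

@dataclass
class Edit:
    clip: str
    video_cut: int   # sequence frame where this clip's picture begins
    audio_cut: int   # sequence frame where this clip's audio begins

# Straight cut (butt-cut): picture and sound change at the same frame.
straight = [Edit("Shot A", 0, 0), Edit("Shot B", 120, 120)]

# L-cut: Shot A's audio laps 15 frames under Shot B's incoming picture.
l_cut = [Edit("Shot A", 0, 0), Edit("Shot B", 120, 135)]

# J-cut: Shot B's audio leads its picture by 15 frames (a sound bridge).
j_cut = [Edit("Shot A", 0, 0), Edit("Shot B", 120, 105)]

for name, seq in [("straight cut", straight), ("L-cut", l_cut), ("J-cut", j_cut)]:
    offset = seq[1].audio_cut - seq[1].video_cut
    print(f"{name}: audio offset of {offset} frames at the transition")
```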

fig7_9.jpg

FIGURE 7.9 The yelp of the man bridges across the edit and laps under the split picture track.

Assembly edits, and maybe the rough cuts of your sequence, will most likely be all straight cuts. As soon as you start to finesse the edits in the fine cut, you may find that offsetting the cut point for the picture or the sound is advantageous, or sometimes necessary, especially during dialogue editing. You are making split edits for creative purposes. One track leads or follows the other (Figure 7.10). When done correctly, these split edits can make your transitions either very engaging or very smooth for the audience. When done incorrectly, they can put the brakes on pretty quickly.

In dialogue scenes, a J-cut will also provide aural motivation for the picture track to cut to the person speaking. Many believe that this practice emulates how we hear and see in our real lives. Picture yourself having a conversation with a friend in a cafeteria. You suddenly hear the voice of another friend who is coming to join you. You turn your head to see your other friend as she approaches the table where you are sitting. In this scenario, you hear your friend’s voice (she is “off screen”) before you see her approaching (the “cut” to her MLS). Record these actions with a camera and edit them, with the J-cut audio tracks, just like it happens in real life.

fig7_10.jpg

FIGURE 7.10 An example of a butt-cut becoming a split-edit L-cut.

Color Correction/Color Grading

A key component to the finishing phase of post-production, the color correction process allows for an editor (or a special technician known as a colorist) to tweak the brightness, contrast, and color of the final clips in the video sequence. The term “color correction” is usually applied to making major changes in the exposure range, contrast levels, and/or colors of an image that appear wrong on the screen or inappropriate for the story being shown (brightening an underexposed clip, removing too much blue near the windows on an interior shoot, or removing the green on a person’s face from the bounce light in a midday, grassy exterior shoot). A color-neutral gray scale, good contrast, and scene-appropriate hues are the main goals. The term “color grading” (or “color timing”) is also used during this phase, and refers to making creative alterations to the contrast and color values so that shots match or to manipulating the imagery so that a special “look” can be achieved (dark and moody; desaturated color palette; bright and super-saturated, cartoon-like colors, etc.).

Whether it is spot correction or overall project grading, the main goal of this process is to get the video to have the desired look. Both technical and aesthetic reasons can drive a post-production team to make these choices. It is much more complicated than this, but video cameras basically capture and record light information as electronic voltages based on the quantity of light energy obtained from the different wavelengths of the visible spectrum. If the film set is dark, then there isn’t much energy (voltage or signal) present to record the scene and it may appear dark with weak color data. If the film set is very, very bright (there is a lot of energy and therefore voltage), the video image may appear too bright and washed out.

As an editor, you would most often hope that the production team have recorded images that represent the full spectrum of shadows, mid-tones, and highlights, and have adequate color saturation. When you have an appropriate amount of energy to manipulate, the color correction tools in your video-editing software can do more with the signal and yield the desired look of the project. When the original media file is either too dark or too bright, there is often not much that can be done to really improve the overall look of the image because the data that the correction tools need to manipulate was never captured in the encoded video file in the first place. Some very high-end video cameras can output sensor data at the equivalent of “camera RAW” (uncompressed image format) and software will have a greater chance of manipulating the imagery’s tonal range and color data.

There are two main parts to the video signal: luminance and chrominance. The luminance refers to the light levels or the overall brightness and contrast of an image. Gauged along the gray scale, an image with good contrast should have areas that are representative of dark shadows, mid-tones of gray, and bright highlights. A high-contrast image will have both very strong dark areas and bright areas but will have very few mid-tone gray elements between. A low-contrast image will have the reverse: mostly mid-tone gray with very few truly black or truly white areas in the frame. During color correction, it is traditional to first establish the black level of the video signal (the “set-up”), then the white level (the “gain”), and finally the gray mid-tones (the “gamma”). With your contrast levels set, you would then move on to making color (or hue) adjustments.

Keeping it simple, chrominance refers to color values, which are manipulated by different tools in the software. They allow you control over actual color or hue values from the color spectrum. They also allow you control over the voltage of a color, or its saturation level. Desaturating a video image (removing all color voltages) turns it black and white. Saturating an image (driving up the voltages or boosting the color “energy” signals) amplifies the color content and can make colors look unnaturally “thick” or “deep” like a candy apple red. The voltages, for both luminance and chrominance values, should not be over-driven or under-driven or they can compromise the quality of the video playback on electronic displays such as television and computer screens. There are devices called video scopes that help you to measure and analyze the video signal and to keep it within the “legal” zone.
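As a loose illustration of the set-up/gamma/gain order and of saturation control, here is a simplified Python/NumPy sketch that works on pixel values normalized between 0 and 1. Real color tools operate per channel in calibrated color spaces and offer far finer control; the lift, gamma, gain, and saturation values here are purely illustrative.

```python
import numpy as np

# A simplified sketch of set-up (lift), gamma, and gain adjustments, followed
# by a saturation control. Pixel values are normalized to the 0-1 range and
# the parameter values are purely illustrative.

def adjust_contrast(pixels, lift=0.0, gamma=1.0, gain=1.0):
    """Set the black level (lift), bend the mid-tones (gamma), then scale the whites (gain)."""
    out = np.clip(pixels + lift, 0.0, 1.0)   # set-up: raise or lower the blacks
    out = out ** (1.0 / gamma)               # gamma: adjust the gray mid-tones
    out = np.clip(out * gain, 0.0, 1.0)      # gain: scale the white level
    return out

def adjust_saturation(rgb, amount=1.0):
    """0.0 = fully desaturated (black and white), 1.0 = unchanged, >1.0 = boosted."""
    gray = rgb.mean(axis=-1, keepdims=True)
    return np.clip(gray + (rgb - gray) * amount, 0.0, 1.0)

frame = np.array([[[0.1, 0.4, 0.7], [0.9, 0.5, 0.2]]])  # a tiny 1x2 "frame"
print(adjust_contrast(frame, lift=0.02, gamma=1.1, gain=0.95))
print(adjust_saturation(frame, amount=0.0))  # grayscale version of the frame
```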

Color correction provides you the opportunity to make bad video look better and to creatively “paint” a contrast and color “look” for your overall video project. You can work your way through your sequence doing shot-for-shot and scene-for-scene correction and grading. The flesh tones of your subjects should look appropriate for who or what they are, the color palette should look as you wish, and the shadows and bright areas should be where you want them. These changes are best achieved through the use of properly calibrated color monitors and digital, HD, or UHD luma and chroma scopes. These extra devices are usually very expensive, so if you are working on your own projects, you may have to make do with the built-in tools available in your video-editing or color-correcting software. If your edited program is ever going to air on television, then you would absolutely want to color correct your sequence, but any video project, even just for web distribution, can benefit from the tweaks of the color correction phase of post-production.

Importing Still Images

Common still photographic digital cameras are capable of recording very high-resolution still imagery. As an example, you may have a still photographic digital file that is approximately 3500 x 2300 pixels. Even if you are editing in a full HD 1920 x 1080 video project, the video resolution (and frame size) is much smaller than the still photo you want to add to your sequence. The still photo file, pixel for pixel, is larger than the active picture area of the full HD video and would have to be scaled down to fit. What if you have downloaded an image file from a website (like a JPEG) and you intend to add that file to your timeline? The web image is only 200 x 200 pixels – far smaller than the HD video frame. Scaling the tiny image up to fit closer to the HD picture frame will make it look terrible. Each video-editing application handles the import and still photo conversion process differently.

As a video editor, you may be asked to add such still photographic images to your project. You can use a photo-editing application to make cropped copies of the original photos so they take on the pixel dimension and frame size of your video project (i.e., a 1920 x 1080 cropped photo will import and fit into your full HD video sequence). Depending on your video-editing software, the imported file may come into your project as a reference to your full-resolution original photo, and you can use a filter/effect to scale and reposition the larger photo so that it fits into your video frame area. Other applications can be set to import converted versions of the original file and force it to resize to fit within the video frame, either on the horizontal or vertical axis, depending on the dimensions of the original photo file. Very small photos (like 200 x 200) will not scale well (because expanding pixel data degrades image quality), and if you need to incorporate them, you should know that they will remain very small or look very bad.
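The “fit within the frame” resizing described above boils down to choosing the smaller of the two possible scale factors so the whole photo stays visible. Here is a minimal Python sketch of that calculation, using the 3500 x 2300 photo and 200 x 200 web image from the examples above.

```python
# A minimal sketch of "fit within the frame" scaling for imported stills,
# preserving the photo's aspect ratio. Sizes come from the examples above.

def fit_within(photo_w, photo_h, frame_w=1920, frame_h=1080):
    """Return the scale factor and resulting size that fit inside the frame."""
    scale = min(frame_w / photo_w, frame_h / photo_h)
    return scale, (round(photo_w * scale), round(photo_h * scale))

print(fit_within(3500, 2300))   # large camera photo: scaled down to fit the frame height
print(fit_within(200, 200))     # tiny web JPEG: scaling it up this far degrades quality
```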

Additionally, a still photo is only one frame, but it does not play as only one frame in the video sequence. That would be a meager blip of 1/30 of a second or so. The import process may create new media of a longer duration where that one still frame is replicated to play over and over and over again as multiple instances of the same frame. So, if you need three seconds of that imported still image in your sequence, you can edit a clip segment of three seconds into your sequence.

Most higher-end photo manipulation software will allow you to create multi-layered still images or masked (cut-out) images with the alpha channel. Much as video has the red, green, and blue color channels (RGB), still photographs may exist in that color space as well. The fourth channel, alpha, is like a transparency switch that makes all color data for that specific pixel turn off or go invisible. The chromakey that we explored earlier is like this. Be aware that a true alpha channel can only be saved in certain kinds of image files – .TIFF being one of the most universally popular. Often, the video-editing application needs to be made aware that the alpha channel is present so that when a masked image is imported, the background (or masked region with alpha) remains invisible, and the video elements on the track below in the sequence will be seen.
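The alpha channel’s role in a composite can be summed up in one small formula: each output pixel is the foreground weighted by its alpha plus the background weighted by the remainder. The minimal Python/NumPy sketch below shows this “over” operation, assuming RGBA values normalized between 0 and 1; real applications also handle premultiplied alpha and different bit depths.

```python
import numpy as np

# A minimal sketch of how an alpha channel drives transparency when a masked
# graphic sits on the track above a video layer ("over" compositing). Assumes
# RGBA values normalized to 0-1.

def alpha_over(foreground_rgba, background_rgb):
    """Composite an RGBA foreground over an RGB background."""
    fg_rgb = foreground_rgba[..., :3]
    alpha = foreground_rgba[..., 3:4]        # 1.0 = opaque, 0.0 = invisible
    return fg_rgb * alpha + background_rgb * (1.0 - alpha)

# One opaque red pixel and one fully transparent (masked-out) pixel.
fg = np.array([[[1.0, 0.0, 0.0, 1.0], [1.0, 0.0, 0.0, 0.0]]])
bg = np.array([[[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]])   # blue background layer
print(alpha_over(fg, bg))   # [[red pixel, blue (background) pixel]]
```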

Digital Workflow

The power, flexibility, and relative convenience that editing software has brought to the world of visual media creation are undeniable. It is not all fun and games, however. There is a responsibility, often placed upon the editor’s shoulders, to be organized and knowledgeable about file types, media assets, and the interoperability of various software applications. This creation, storage, and sharing of video and audio materials is generally referred to as workflow. It is increasingly rare for post-production workflows to incorporate analog source materials (like physical strips of motion picture film prints or analog videotape). Most independent software users and post-production facilities are now deep into the digital workflow, where all video and audio elements needed for a story are created, stored, used, and authored as digital files on computers.

Today’s multiplicity of digital video cameras use many different encoders and file types to create their amazing high-resolution imagery. Not every version of digital video-editing software uses the same types of files and not everyone encodes, stores, or accesses these files in the same way. A modern editor, in addition to being a great storyteller, must also be knowledgeable about these file types, feel comfortable with general computer functions (on multiple operating systems), and understand media drives, cloud networking, etc.

A wise film production team will have plotted out and verified the file-type workflow through the post-production process before they record their first take. The user-assigned details can also make a huge difference in how smooth the workflow becomes. Naming conventions differ from place to place, but logic dictates that developing a sensible and concise method for naming projects, bins, sequences, clips, folders, graphics, source tapes, etc., is essential. With clarity in your naming and organization of your raw, digital materials, you are already well on your way to a more efficient digital workflow for the life of your editing project.

Technology vs. Creativity

Film editing has been around for over 100 years and videotape editing for about 50. Computer-aided or digital non-linear editing has its origins around 1990. So whether it was scissors and glue, videotape decks, or computers, the one thing that all of these techniques/tools have in common is that they are all a means of assembling a story of some kind – a story that needs to be experienced by an audience that will be informed, manipulated, and entertained by the content and style of the assembled motion media piece. The editors, over the years, have been the skilled craftspeople, technicians, and storytellers who used these devices and techniques to show those stories to those audiences. The tools that they have used have simply been a means to an end.

Today, in the digital age, the tools are different, but the skills of those who use them remain the same, or at least they should. Unfortunately, many people get lost in the nuances of the latest editing application and let their knowledge of the software’s functionality overshadow the importance of their storytelling abilities. Or worse, they get so caught up in learning the latest bell or whistle in a specific editing application that they forget their primary goal as storytellers. No one should confuse button-clicking abilities with solid editing skills.

There is a wide variety of video-editing software available on the market today. Several applications are of professional quality and are used by high-end post-production facilities, television networks, movie studios, and the like. Many are geared for more in-home/consumer use and have fewer capabilities. Knowing how to use several of these applications will benefit the newbie editor. Knowing one is a great start, but a working knowledge of several of the high-end editing applications will expand your job prospects considerably. The important thing to remember is that no matter what tool you end up using to perform the actual edit, at that point in the filmmaking process, you are in control and you are the creative force behind the crafting of the story.

Chapter Seven – Final Thoughts: Old Techniques Done with New Technologies

Montage, parallel editing, and even composite effects shots have been around for over a century; the new tools that make these things happen and the ease of use of digital technologies have allowed so many different techniques to be used in so many different types of motion media pieces. Live action movies, TV shows, and web videos are all “time-based” motion pictures – as are animated cartoons. Even though our frames are now progressive, digital images rather than physical strips of film, the use of timecode still helps us to keep track of durations and sync in the construction of these time-based media pieces. Editors need to be knowledgeable about the oldest of motion picture editing techniques while always staying on top of the latest technological advancements and workflows that will help them to show their stories tomorrow.

Chapter Seven – Review

1. Timecode from original sources allows you to maintain sync between your picture and sound tracks within your edited sequence.

2. A montage sequence is a series of quick clips, usually accompanied by music, that shows a condensed version of story-related actions and events that would normally happen over a longer period of story time.

3. Parallel editing cuts two simultaneous storylines together so that concurrent action can be seen by the viewer at one time during the program. This is usually done for action sequences.

4. Multi-camera editing allows you to edit footage of the same event captured by several cameras all at the same time. It is useful for sports events, rock concerts, soap operas, dance sequences, and staged situation comedy television programs.

5. Composite editing refers to creating a stack of video sources in a sequence where the application of filters or effects allows you to see portions of all of the clips at once in one frame.

6. Depending on your video-editing software and hardware performance, you may need to render more complex HD video clips or composited visual effects segments in your timeline. The software generates new, easier-to-play, single-stream media files.

7. Chromakey effects remove a particular color value or shades of a hue (most often green or blue) from a video image and allow the transparency of that color region to show video elements on the lower video tracks of the timeline.

8. All video formats have a screen dimension referenced by its width in pixel units by its height in lines or rows of pixels. For instance, full HDTV has a 1920 x 1080 video resolution, but UHDTV has a 3840 x 2160 resolution.

9. Split edits, L-cuts, and J-cuts change the start time of the picture track or sync sound tracks for a clip in the timeline. If picture leads sound, then the audience see a cut to the next video clip while still listening to the audio from the previous outgoing clip. If sound leads picture (often referred to as a sound bridge), then the audience hear the sounds of the next incoming shot underneath the video that they are still watching in the current clip.

10. Sound, whether matching the visual elements or contrary to them, is a great tool to enhance meaning in the story and to engage your audience on a different sensory level.

11. The goal of the color correction process or color-grading phase of post-production is to allow the editor to make all shots in the sequence look as they should for the needs of the program. Basic contrast and color balancing helps the images to look better, both technically and aesthetically. Special color treatments or “looks” may be added (e.g., a cold environment can be given a steely blue/gray color treatment; a hot desert may be made to look extra “warm” orange-amber).

12. Digital still images may be imported into your video-editing project, but be aware of the dimensions of the image versus the dimensions of your video project’s format. Images from the web may be smaller, and images from digital still cameras may be larger, but few will exactly match the frame size of your video. Cropping, resizing, and repositioning may all be required.

13. Become familiar and comfortable with the workflow needed for managing digital media assets and the creation of computer-based video edits. Thinking ahead and knowing your end result can save you some big headaches.

14. As an editor, do not let the complexity of video-editing software get in the way of your good decision-making abilities and solid storytelling skills.

Chapter Seven – Exercises

1. Create a montage sequence. Record video of a school or family event or simply the “happenings” in your town. Review the footage and pull your favorite clips. Assemble a montage sequence and add a piece of music appropriate to the mood and pacing of the visual elements.

2. Practice parallel editing. Create a scenario where a person is inside a location searching for something important and another person is outside, making his or her way to that location to pick up that very same important item. Cut the coverage of each individual as separate mini-sequences and get a feel for how they play back to back. Now create a new sequence using parallel-editing (cross-cutting) techniques with straight cuts – nothing fancy. Watch this new sequence for pacing. Does it generate tension and suspense? If not, how could you tweak the shot selection and the timing of the shots to make it more suspenseful?

3. Use a duplicate version of your final cross-cut sequence from Exercise 2 and see where you can create effective split edits or L-cuts on the audio tracks. Then color correct each clip in the sequence so that it has good contrast and a neutral or balanced color spectrum.

Chapter Seven – Quiz Yourself

1. What information is represented by the following timecode: 01:06:37:14?

2. Do you think a dissolve could be considered composite editing even though it may occur on a single video track? If so, why?

3. This may depend on your editing software, but can rendering an effect clip in your timeline generate new media on your media drive? If you change something about that rendered video clip, will it “unrender?” Will it delete that newly created rendered media file?

4. What are the two most common colors used for creating chromakey transparencies in video production?

5. What does it mean that full HDTV is 1920 x 1080?

6. Should you first address the contrast (blacks, whites, and grays) of a video image or the color values when grading the shots in your timeline?

7. True or false: most digital still images that you import will have the exact same frame dimensions as the video format in your editing project.

8. What is a PIP and when and why might you employ it in a video project?

9. How might the technique of cross-cutting help to show a story more dynamically?

10. True or false: “video scope” is just another term for an old TV set.
