Chapter 12

Video Manipulation

Talking about Kubrick’s 2001 in the last chapter leads us on very well to the topic of video manipulation, full as the film is of visual and video effects.

Box of Delights—It’s Playtime, Guys!

Let me first define the difference between visual and video effects. Visual effects (commonly shortened to Vid FX or VFX) are basically special effects that can be created in front of the camera and therefore include stuff like explosions, crashes, fire, water, rain, snow, and so on. Video effects (also commonly shortened to Vid FX or VFX, hence the confusion) include all forms of video manipulation performed on the video signal outside the context of a live-action shoot, and therefore after the camera has stopped turning. That phrase ‘outside the context of a live-action shoot’ almost certainly means you and your edit suite. The terms visual and video are becoming increasingly interchangeable today as more and more effects involve the integration of live-action footage (or visual effects) with computer-generated imagery (or video effects) in order to create composite images which would simply be impossible to photograph directly. I think I have confused myself now!

The Name of the Game—Let’s Get the Terminology Right First

Let’s make a start on the area that most concerns us—video effects. Video effects have become increasingly common (if a little over-relied on) in many recent movies. With the introduction of affordable animation and compositing software, even the more complex of these techniques has become accessible to the amateur filmmaker. However, these are increasingly specialised areas of filmmaking and are somewhat outside the remit of this book.

Instead, we will concentrate on simpler effects that you will regularly need in your editing work.

This chapter is divided as follows:

12.1 Motion Effects
12.2 Keying
12.3 Types of Key—Luminance, Chroma, and Matte
12.4 Graphics
12.5 Multiple Images
12.6 Multilayer Video Sticking Plaster
12.7 Colour Correction
12.8 Effects—Those You Should Carry with You
12.9 Key Points—Video Manipulation

12.1 Motion Effects

The Fast Show—Speeding Pictures Up

The technique of speeding pictures up is most often seen in daytime or children’s programmes where it’s commonly used to make the programme more ‘cool’ and perhaps more visually interesting than it really is. Constantly changing the speed of a sequence is a lazy way of editing, and it is usually forced on you by the absence of cutaways. However, I do accept that, because of the nature of the material, the more conventional way of editing such footage is out of the question. I suppose a smarter alternative would be jump cuts.

Although I am not a huge fan, because a quick-cut montage can look so much better than simply pressing the accelerator pedal, I can see that speeding pictures up is a useful (and certainly easier) technique in some productions.

House of Cards (2013)—Sometimes It Works!

I’ve been a bit hard on fast forward, but all that changed when I saw the opening titles of the American version of House of Cards (2013), starring Kevin Spacey. That is indeed the way to do it! To be fair, I suppose this technique is more accurately described as ‘time-lapse’ photography, rather than the more crude description of just speeding the pictures up. The huge contrast between the fast images of clouds, traffic, and trains zooming by and the slow movements of the camera and the sun’s shadows works very well.

It’s Marty (BBC TV) (1968–9)—Everybody Back on the Coach!

I’m rapidly losing my argument about speeding pictures up with another example of just how well they can work sometimes—here to create laughter. Take a look at Marty Feldman in ‘The Lightning Coach Tour’ from his BBC series It’s Marty (1968–9).

I remember seeing this sketch when I was still at school and all of us raving about it the next day. The speeded-up chatter of the coach party with Marty’s repeated cries of ‘Wait for me’ as they tear around the countryside to the seaside and back is hilarious. After they’ve been to the pub for lunch, the sequence of repeated roadside ‘reliefs’ is a gem. You can clearly see the influence of Feldman’s hero, Buster Keaton, in this sketch.

The series was produced by the great Dennis Main Wilson, who I was fortunate enough to work with (however briefly), and it was written by Marty Feldman and Barry Took. The coach tour operator was played by John Junkin.

The Benny Hill Show—Okay, I Give In!

Right from the days of the Keystone Kops, comedy performers and directors have used the technique of undercranking the camera to produce ‘funnier’ speeded-up pictures. As an example, where would Benny Hill be without a speeded-up closing sequence of him being chased by several scantily clad young ladies? Ah! The innocent (or guilty!) days of the 70s. Every technique has its use, I suppose.

I think this must be every editor’s blind spot: because we see speeded-up pictures every day while searching for that next perfect shot, we don’t find them at all funny anymore.

The Day the Earth Stood Still—Slowing Pictures Down

Slowing pictures down is done for far more stylistic reasons and will certainly produce much more creative results. Where would sport coverage be without slow-motion replays of a just-scored goal from several different viewpoints?

When I started at the BBC, there was only one machine capable of slow motion. Made by the Ampex Corporation, it was a magnetic disc-based machine that was only able to capture and process a total of 36 seconds of material. On a ‘Match of the Day’ football night, all the editors working on the programme would queue up (myself included) to offer our normal-speed material to ‘the disc’ and one by one would insert the now slowed-down material back into a prepared ‘hole’ in the match edit. The joys of linear editing! The commentators at the match that afternoon had to commentate blind over an imaginary replay and we would, now several hours later, have to try to fit a slow motion of the action to match these words. But hey, in the words of the coal-eating Monty Python sketch, ‘We were happy!’

Today, with high frame rate cameras, slow motion footage of sporting moments can look stunning as individual blades of grass part when an 8-iron swishes by or beads of sweat fall from clashing heads in the penalty area.

Slow motion can also be used in a less analytical mode and instead create a romantic or suspenseful atmosphere in order to punctuate a moment in time. Used well, you can create a depth to pictures which was never present in the same shots played at normal speed.

I have to say that slowing down pictures shot at only 25 FPS can have disappointing results; it just depends on their content. Software’s ability to ‘invent’ frames in between the captured ones is always improving, but just beware of the technique’s limitations.
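
For those who like to peek under the bonnet, here is a tiny sketch in Python with NumPy (my choice, purely for illustration; the function name and everything about it are invented for the example) of the crudest form of frame ‘invention’: blending each neighbouring pair of frames to make an in-between one. The ghosting you get on fast movement is exactly why 25 FPS material can disappoint; proper retiming software estimates motion rather than just mixing.

import numpy as np

def double_frame_rate(frames):
    """Crude 2x slow motion: insert a 50/50 blend between each pair of frames.

    `frames` is assumed to be a list of HxWx3 uint8 NumPy arrays.
    Fast-moving content simply ghosts; real retiming tools estimate
    motion instead of mixing neighbouring frames.
    """
    slowed = []
    for a, b in zip(frames, frames[1:]):
        slowed.append(a)
        blend = (a.astype(np.float32) + b.astype(np.float32)) / 2.0
        slowed.append(blend.astype(np.uint8))
    slowed.append(frames[-1])
    return slowed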

Blackadder Goes Forth (1989)—Over the Top

Allow me to join Kubrick, Stone, and Welles, if only for a moment, in an example of my own here with the closing sequence from Blackadder Goes Forth (1989), where our heroes emerge from their trenches for the final time. Slow motion and long dissolves turned what was, by all accounts, an undershot scene into a heart-stopping moment rarely found in audience sitcoms. Alongside me in the edit suite that morning were Richard Boden (director), John Lloyd (producer), and Nial Brown (my assistant at the time). We had barely 10 seconds of useable material to fill 40 seconds of Howard Goodall’s music, which was designed to close the series. Slow motion was an obvious choice, I grant you, but what we didn’t expect was that suddenly, with extreme slow motion (about 6–10 FPS), the majesty and horror of such an event came to life (sorry, a strange choice of words, but you’ll forgive me). Explosions that at full speed looked like typical studio visual effects, spraying polystyrene in all directions, started to look like the real thing—dangerous, frightening, and capable of doing real harm—and our heroes were in the thick of it.

With the colour drained out, long and slow vision mixes joined the now slowed-down action together and got us to a shot of no-man’s land, hauntingly empty after the battle. Actually, even this shot was not used as intended, as I grabbed it from before the actors were cued into action. A visit by Nial to the BBC stills library produced several shots of a countryside covered in poppies. When I tried one of them, the hairs on the back of my neck literally stood up as our studio shot of the desolation of trench warfare in 1917 just seemed to melt into modern day tranquillity—the choice had been made. A drift back to colour to reveal the blood-red poppies, and with faint birdsong added after the end of the music, we were there.

In those four hours, we created a sequence that has become iconic and much talked about for all the right reasons. The edit suite used back then, comprising four timecode-controlled, one-inch Ampex analogue videotape machines, was capable of slow motion (obviously), but in this mode the timing of the slow motion was unprogrammable and unpredictable, the speed being controlled manually by crude linear faders. Chinagraph markings and many hands (yes, even the production team) on the BBC’s Electra editing panels brought the sequence to life. Because of the unpredictability of the slow motion and the generational restrictions of analogue machines, it had to work, to a large extent, in one pass, using all four machines carrying the different pictures, each being cued and run in turn.

A PERSONAL CONNECTION FOR ME TO THE FIRST WORLD WAR. HERE IS THE GRAVE OF MY GREAT-UNCLE, VERNON C. WADSWORTH, WHO DIED WHEN HE WAS 18, LYING WITH HIS FALLEN COMRADES FROM 1916 IN FLEURBAIX, NORTHERN FRANCE.

They Think It’s All Over!—Knowing When to Stop!

This brings us to a very important point that I can’t stress enough. It could be the most important point in this book!

The technology that we used to create the closing sequence of Blackadder Goes Forth had naturally imposed a point at which we were forced to stop. Nowadays, there is no such technological limit, and modern software will allow you to build layer upon layer, effect upon effect, with no limit whatsoever. It is totally up to you to know when to stop.

Just as an artist or composer has to eventually walk away from the work he or she is creating, so an editor has to know when to leave a sequence alone.

The key point is that today we might have easily ruined that Blackadder sequence for the crass and simple reason that we could.

BBC’S ELECTRA EDITING PANEL, SIMILAR TO THE ONE ON WHICH THE CLOSING SEQUENCE OF BLACKADDER GOES FORTH WAS EDITED.

The Cruel Sea (1953)—A Nice Bit of Movie Trivia

Before we leave motion effects, I rather like this bit of movie trivia which shows that, even when there has been a mistake and a shot has been missed or incorrectly filmed, there is sometimes a solution in the edit suite.

We have already looked at this sequence from The Cruel Sea (1953) when I was examining movie examples of action sequences, but I recently saw a clip of Peter Tanner, the film’s editor, talking about this incident, and I think its retelling is worthy of inclusion.

In the aftermath of the depth charge attack, there is a harrowing shot, apparently taken from the stern of Compass Rose, and all you see are the remains of some life jackets and other debris floating on the now calm surface of the sea, nothing else.

Tanner relates that after the filming had been completed, it was discovered that Charles Frend’s directions for this shot, for he had not been with the unit, had been misunderstood. The shot was mistakenly taken from the filming ship’s bow, with the camera looking forward and therefore heading toward the debris, thereby totally destroying the poignant intention of the shot. A reshoot was not feasible, given that it had to match perfectly with what had already been filmed. Luckily, the problem was solved in the cutting room by running the film backwards.

Brilliant! Tanner’s solution had worked! However, if you look carefully at the shot in the movie, you can see that the seagulls are flying backwards.

12.2 Keying

Here I look at the ways multiple video layers are used to create captions, graphics, and title sequences.

More bad news, I’m afraid: just as we had to study individual shots before we could look at sequences, here we have to get a few technical definitions out of the way before we get to the fun.

Through the Keyhole—Video Used as a Switch or Key

A key is a video signal that acts as a controlling switch between other video sources.

Thus, the key, or foreground picture, can be made to cut a hole in a background picture.

The source of the key can simply be the level of the foreground signal itself; this is known as luminance keying. Alternatively, you can use a colour contained in that foreground picture, and this is (not surprisingly) known as chroma keying.

Sometimes the controlling key is a separate source of video, and this is referred to as an external key or matte.

Let’s look at all these types of key in turn.

12.3 Types of Key—Luminance, Chroma, and Matte

Cutting Edge—Luminance Keying

Often used in simple caption work, luminance keying only uses two video layers in your timeline. Here, the caption or title graphic (placed on layer 2 or V2) uses itself, or more precisely the video level of itself, to cut a hole in the background picture (placed on layer 1 or V1) and fills that hole with itself or a colour.
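
If it helps to see the switching written down, here is a rough sketch in Python with NumPy (illustrative only; the function name, threshold, and softness values are plucked out of the air) of a luminance key: the brightness of the caption layer decides, pixel by pixel, whether you see the caption or the background.

import numpy as np

def luminance_key(caption, background, threshold=0.5, softness=0.1):
    """Simple luminance key: the caption's own brightness cuts the hole.

    `caption` (V2) and `background` (V1) are HxWx3 float arrays in 0-1.
    Pixels brighter than `threshold` show the caption, darker pixels show
    the background, with a soft ramp of width `softness` between them.
    """
    # Approximate luminance from RGB (the familiar 0.3 / 0.59 / 0.11 weighting).
    luma = 0.3 * caption[..., 0] + 0.59 * caption[..., 1] + 0.11 * caption[..., 2]
    key = np.clip((luma - threshold) / max(softness, 1e-6) + 0.5, 0.0, 1.0)
    key = key[..., np.newaxis]          # make the key broadcast over R, G, B
    return key * caption + (1.0 - key) * background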

The key can have a border (simply cut a bigger hole), or coloured border (fill the enlarged hole with a colour), or shadow (cut a bigger hole and offset it slightly), or soft edges (mix between the hole edges and the background), or opacity (mix between the filled hole and the background).

An effect that was once popular, mainly because it was one of the very limited camera-based effects in the early days of TV, is creating and using a ‘negative’ version of a picture which is then mixed, or luminance keyed, with the original. This can easily add a weird and frightening look to a shot or scene.

In this way, keying pictures with themselves can produce wonderful effects, especially if some adjustment to the size of the picture is made, thus creating a video howl-round not unlike pointing a camera at a monitor, à la the original Doctor Who titles of 1963.

The Green, Green Grass—Chroma Keying or Green Screen

Chroma keying or green screen is a technique where a colour, or a small range of colours, is removed from a foreground image and the affected regions are replaced with an alternative image. Here, the colour contained in the primary image acts as our switch (or key) to display a secondary image. This technique is also known as colour separation overlay (CSO), primarily by the BBC, but even they are coming round to the near universal terms of chroma key or green screen. Strangely, the term green screen is still used even if green is not the colour controlling the key.

Why green? Well, it’s the colour furthest away, in spectral terms, from any hue contained in the average human face. Blue and sometimes yellow can also be used depending on costume requirements because, if the actor wears green clothes and green is the keying colour, then his clothes will also be replaced with the background picture. Modern software is very good at being selective between ‘green screen’ green and other greens in the foreground, but to be on the safe side, the colour controlling the key is usually chosen to be different from any other foreground colour.
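
Here is the same idea sketched for chroma keying, again purely as an illustration (the tolerance and softness figures are invented for the example): the closer a pixel is to the chosen key colour, the more of the replacement background shows through. Real keyers work in colour-difference space and handle spill and wispy edges such as hair far better than this does.

import numpy as np

def chroma_key(foreground, background, key_colour=(0.0, 1.0, 0.0),
               tolerance=0.3, softness=0.1):
    """Very basic chroma key: pixels close to `key_colour` are replaced.

    `foreground` and `background` are HxWx3 float arrays in the range 0-1.
    Pixels within `tolerance` of the key colour show the background; a
    ramp of width `softness` softens the transition.
    """
    distance = np.linalg.norm(foreground - np.array(key_colour), axis=-1)
    # 1.0 where the pixel is clearly 'green screen', 0.0 where it clearly is not.
    key = np.clip((tolerance + softness - distance) / max(softness, 1e-6), 0.0, 1.0)
    key = key[..., np.newaxis]
    return key * background + (1.0 - key) * foreground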

The problem with editing green screen sequences is that an older computer can slow down as it processes that second stream of video. If you are on older equipment, edit the foreground action first with all that green, and when you are happy with the sequence, put in the intended backgrounds, adjust the key, hit render, and go for lunch. Come back after lunch, and change everything.

Here’s an exercise to prove you can do it: Exercise 52: Chroma Key.

Exercise 52: Chroma Key

Shots Involved:

1 WS 2S (12)

2 WS H’s House (03)

3 W2S rise (12)

4 Green TVs (16)

Dialogue:

NONE.

Exercise Aim:

The aim of this exercise is to put the three pictures into the three TVs.

Hint: TVs with green screens go on V4 and the three other pictures on V3, V2, and V1.

Then use ‘Picture in Picture’ to create a composite of V1, V2 and V3, which is presented to the V4 keying layer.

Don’t bother too much about aspect ratios and cropping, just get the pictures keying well.

Questions:

How did you get on?

Answers:

It’s a software thing really, but it will give you practice arranging the timeline in the correct manner before you are called upon to do this for real.

You can of course stretch this exercise out and make the pictures in the TVs turn off, one by one, to a bright line across the middle and then go to a dot as they used to. Ask your grandparents.

No, I didn’t bother either!

Outside Edge—External or Matte Keying

When an external key or matte is used, three layers (or streams) of video are involved—the background on V1, the fill (or foreground) on V2, and the key, switch, or matte on V3. Matte keying is more commonly used in higher end graphic sequences where a separate key or matte is provided to cut a hole in addition to a foreground that fills it.

An example of this in its simplest form might be a shaped strap, filled with a graphic, over which a name super can be written.
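
Written as a sketch (again Python with NumPy, and the function name is mine, not any software’s), the three-layer arrangement is simply this: white in the matte means ‘show the fill’, black means ‘show the background’.

import numpy as np

def matte_composite(background, fill, matte):
    """External (matte) key: a separate grey-scale image does the switching.

    `background` (V1) and `fill` (V2) are HxWx3 float arrays in 0-1;
    `matte` (V3) is an HxW float array in 0-1, where 1.0 reveals the fill
    and 0.0 reveals the background.
    """
    key = matte[..., np.newaxis]
    return key * fill + (1.0 - key) * background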

There is not much I can say about the technique of editing these sorts of effects together, other than to say they have to deserve their inclusion like everything else. Graphics can often cost the production a great deal of money, but this shouldn’t cloud any judgement on their final removal, if that becomes necessary.

Right, we are out of the classroom again, you’ll be glad to hear.

THE AVID EFFECTS PALETTE.

12.4 Graphics

Having talked about keying, let me say a few words about graphics. Today in TV, I would say 90% of all graphics work is done in the edit suite, and an editor is more and more called upon to write, type, design, and animate graphics as few programme budgets can afford a specialist graphics person any more, and certainly not for any length of time. The style may have been chosen elsewhere, but ‘you know who’ will have to make it work. As with other aspects of editing, it’s wise for you to have a few favourite styles up your sleeve so that you can quickly call on one of these when the question of captions or graphics arises.

Look and Read—Dealing with Captions

For most of the time, caption work is limited to adding a few name supers or maybe a closing roller. Easy stuff, I know, and not particularly creative, especially if you’re on programme 11 of series 4. Occasionally, you’ll have to be more inventive, particularly when you ask your production about the graphics for the programme, and you get that ‘Ooh, we’ve not thought about that’ look!

Well, now’s your chance. Simple static superimpositions can be improved dramatically if you build in some subtle movement either to the size and position of the whole caption or to the spacing of individual letters. By altering the letter spacing, the spread of the caption can be seen to settle in frame, which can look very stylish.

THE BORIS TEXT APPLICATION WITHIN FCP.

True to Type

Font choice is also an important factor that can give your captions great individual style on the screen, but the choice should always reflect the mood of the programme concerned. Hopefully, none of you would put a fancy handwritten font on a hard-hitting programme about prison violence!

Try using coloured lines to underline words; it’s quick to do, and it can also look very smart. However, these lines should not be so thin that they cause flicker on interlaced pictures. You also might consider making the caption write itself on the screen like a typewriter, or animate the letters one by one, or just simply focus the caption up as it settles into position.

Your choice of software will help or hinder here. Some title software is very easy and intuitive to use, and some just too complex for occasional use. The problem is, if the software is too complex, you’ll forget how to drive it between uses and probably go back to a simple title tool and do your best.

Font choice is largely a matter of personal taste, but get to know what you like, especially if you’ve not considered the choice of graphic styles before. Once you know what you like, you’ll be better able to defend or modify your design when the inevitable doubts from the production team arise.

Head to Head—HD versus SD

This point about HD versus SD is slightly out of date today, with most productions shooting in HD (or higher), but it’s still worth mentioning in passing.

When graphics are provided for you, there is the possibility of a clash between the low-res interlaced TV world and the high-res graphics world. Sadly, some graphics originated on high-end, high-res machines don’t always look as good on an SD TV screen. Good designers know this and work around the restriction accordingly. However, sometimes stuff comes to you that looks really dreadful when you put the artwork on your TV monitor, and all eyes look at you as if you were to blame. The remedy is to get involved in the design process as soon as possible so that a prototype version can be inserted into the programme to reveal any limitations the TV system imposes on the artwork at an early stage.

The advent of HDTV has eased the problems, but remember, some people are still looking at the broadcast output on standard definition CRT TVs, so for the moment, you have to satisfy both worlds.

The Front Page—Graphic Safe Areas

Many broadcasters put limitations on where graphics can be placed on the screen. A programme made today for an aspect ratio of 16:9 may still have a 4:3 caption safe area limitation, and that’s way in! See Mickey’s test cards in Chapter 15. Safe area charts are provided by the broadcasters, and these images can be imported into your project and used as guiding templates for caption insertion. Different broadcasters, software, and TV manufacturers have varying ideas of what ‘safe’ actually is, so beware. Generally, your broadcaster is right, especially if they are the ones that have to technically pass your programme.

You might very well have to resize imported graphics in order to comply with these broadcasting limits. The trouble is that as you reduce the graphics in size, you might render small-sized fonts unreadable. We are back to the HD/SD clash again. Always check the rules concerning caption insertion with your friendly individual broadcaster.

20/20 Vision—Graphic Readability

The golden rule when dealing with graphic readability is for you to slowly read out every word of a completed caption twice and aloud. The reason for this is that you know exactly what the caption says, as you presumably have just typed it in, but the viewer does not.

When you add graphics to your timeline, it is good practice to time the arrival and departure of your caption to significant moments in your cut. This can be the vision or the sound. For example, listen to the speech over which there is to be a name super, and hear in your mind the ideal place for a ‘fade-up’ and a ‘fade-down’. This is best done at play speed, marking the ‘up’ and ‘down’ points with your ‘in’ and ‘out’ markers. A good starting point for the duration of any name super is between three and five seconds, depending on how many words are superimposed with the person’s name—a job title, for example.

Whether you should put up a caption immediately after the appearance of the person concerned or wait a couple of seconds is totally up to you, but whatever you decide, be consistent with this timing throughout the programme.

It’s important never to put a caption or graphic over a person’s mouth; it just looks bad and might prevent those viewers who are hard of hearing and good at lip-reading from picking up what is being said. Also, try not to have to place the caption over several shot changes; however, I do understand this may be unavoidable in many circumstances.

Sign Zone—Keep Your Backgrounds Clean

As a quick note while we are talking about graphics, a practice I started, and one which has since been claimed by most broadcasters as their own idea, is to put the clean backgrounds over which you have superimposed captions or title work at the end of your master tape or delivered file. This way, when the inevitable ‘spellink mistoke’ is discovered, you can easily correct it and not have to go searching through dozens of original recordings to find the background concerned.

Sadly, ‘spellink mistokes’ are all too often discovered once the nonlinear material is gone or has been superseded by a graded version, and if that’s the case, you’ll be so grateful you spent those extra seconds putting the clean graded material on the back end of the delivered programme.

Most broadcasters want you to leave a minute of black and silence after a programme ends for emergencies, so put your clean elements on after that.

12.5 Multiple Images

The use of multiple images on the screen can work very well without making the viewer suspect that mutton is being dressed up as lamb. Contestant profiles in game shows which provide information about the individuals can easily be built up from multiple images, or multiple versions of the same image.

I saw a good example of this in the BBC’s recent coverage of the Leeds International Piano Competition. Here the pianists’ profiles consisted of an enlarged version of the mid-shot of the individual, which was graded blue, slowed down and defocused, and over which was superimposed the original mid-shot footage of the smiling contestant along with some stats about his or her career—it looked very good.

A Point of View—Images Turned into Windows

Multiple images can make TV programmes look much more like a Windows-based display on a computer. Different views of the same subject can be made to run simultaneously in separate areas of the screen; a frontal mid-shot can be paired alongside a profile mid-shot or long shot, which can also provide the background. Done with nonlinear software this is very easy, as the different pictures are put on different layers in the timeline and reduced in size using ‘picture in picture’ (or something similar) to form a composite picture.
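
As a rough illustration of what that layering amounts to, here is a sketch of reducing one picture and parking it over another (nearest-neighbour scaling is used only to keep the example self-contained; your edit software resamples far more gracefully, and the function name and figures are invented).

import numpy as np

def picture_in_picture(background, insert, top, left, scale=0.5):
    """Place a reduced copy of `insert` over `background` at (top, left).

    Both images are HxWx3 uint8 NumPy arrays, and the reduced copy is
    assumed to fit entirely within the background frame.
    """
    out = background.copy()
    new_h = int(insert.shape[0] * scale)
    new_w = int(insert.shape[1] * scale)
    rows = (np.arange(new_h) / scale).astype(int)   # nearest-neighbour sampling
    cols = (np.arange(new_w) / scale).astype(int)
    small = insert[rows][:, cols]
    out[top:top + new_h, left:left + new_w] = small
    return out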

Flog It!—Stretching Out Limited Resources

Some low budget documentaries spend the first half of the programme telling you what’s coming up later, and then, when it is later, remind you of what happened in the first half, when they were telling you what was coming up later! The net gain is nearly zero. This situation is more common if the programme spans advert breaks, where part one ends with a trailer for part two (to keep you watching), and part two starts with a massive recap of part one.

An editor on this type of production will be asked to come up with interesting ways to disguise the fact that material may have to be stretched and used more than once.

Here are some examples of what you might consider.

Imagine you only have two photographs of a couple of people or objects. Even with only two stills, it is surprising how you can stretch these photos into an interesting sequence by manipulating the images in various ways.

Try some, or all, of the following on those photographs:

  • Reduce the size of the images, put them against a colour or a graphic, and move them around the screen.
  • Reveal the images by turning them about an axis, X or Y.
  • Use gentle zooms to highlight some individual feature or features.
  • Cut out the principal foreground and separate it from the background and treat them differently with regard to focus, grading, or size. In this way, a zoom into the shot can be made to look three dimensional. I grant you this does take time to achieve properly, but it can look good.
  • Reveal the images by flying them in like rolls of paper, or darts, or pages in a book. There are many possibilities with a little programming effort.
  • Put various sizes of the same image on the screen at once against a colour or a graphic.
  • Crop out details from the image, separate them, and make a collage.
  • Reveal the images by wiping them individually onto the screen.
  • Put flashes of white between different sizes of the same image as they appear and disappear on the screen.
  • Colour or tint the images differently (Warhol-like?) and place these different versions on the screen together.
  • Defocus one image and use it as a coloured background for the untreated version of that same image, to be layered above.
  • Defocus one image and bring the other into focus as the first disappears.
  • Slide the images along in the style of an old slide projector.

Please Sir—In Conclusion

The list above can go on and on, but I hope I’ve given you some idea of the possibilities. For extra inspiration, all I would do is look at the examples that are used on the box today. If there is anything you like, adapt the style for your current or future project.

12.6 Multilayer Video Sticking Plaster

U.F.O.—The Removal of Unwanted Flying Objects

As much of my source material is still recorded with boom microphones, I probably have to deal with these low-flying objects more than most. Happily, techniques to remove these booms can be applied to all such uninvited flying objects (UFOs) that enter frame from time to time.

The techniques for removing these UFOs fall into three main categories:

  • Resize.
  • Wipe.
  • Picture in picture.

Supersize Me—Resizing Images

With resizing, the overall picture is simply enlarged to exclude the offending object. This is very easy to do, but with standard definition (SD) pictures you are limited to a size increase of between 5% and 10%, depending on the source material. Beyond the 10% barrier, the picture becomes too soft and the line structure too visible. You can hide this a bit if you are able to introduce this resize gradually over the course of a shot, and especially if you can piggyback on the natural movement of a camera’s framing. Release from such a resize can and should be done in a similar way, or on a shot change.

HD pictures can cope with a larger amount of zoom but even they, beyond about 20%, start to break up.
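
If you fancy seeing the gradual resize written down, here is a sketch of a centre zoom ramped across a shot; the figures and the function name are invented for the example, and the 8% maximum is simply a value sitting inside the SD limits mentioned above.

import numpy as np

def gradual_zoom(frames, max_zoom=1.08):
    """Ramp a centre zoom from 1.0 to `max_zoom` across a shot.

    `frames` is a list of HxWx3 uint8 arrays. Each frame is cropped
    centrally and scaled back up (nearest neighbour here), so the UFO at
    the edge of frame slides out of shot gradually rather than jumping.
    Keep the final zoom small: roughly 5-10% for SD, up to about 20% for HD.
    """
    out = []
    h, w = frames[0].shape[:2]
    for i, frame in enumerate(frames):
        zoom = 1.0 + (max_zoom - 1.0) * i / max(len(frames) - 1, 1)
        ch, cw = int(h / zoom), int(w / zoom)
        top, left = (h - ch) // 2, (w - cw) // 2
        crop = frame[top:top + ch, left:left + cw]
        rows = np.arange(h) * ch // h          # nearest-neighbour resample
        cols = np.arange(w) * cw // w
        out.append(crop[rows][:, cols])
    return out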

Total Wipeout—Wiping Out UFOs

For the wipe out technique to work, it is essential for the offending object to get out of the way at some point and reveal the background it temporarily obscured, preferably without any adjustment of the camera’s framing. If this is the case, you are able to freeze the picture before (or after) the invasion of the UFO and use this freeze on a second timeline video layer to repair the problem. A shaped wipe is placed around the affected area, and you can simply cut out this area of UFO invasion and replace it with a frozen clean background.

A couple of points: first, the freeze should be a true ‘freeze frame’, in other words, using both fields of an interlaced image; and second, this technique only works well if the camera is stationary while the boom is in shot.
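
Sketched out (and only sketched; the function name, box coordinates, and softness are invented for the example), the repair is nothing more than a soft-edged mask compositing the frozen clean frame over every frame of the shot.

import numpy as np

def freeze_wipe_repair(frames, clean_frame, top, bottom, left, right, soft=8):
    """Patch a static-camera shot with a frozen clean frame inside a soft box.

    `frames` is a list of HxWx3 float arrays (0-1) containing the boom;
    `clean_frame` is a frame of the same framing with the boom out of shot.
    The box (top, bottom, left, right) is drawn around the intrusion, and
    the mask edges are softened to hide the join. Only works if the camera
    does not move; otherwise the patch has to be tracked.
    """
    h, w = clean_frame.shape[:2]
    mask = np.zeros((h, w), dtype=np.float32)
    mask[top:bottom, left:right] = 1.0
    # Soften the wipe edges by repeatedly averaging the mask with shifted copies.
    for _ in range(soft):
        mask = (mask
                + np.roll(mask, 1, axis=0) + np.roll(mask, -1, axis=0)
                + np.roll(mask, 1, axis=1) + np.roll(mask, -1, axis=1)) / 5.0
    key = mask[..., np.newaxis]
    return [key * clean_frame + (1.0 - key) * f for f in frames]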

If the camera moves during the shot with the offending boom, then the frozen element will have to be moved and tracked to match the movement of the camera.

Good camera operators will know not to adjust their framing to get rid of the dipping boom and rather let it get out of the way on its own, because they know if they move the camera it will ruin the potential repair.

You’ve Been Framed—The ‘Picture in Picture’ Technique

If the movement of the camera is too great to replace the UFO with a freeze wipe, it might be possible to use another bit of the picture to cover up the offending object, and this is what I refer to as the picture in picture technique. For example, if the boom comes over some curtains, it might be possible to duplicate a lower section of the curtains and move it up, to act as sticking plaster, and remove the boom. Here again, a second layer of vision is created on the timeline. This video layer carries the sticking plaster, now moved to suit the repair. Draw around the UFO and turn this into a soft-edged wipe, which will restrict the correction layer to be visible only where you want it.

The advantage of this technique is that both pictures, sticking plaster and original, move in sync with each other geographically. The disadvantage is that correction can only cope with a small range of camera repositions. At this stage you might decide to try resizing as described previously, or look at another take.

Coming to our rescue here is cleverer drawing and compositing software, with even cleverer operators, who can work wonders when the problem lies beyond the capabilities of either edit suite software or us!

12.7 Colour Correction

I have called this section colour correction rather than grading as I feel there is a huge difference between the two operations. As an editor, I would say that, unless you specialise, you will be colour correcting pictures rather than grading them. In starting to talk about colour correction, it is worth asking yourself if you have perfect colour vision. If you haven’t been checked, do so. Those of you who suffer from severe colour blindness would be advised to leave the more complex aspects of colour correction to others. Level adjustment is okay, like making a shot appear darker to match its neighbours, but that’s it.

The Colour Receiver Installation Film—Some Colour Fundamentals

As you probably know, colour TV uses only three colours: red, green, and blue (RGB). The eye is effectively fooled to see a range of colours by mixing together different proportions of these three primary colours. TV white is made up of red, green, and blue light in the proportion: 0.3 red + 0.59 green + 0.11 blue. So green does most of the work when a monitor displays a white raster. Thus, with old CRT monitors and TVs, it was always the green gun that wore out first, because it was taking more beam current than the other two over its lifetime.
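
Written out as simple arithmetic (a sketch, nothing more), that weighting looks like this, and you can see at a glance that the green gun does the lion’s share of the work.

# Luminance (brightness) of a TV picture from its red, green, and blue parts.
def luminance(r, g, b):
    return 0.3 * r + 0.59 * g + 0.11 * b

# Peak white: all three guns fully on.
print(luminance(1.0, 1.0, 1.0))   # 1.0 in total...
print(luminance(0.0, 1.0, 0.0))   # ...of which green alone supplies 0.59.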

37 AND 4, RIGHT?

Being Human—You Can’t Believe Your Eyes

Another problem with human beings is that our eyes, or more accurately the signal processing in our brains, don’t see absolute colours, but rather see colour in relation to the colours that surround it. If you don’t believe me, just try duplicating and moving the yellow colour bar from a standard set of colour bars to the other end (near the blue bar) and you’ll see two completely different yellows on display, even though they are electrically exactly the same. One yellow will appear more of a lemon-yellow compared with its duplicate. Go on, try it—colour bars on two video layers, ‘picture in picture’ applied to the top layer. Crop the yellow bar and move it along to where I said, and you’ll be amazed; it will actually change colour as you move it across.

THIS MAY NOT WORK AS WELL WITH THE PRINTED COLOURS IN THIS BOOK, BUT YOU SHOULD SEE THAT THE TWO YELLOW BARS HAVE DIFFERENT TINTS.

The Black and White Minstrel Show—Don’t Colour My World

Because our eyes are capable of playing tricks on us, professional grading rooms are painted to be completely colour neutral, and in addition, they have a source of true ‘grey’ white light to act as a reference. It’s the same problem we had monitoring sound properly, in that the room and monitor must be as perfect as possible. The ‘grey’ reference acts as a palate cleanser to reset the grader’s eyes in case the programme starts to drift off in a particular spectral direction.

You’ve all experienced the effect that, after staring at a single colour for a while, when that colour is removed from your view everything looks strangely coloured until your eyes have had a chance to settle down again. I am not suggesting you get a paintbrush out to your edit suite, but that orange wallpaper has to go before you start to seriously grade your pictures.

Chuckle Vision—Lies, Damned Lies, and What Monitors Display

Monitors are capable of telling lies. They effectively have colour correction built into them, and this must be lined up to reproduce accurate monochrome pictures before you can use them to alter the colour content of your pictures.

It is essential that your picture monitor be checked for what engineers call grey-scale tracking.

A MONITOR MENU WITH RGB ADJUSTMENTS.

Here, the monitor is balanced (RGB gain and RGB backgrounds) to match a grey-scale light box. Once set, most modern monitors will stay grey. So gone are the days of waiting an hour or so (I’m not kidding) for valved monitors to warm up and stop drifting.

A GREY-SCALE CAMERA LINE-UP CHART AND, ON THE RIGHT, PICTURE LINE-UP GENERATING EQUIPMENT (PLUGE).

Along with your picture monitor, the computer display should also be checked because, if white here is being displayed slightly pink, for example, then this will ruin the neutral surroundings we are trying to create.

Do Not Adjust Your Set—Hands Off the Menus

I hope I’ve made you realise that it might be better to leave all this to those who know.

All of the above should serve as a huge warning, and that’s to only attempt any colour correction for real when you know what you’re doing.

Generally, programmes are professionally graded before transmission, but this is by no means universal. Some colour correction work will have to be done in the edit suite, but hopefully this will be limited to getting pictures to match a little better for a viewing, rather than starting from scratch and grading the whole programme prior to transmission.

It Ain’t Half Hot Mum—There Are Limits You Know!

Colour correction controls are, for the most part, intuitive. RGB adjustments are usually represented graphically, and the response of each colour is individually controlled from dark to light. The best thing to do is to try it all out.

Rather boringly, the result you’ll produce, especially if you are livening things up, will usually be totally untransmittable. We’re back to level limitations, I’m afraid, similar to those we dealt with in connection with sound. Thankfully, most software can be set to warn or limit such transgressions so that you don’t produce illegal colours. Yes, it’s that bad; and you’ll find that there are plenty of police around to check that you stay on the level. Technical reviews and quality assessment reviews (QARs) are ruthless and will gladly fail (or so it seems) any level transgression.

I say all this so that you’re armed and ready; it can get pretty rough out there.

Sorry—I Have a Confession

I have a serious confession to make, and that is the colour blindness numbers a few pages back of course should have been 62 and 3.

Edge of Darkness—Colour Correction: Terms of Engagement

For the most part, an editor will only be required to smooth over and match one picture to another, but here’s a list of some of the parameters available in colour correction software, with a rough sketch of the underlying arithmetic after the list.

  • Saturation—The intensity of the colour content is increased or reduced. Turning this right down will produce a black and white image.
  • Lift/brightness/set-up—With this control you can black crush darker elements to render them invisible, or lift them out of the gloom.
  • Gain/level—Same as contrast on a TV. Blacks are unchanged (mostly), but lighter elements are amplified or attenuated. The limit of such amplification is peak white (0.7 volts above black).
  • Gamma—This alters the otherwise linear response from black to white, and is more commonly used to enhance or reduce subtleties near black. Gamma can be altered for each colour separately, so very quickly your pictures can start to look somewhat impressionistic, and worthy of inclusion in a Tate Modern art exhibition.
  • Hue—A hangover from the NTSC colour system, which could introduce hue errors (reds drifting toward magenta or green); this control corrected such errors. The control can change the feel of the colour content without affecting the rest of the image. It can be selective to the level of the signal it affects, that is the shadows, middles, or highlights. Try it.
  • Clip high—Means what it says, it puts a limit on the whitest white or the reddest red and so on.
  • Clip low—I’ll leave that to your imagination.
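
If it helps to see those controls as arithmetic on pixel values, here is a rough sketch (Python with NumPy, the function name and default values invented for the example) with lift, gain, gamma, and saturation applied in that order; real correctors differ in the detail, work per channel, and can be restricted to shadows, middles, or highlights.

import numpy as np

def colour_correct(image, lift=0.0, gain=1.0, gamma=1.0, saturation=1.0):
    """Rough pixel arithmetic behind the usual controls (image in 0-1 floats).

    Lift raises or lowers the blacks, gain scales towards peak white,
    gamma bends the response between the two, and saturation pulls the
    colours towards or away from a black and white version of the shot.
    """
    out = np.clip(image * gain + lift, 0.0, 1.0)
    out = out ** (1.0 / gamma)
    luma = (0.3 * out[..., 0] + 0.59 * out[..., 1] + 0.11 * out[..., 2])[..., np.newaxis]
    out = luma + (out - luma) * saturation
    return np.clip(out, 0.0, 1.0)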

South Pacific (1958)—Fantasy Grading

We need an example of using creative grading from the movies, and the Rodgers and Hammerstein musical South Pacific (1958) provides a good one. It was directed by Joshua Logan and starred Rossano Brazzi and Mitzi Gaynor, and it was edited by Robert L. Simpson.

During Bloody Mary’s song ‘Bali Hai’, the pictures are graded in a more and more extreme way in order to reinforce the effect her song, about the mysterious powers of the island of Bali Hai, is having on Lt. Cable (John Kerr).

The images start off by having their saturation increased, with, at the same time, an overall decrease in their detail, but this may be due to the extra optical filmic processing the images had to go through to produce the colour effects in the first place. Intense golden colours give way to more extreme reds and blues, sometimes changing in vision and varying in hue across the screen. In addition, a misty, out-of-focus vignette is used to soften the edges of the frame. I must say the effect works best here to heighten the mystery of Bali Hai, but works less well with other songs in the movie. Indeed, when we visit Bali Hai, all the pictures are treated for saturation and hue, which becomes somewhat tiresome over the length of the scene. What have I said about overusing an effect just because you can?

Pennies from Heaven (BBC TV) (1978) and The Singing Detective (BBC TV) (1986)—Shall We Dance?

From the TV world, Pennies from Heaven (1978) and The Singing Detective (1986), both written by Dennis Potter, offer excellent examples of pictures being graded and lit differently, again when a song was called for.

There’s a scene in episode two of Pennies from Heaven where Arthur (Bob Hoskins) is pleading for a loan from his stuffy bank manager (Peter Cellier), and it must be said negotiations are not exactly going well, when suddenly the song ‘Without That Certain Thing’ by Roy Fox and His Band is heard, and the bank manager’s office is transformed into a theatre set, complete with footlights.

Lighting and the balance of the cameras are altered to increase the effect of the change. Both Bob Hoskins and Peter Cellier do well in this impromptu dance routine. I know this was before the days of post-production grading as such, but it serves as an example of the technique of picture manipulation to suit or generate a mood.

You could also search for the scene from episode one where Joan (Gemma Craven) is at home entertaining girlfriends Irene (Jenny Logan) and Betty (Tessa Dunne), and they suddenly start performing ‘You Rascal You’ by the Blue Lyres.

As an example from The Singing Detective, search for ‘Dry Bones’ for another marvellous fourth wall–breaking song and dance routine.

You forget how good these series were!

Play Misty for Me—Making Your Pictures Just Look Better

Let’s assume you’re nearly finished editing, and you and your director now have important clients to impress, and your pictures could do with some smartening up. Generally, pictures from a location will be shot to sit in the middle of the available range of levels, so you could try applying a global ‘lift down, gain up, saturation up’ colour correction to your timeline and tweak the shots that now look worse.

If you were really keen, a very slight edge vignette could help give depth to some exteriors, especially those with flat featureless skies. It is amazing how this can improve the look of some shots. It can also be used to bring out your foreground characters from their backgrounds, but be careful: you are nearly grading the show now.

12.8 Effects—Those You Should Carry with You

From time to time, I have mentioned settings and effects that you should have up your sleeve if things turn nasty. Here again the same is true. For a start, keep a copy of the settings of your favourite effects on your USB stick in the form of a folder or a bin. This can be easily copied from project to project and will give you a head start with any new effects.

In addition to these software settings, there isn’t much I’d carry with me picture wise, except for things like that old favourite, the scratched film loop. I can’t remember how many times I have had to degrade modern video footage to make it look as though it had been on a cutting room floor for weeks. When you find a good bit of scratched leader or a film countdown, make a QuickTime movie of it and keep it.

Incidentally, I did this for the square-bashing titles for Blackadder Goes Forth. After the series was transmitted, the production got a letter from a member of the public saying that he was pleased to see that the BBC was, at last, running these archive films at the correct speed! I don’t know what he thought Rowan Atkinson was doing there in that footage. Still, one happy customer at least, we didn’t have the heart to tell him.

THE AVID MEDIA COMPOSER COLOUR CORRECTOR WINDOW.

Other regularly used video effects include the ‘record’ viewfinder graticule, with line corners and a ‘REC’ graphic superimposed, or a binocular effect over a long shot. Both of these, once generated, are bound to be called on again and again, so you might as well pop them onto your USB stick.

The Sky at Night—Adding Weather to Shots

Things like a rain loop, snow, clouds, or a shot of the moon can be useful to store as JPEGS or QuickTimes, and you’ll get loads of Brownie points when you add the moon to that already fabulous night exterior, which will justify some of the light used in the filming.

You’ll soon build up a library of such material most appropriate for your kind of work.

12.9 Key Points—Video Manipulation

  • Visual effects (commonly shortened to Vid FX or VFX) are special effects that can be created in front of the camera, such as explosions, crashes, fire, water, rain, and snow.
  • Video effects (also commonly shortened to Vid FX or VFX) include all forms of video manipulation performed on the video signal outside the context of a live-action shoot.
  • Changing the speed of a sequence is a lazy way of editing, and it is usually forced on you by the absence of shot cutaways.
  • Comedy performers and directors have used the technique of undercranking the camera to produce ‘funnier’ speeded-up pictures.
  • Slowing pictures down can produce attractive and creative results.
  • A key is a video signal that acts as a controlling switch between other video sources.
  • The source of the key can simply be the level of the foreground signal itself; this is known as luminance keying.
  • If a colour contained in that foreground picture is used as the key, this is known as chroma keying.
  • If the controlling key is a separate source of video, then this is referred to as an external key or matte.
  • Some graphics originated on high-end, high-res machines don’t always look as good on an SD TV screen.
  • Different broadcasters, software, and TV manufacturers have varying ideas of what ‘caption safe’ actually is, so beware.
  • You might have to resize imported graphics in order to comply with broadcasting limits.
  • The golden rule when dealing with captions is for you to slowly read out every word of a completed caption twice and aloud.
  • Never put a caption or graphic over a person’s mouth; it just looks bad, and it might prevent those viewers who are hard of hearing and good at lip-reading from picking up what is being said.
  • Put clean backgrounds over which you have superimposed captions or title work at the end of your master tape or delivered file.
  • Contestant profiles in game shows and the like, which provide information about the individuals, can be built up from multiple images or multiple versions of the same image.
  • Zooming in on SD pictures is limited to between 5% and 10%, depending on the source material. With HD pictures you can’t go much beyond 20%.
  • Before you start seriously grading your pictures, the room and your monitors have to be perfect and telling the truth.
  • Quality assessment reviews (QARs) are ruthless and will gladly fail any video level transgression.
  • Copy the settings of your most used video effects onto your USB stick. These can include colour corrections, vignettes, fonts, caption styles, or layouts for multiple images.
  • Collect useful video elements like scratched film leaders, rain, the moon, or a ‘REC’ viewfinder graphic.