Chapter 2
Motion Tracking Elusive Patterns

Motion tracking, when it works well, allows you to impart the motion contained within a live-action piece of footage to a new element, such as a 3D render or bitmap artwork. However, this requires some pattern within the footage to be trackable (that is, distinct and unoccluded for the entire duration). If the camera or subject moves aggressively, such a pattern can be difficult to find. Nevertheless, there are tricks you can use to tease out something usable.

This chapter includes the following critical information:

Motion tracking terminology

Common motion tracking problems and solutions

“Hopping” from one pattern to another

Pre-processing the footage and tracking alternate channels

Applying 3D camera trackers

Motion Tracking Overview

Motion tracking is the process by which the inherent motion within footage is detected and then reapplied to another element. Matchmoving is another common term for this process. Tracking is shorthand for motion tracking; as such, you can track footage. Trackers are effects or tools that undertake tracking.

A motion tracking pattern is a unique arrangement of pixels with specific colors and intensities that moves through a frame over time. A pattern may be a complete object, like a book, or a portion of an object, like the corner of a building. A pattern may move independently of the real-world camera. For example, the pattern may be a thrown baseball, a speeding car, or the hand of a gesticulating actor. The pattern may also move within the frame because the real-world camera was moving. For example, an otherwise static statue moves within the frame because the camera was tilting and panning. Tilting refers to up/down camera rotation, while panning refers to left/right camera rotation.

Motion tracking has several different variants. These include transform tracking, corner-pin tracking, stabilization, 3D camera tracking, and planar tracking. Transform tracking detects left/right (X) and up/down (Y) motion of a pattern within the frame. Corner-pin tracking detects the motion of four corners of a rectangular pattern and is used to detect perspective shifts in rectangular objects (phone screens, billboards, windows, doorways, and so on). Stabilization uses transform tracking data to remove the motion contained within the tracked footage; this is useful for removing camera shake and jitter. 3D camera tracking tracks myriad points within a frame, arranges those points within 3D space, and generates an animated 3D camera that attempts to replicate the original, real-world camera. Planar tracking tracks patterns as if they are rectangular planes; planar trackers are able to track patterns that breach the edge of frame.
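To make the corner-pin variant more concrete, the following sketch warps a piece of artwork onto four tracked corner positions. This is only a conceptual illustration in Python with OpenCV, not code from After Effects or Fusion, and the file names and corner coordinates are hypothetical.

# Conceptual corner-pin sketch: warp artwork onto four tracked corners.
# Assumes OpenCV and NumPy are installed; names and coordinates are placeholders.
import cv2
import numpy as np

frame = cv2.imread("plate_0001.png")                  # background plate
artwork = cv2.imread("billboard_art.png")             # element to pin

h, w = artwork.shape[:2]
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])    # artwork corners
dst = np.float32([[412, 180], [640, 200], [630, 360], [405, 330]])  # tracked corners

M = cv2.getPerspectiveTransform(src, dst)             # 3x3 perspective matrix
warped = cv2.warpPerspective(artwork, M, (frame.shape[1], frame.shape[0]))

mask = warped.sum(axis=2) > 0                         # crude matte: any non-black pixel
frame[mask] = warped[mask]                            # composite the pinned artwork
cv2.imwrite("corner_pin_preview.png", frame)

Running the same warp with the corner positions tracked on each frame reproduces the perspective shift of the rectangular object over time.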

Motion paths are generated by trackers when they track a pattern. A motion path is a series of keyframes that store the position of an element (such as a pattern) on each frame of a timeline. Transform trackers generally employ some type of feature or pattern region and a search region. A feature/pattern region defines the pattern of pixels that is to be tracked. A search region is a larger region that the program searches when it’s unsure of the pattern’s location. These regions are usually drawn as rectangles or boxes. When transform tracking, you can also detect rotational and scale changes to the pattern by using two feature regions. The pattern may change in size if the camera moves closer or farther away or undergoes a focal length change through a zoom. Hand-tracking is a loose term for manually updating a tracker’s motion path to improve its quality.
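The feature region/search region relationship can be sketched with simple template matching. The following Python/OpenCV snippet is only a rough stand-in for how a transform tracker might locate a pattern on the next frame; the actual algorithms used by After Effects and Fusion are not public, and the file names, region sizes, and coordinates below are placeholders.

# Conceptual transform-tracking sketch: find the feature region from the
# previous frame inside a larger search region on the current frame.
import cv2

prev = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_0002.png", cv2.IMREAD_GRAYSCALE)

fx, fy, fw, fh = 500, 300, 40, 30            # feature region on the previous frame
pattern = prev[fy:fy + fh, fx:fx + fw]

pad = 60                                     # search region extends past the feature region
search = curr[fy - pad:fy + fh + pad, fx - pad:fx + fw + pad]

result = cv2.matchTemplate(search, pattern, cv2.TM_CCOEFF_NORMED)
_, confidence, _, best = cv2.minMaxLoc(result)   # best match and its score

new_x = fx - pad + best[0]                   # convert back to frame coordinates
new_y = fy - pad + best[1]
print(new_x, new_y, confidence)              # one motion path keyframe per frame

In this reading, a low confidence score corresponds to the tracker being unsure of the pattern’s location, which is exactly when a larger search region becomes useful.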

Motion Tracking Problems and Solutions

Table 2.1 lists common problems encountered when motion tracking footage and potential solutions for creating successful tracks.

Table 2.1

Problem

Solution

No patterns

At first glance, footage may be devoid of trackable patterns (for example, a moving actor against a green screen).

Pre-process the footage to tease out a pattern. For example, aggressively increasing the contrast may reveal subtle wrinkles, cracks, or stains in a background. (A brief sketch of this approach follows this table.)

Pattern shifts

The pattern you want to track undergoes significant deformation or suffers from changes in lighting.

Track a nearby pattern that doesn’t suffer from the same problems. When it comes time to apply the tracking data, it may be necessary to offset the data.

Track a different channel. Many trackers allow you to track RGB or specialized channels such as luminance or saturation.

If all other approaches are exhausted, hand-track the pattern.

Pattern disappears

The pattern crosses the edge of frame or is temporarily occluded by other objects.

Use a planar tracker, which is more tolerant to a portion of the pattern crossing the frame edge or becoming occluded.

Hand-track the portion of the timeline where the pattern is lost. Many trackers allow you to hand-position individual motion path keyframes.

“Hop” from one valid pattern to the next. Many trackers give you the option to meld together the motion patterns of multiple tracked patterns.

Indistinct pattern

Heavy motion blur, grain, or noise confuses the tracker.

Pre-process the footage before tracking. For example, sharpen the footage to reduce motion blur. Alternatively, blur the footage to reduce the noise. Some trackers have built-in functions for blurring and sharpening.

Track a different channel. Many trackers allow you to track RGB or specialized channels such as luminance or saturation.

XYZ motion

The real-world camera used to shoot the footage moves aggressively, with significant motion on all three axes.

Switch to a 3D camera tracker. 3D camera trackers are not dependent on one, two, or four pattern windows but automatically track hundreds of points. As such, a small number of points that are occluded or go off frame do not affect the overall tracking. Whereas transform trackers do not understand changes in perspective, 3D camera trackers examine the parallax between points to synthesize the 3D environment and 3D camera.
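Several of the solutions in Table 2.1 amount to pre-processing the footage or choosing a more useful channel before tracking. As a rough illustration only (assuming Python with OpenCV and NumPy; the file name and adjustment values are placeholders), the snippet below isolates the blue channel, softens noise, and stretches the contrast. The adjusted frames would be used for tracking only; the original plate is kept for the final composite.

# Pre-processing sketch: extract channels, reduce noise, and boost contrast
# so that a tracker has a cleaner pattern to follow.
import cv2

frame = cv2.imread("plate_0001.png")

blue = frame[:, :, 0]                                     # OpenCV stores channels as BGR
luma = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)            # luminance-style channel

denoised = cv2.GaussianBlur(blue, (5, 5), 0)              # soften grain and noise
stretched = cv2.normalize(denoised, None, 0, 255, cv2.NORM_MINMAX)  # maximize contrast

cv2.imwrite("track_blue_0001.png", stretched)             # tracking plate (blue channel)
cv2.imwrite("track_luma_0001.png", luma)                  # alternative: track luminance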

Tracking Challenge A

Figure 2.1 shows three frames from the 96-frame image sequence Highway.##.png. Because motion tracking can have many different goals, it’s important to establish what type of tracking should be applied to this footage. The goal with this challenge is to motion track a flag flying above a mobile home along the beach and apply the tracking data to a new, significantly larger, flapping flag. Because this only requires X and Y positional information, we can apply transform tracking. Table 2.2 describes the strengths and weaknesses of the footage in the context of this motion tracking goal.

fig2_1

Figure 2.1 Top to bottom: Frames 1, 48, and 96 of a shot featuring the view out of a vehicle window. Image sequence adapted from “Pacific Coast Highway 1 - California, Set2014” by Ana Paula Hirama and is licensed under Creative Commons Attribution-ShareAlike 2.0 Generic (CC BY-SA 2.0).

Table 2.2

Strengths

Weaknesses

Aesthetically, the shot works well as a point-of-view of a beach through a vehicle window.

No object in the foreground or middle-ground stays within the frame for the entire duration of the footage.

There is little video noise and the footage has a good resolution of 1920×1080.

Bushes occlude some of the identifiable objects, such as a hoisted flag, the mobile homes, or intricacies of the beach edge.

There is relatively little camera shake or jitter, and there are no significant changes in camera rotation.

Due to the speed of the vehicle, there is heavy motion blur in the foreground.

The exposure is reasonable with nothing overexposed or underexposed.

The sky is overcast and the atmosphere is hazy, reducing the amount of contrast. Contrast is useful for identifying trackable patterns.

Layer-Based Solution for Challenge A

You can motion track the highway footage by “hopping” between patterns. Additional hand-tracking is necessary where bushes completely occlude the beach. We can also apply luma keying techniques discussed in Chapter 1 to avoid rotoscoping foreground objects.

Transform Tracking

The first step to motion tracking in After Effects is to identify a pattern worthy of tracking and to place a feature region over that pattern. You can follow these steps:

1. Import the Project-Files\Plates\Highway\Highway.##.png image sequence. Interpret the image sequence frame rate as 30 fps. Create a composition that matches the resolution (1920×1080), duration (96 frames), and frame rate (30 fps) of the sequence. Set the composition’s start frame to 0001.

2. Play back the timeline. Note occluding foreground objects, which include bushes and sign posts. Note the small American flag that appears over a mobile home along the beach. Determine when the flag is not visible. The flag enters frame left on frame 4. The flag is partially covered on frame 19 and frame 39. The flag is completely covered from frames 87 to 92.

3. Go to frame 86. With the layer selected, choose Animation > Track Motion. A Tracker effect is added to the layer and the Tracker panel opens at the bottom-right of the program window (see Figure 2.7 later in this chapter). The footage opens automatically in the Layer view. A feature/search region set, named Track Point 1, appears in the view as a pair of nested boxes.

4. LMB-drag Track Point 1 and place it over the bottom-left corner of the flag’s blue field. When repositioning the track point, LMB-drag an empty portion between the boxes. The inner box is the feature region, which establishes the pixel pattern to be tracked. The outer box is the search region, which is the area the tracker searches when it is unable to determine the exact location of the pattern. Resize both boxes so that the feature region completely surrounds the flag (Figure 2.2). To resize a box, LMB-drag its edge lines. Larger track point boxes slow the tracking calculations. However, due to the low-contrast, blue-cast footage, larger boxes will help the tracker maintain higher accuracy.

fig2_2

Figure 2.2 Track Point 1 is resized and placed over the corner of the flag’s blue field on frame 86.

5. Click the Analyze Backward button to analyze the timeline. The Tracker calculates the movement of the pattern and lays down a motion path in the viewer with a keyframe for each frame of the timeline. You can stop the analyzation at any time by pressing the Esc key. You can move the time slider back and forth to determine the accuracy of the motion path. If the tracking is progressing too rapidly, you can analyze one frame at a time by using the Track 1 Frame Backward or Track 1 Frame Forward button. If part of the motion path appears inaccurate, go to the first frame where the inaccuracy appears, reposition the track point by LMB-dragging it in the Layer view, and reanalyze with any of the Analyze buttons (analyze in the previous direction). For example, the track point may “slip” and move to another portion of the flag. There is no penalty for making minor corrections to the track point position so long as the tracker provides the majority of keyframe positions. Be careful not to alter the position of the track point’s attach point, which is the small + at the center of the pattern box. The motion tracking will be fairly straightforward from frames 86 to 41. The tracker will most likely fail, however, from frames 41 to 37. In this section, the top of a bush partially occludes the flag. Manually position the track point for frames 41, 40, 39, 38, 37, and 36. As an alternative solution, you can move the track point to a different pattern; nevertheless, we will save that technique for another portion of the timeline.

6. When you’ve created a successful motion path for frames 86 to 36 (Figure 2.3), proceed to analyze backwards from frame 36 to the beginning of the timeline. Once again, the flag is partially occluded from frames 21 to 17. Manually set the position of the track point for frames 21, 20, 19, 18, 17, and 16. Go to frame 16 and analyze backward to frame 7. If you try to analyze between frame 7 and 1, the tracker will become inaccurate as the track point center is not permitted to cross the edge of frame.

fig2_3

Figure 2.3 The motion path generated from frames 86 to 36. The keyframes are represented by small, hollow boxes. Track Point 1 is positioned at frame 36.

7. Go to frame 6. Alt/Opt-LMB-drag Track Point 1 to a new pattern. Pressing Alt/Opt while LMB-dragging allows the track point to “hop” while an offset is applied to the motion path. In this situation, when the motion tracking data is applied, the motion path will extend continuously from the track point’s original position. (The offset arithmetic is sketched after Figure 2.5.) When you Alt/Opt-LMB-drag, a zoomed-in view of the pattern box is displayed; if this is not apparent, press Ctrl/Cmd+Z to undo and try again. (The zoomed-in view will fail to appear if you are repositioning an existing keyframe.) As for a new pattern, choose a high-contrast feature of a mobile home at the right side of the frame (Figure 2.4). With the new pattern chosen, track backward to frame 1.

fig2_4

Figure 2.4 Track Point 1 is “hopped” from the flag to the top of a mobile home by Alt/Opt+LMB-dragging. Note that the track point’s attach point stays at the track point’s original position (with this figure, the attach point + rests on top of the flag at frame left).

8. With the motion path from frames 1 to 86 complete, go to frame 87. The flag is partially occluded. Manually position the track point for this frame. Go to frame 88. The flag disappears. Go to frame 93. The flag is once again visible. Manually position the track point. Analyze forward to frame 95. On frame 96, the flag has left the frame. We will add a final motion path keyframe after the motion tracking data is applied to a different layer. A gap now exists in the motion path from frames 88 to 92. The motion path is not broken but receives interpolated, in-between values generated by the corresponding animation curves. This is not detrimental to this project, however, as the flag disappears completely behind the bush and its exact position is not visible. When the tracker analyzes the footage, it lays down a keyframe for the Feature Center and Attach Point properties, as carried by the tracker in the layer’s Motion Trackers section (Figure 2.5). Feature Center stores the XY screen location of the track point. Attach Point is the pivot point of the track point and appears in the view as the +. By default, Feature Center and Attach Point values are identical. You can move the Attach Point by LMB-dragging the + in the Layer view, although this is not as useful an option for this project.

fig2_5

Figure 2.5 Keyframes laid down by the tracker. Note the gap from frames 88 to 92.
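The offset applied during a pattern “hop” (step 7) is simple bookkeeping, sketched below in Python. This is not After Effects’ internal code, and every coordinate is hypothetical; the point is only that the difference between the old and new pattern positions is carried forward so the motion path continues without a jump.

# Offset bookkeeping behind pattern "hopping" (all coordinates hypothetical).
old_pattern = (112.0, 418.0)      # flag position on the last good frame
new_pattern = (930.0, 442.0)      # mobile-home feature the track point hops to
offset = (old_pattern[0] - new_pattern[0], old_pattern[1] - new_pattern[1])

tracked = (918.0, 440.0)          # raw position of the new pattern on a later frame
applied = (tracked[0] + offset[0], tracked[1] + offset[1])
print(applied)                    # keyframe written to the original motion path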

Continue to fine-tune the motion path until the track point follows the selected pattern accurately. You can use any combination of the following techniques:

Track forward or backward. Feel free to write over old portions of the motion path. For more control, analyze one frame at a time.

Experiment with different size feature and search region boxes. Adjusting the boxes affects future analyzation but does not change any existing portion of the motion path.

Manually adjust a keyframe when its position is slightly off. Try to reanalyze surrounding frames after you have repositioned the keyframe. (Excessive manual positioning may lead to high-frequency jitter when the motion tracking data is applied.)

Try different tracker options. To access these, click the Options button in the Tracker panel (Figure 2.6). By default, the tracker uses luminance (brightness) information when identifying the position of the pattern. However, you can switch to the RGB or Saturation channel by selecting the matching radio buttons. You can also alter the basic functionality of the tracker by changing the Adapt Feature menu to Continue Tracking, Stop Tracking, or Extrapolate Motion. Continue Tracking allows the analyzation to continue even when the tracker’s confidence falls below the If Confidence Below property value. (Confidence is the mathematical certainty with which the correct pattern has been identified.) The Stop Tracking option stops the analyzation when the confidence value falls below the If Confidence Below property value. The Extrapolate Motion option estimates the pattern position based on prior keyframes; low-confidence keyframes are automatically deleted. The default menu option, Adapt Feature, updates the pattern values based on a prior frame if the confidence falls too low. Although the default tracker options work in many situations, it can pay to try different options with each new piece of footage.

fig2_6

Figure 2.6 Motion Track Options window with default settings.

Applying the Tracking Data

After you’ve created an accurate motion path, you can apply the motion tracking data to a different layer. You can follow these steps:

1. Return to the Composition view. Import the Project-Files\Plates\Flag\Flag.##.png image sequence. Interpret the image sequence frame rate as 30 fps. LMB-drag the new sequence into the composition. Move it to the top of the layer outline. The sequence features a long flag with three tails that flaps in the wind. In addition, a thin rope extends from the flag nose. Change the flag layer’s scale to 40% to better match the size of the mobile homes.

2. Return to the Layer view. Note that the motion path may no longer be visible. Go to the Tracker panel and change the Motion Source menu to Highway.[01-96].png (Figure 2.7). Change the Current Track to Tracker 1. The motion path reappears. Change the Track Type menu to Transform. Click the Edit Target button. In the Motion Target window, change the Layer menu to 1. Flag[001-096].png. (By default, this menu is set to the layer directly above the tracked layer.) Press OK to close the window. Click the Apply button in the Tracker panel. The motion tracking data is applied to the flag layer and the view switches back to the composition.

fig2_7

Figure 2.7 The Tracker panel. Position is automatically selected. Optionally, you can select Rotation and Scale to track their namesake transformations. Selecting either one of these properties creates a second track point, Track Point 2.

3. When tracking data is applied, the transforms of the receiving layer are automatically keyframed. With this project, Position is animated. (If the Rotation or Scale checkboxes are selected in the Tracker panel at the beginning of the tracking process, the matching properties receive their own set of keyframes.) However, the layer may not appear in the correct location. By default the anchor point of the layer is moved to the motion path position. You can offset the layer by altering the Anchor Point property values. For example, set the flag layer’s Anchor Point to 722, 604. This slides the flag to the left and slightly up so that it hovers above the American flag we originally tracked (Figure 2.8). Altering the Anchor Point values does not invalidate the motion tracking. That said, the layer will always appear as if it’s located in the same 2D plane as the patterns that were tracked. In other words, altering the Anchor Point values will not make the flag appear as if it is floating over the highway or hovering over the distant ocean. Play back the timeline. The tracking is complete. Optionally, you can manually set the position of the layer for frame 96, where no motion path information exists; however, if the flag has exited the frame by frame 96, a keyframe is not necessary.

fig2_8

Figure 2.8 Top to bottom: Frames 15, 48, and 81 showing motion-tracked flag. Motion blur is activated for the flag layer and basic color grading has been applied. Flag Image sequence adapted from “Kite flying at Haulover Beach park” by Osseous and is licensed under Creative Commons Attribution 2.0 Generic (CC BY 2.0).
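As step 3 explains, altering the Anchor Point slides the layer relative to its tracked Position keyframes without breaking the track. A minimal sketch of that arithmetic follows, ignoring scale and rotation; the values are hypothetical and this is not the exact math After Effects performs internally.

# Why changing Anchor Point offsets a layer without invalidating the track:
# the layer is placed so that its anchor point lands on the keyframed Position.
position = (864.0, 512.0)         # Position keyframed by the tracker on some frame
anchor_default = (960.0, 540.0)   # default anchor (center of a 1920x1080 layer)
anchor_offset = (722.0, 604.0)    # offset anchor used in the tutorial

def layer_corner(position, anchor):
    # top-left corner of the layer in composition space (no scale or rotation)
    return (position[0] - anchor[0], position[1] - anchor[1])

print(layer_corner(position, anchor_default))
print(layer_corner(position, anchor_offset))   # same track, shifted placement

Because the offset is constant on every frame, the layer still inherits the tracked motion exactly.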

There are a few final steps you can take to integrate the flag with the highway footage:

Activate motion blur for the flag layer by selecting the layer’s Motion Blur switch and the composition’s Enables Motion Blur For All Layers button. The flag receives a motion blur streak appropriate for the speed at which the camera is traveling.

Reduce the flag layer’s Opacity to 80%. This creates the illusion that the flag is in the same blue haze as all the objects along the ocean.

Add a Hue/Saturation effect to the flag layer and desaturate the flag. This better matches the cloudy, somewhat dim lighting. (An alternative route would require color grading the highway footage to better match the un-graded flag.)

Separating Foreground Objects with a Luma Mask

At this stage of the project, the flag appears on top of the foreground objects. You can rotoscope holes into the flag so it fits behind the bushes and sign posts. You can move a new copy of the highway footage to the top of the layer outline and rotoscope it so that pieces of the bushes and posts cover the flag at appropriate frames. Either rotoscoping solution can be time-consuming. An alternative solution requires the creation of a luma mask and the use of the Track Matte feature, which we originally discussed in Chapter 1. To create a luma mask, follow these steps:

1. LMB-drag a new copy of the highway sequence and drop it on top of the layer outline. Add a Hue/Saturation effect and a Curves effect to the new layer. Reduce the Master Saturation slider of Hue/Saturation to –100. The layer turns grayscale. Add two points to the Curves effect’s RGB curve and shape it into an “S” curve to heavily increase the contrast. The goal is to separate the foreground objects from the sky (Figure 2.9).

fig2_9

Figure 2.9 A new copy of the highway footage is converted to a luma mask with Hue/Saturation and Curves effects.

2. Go to the middle flag layer and change the TrkMat menu to Luma Matte. The top layer is hidden automatically. Holes in the shape of the foreground objects are cut into the flag (Figure 2.10). Manual rotoscoping is not required.
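The luma matte built in steps 1 and 2 can be summarized with a short sketch: desaturate the plate, push the contrast with an S-shaped curve, and use the result as the flag layer’s transparency. This is only a conceptual Python/OpenCV approximation of the Hue/Saturation, Curves, and Track Matte combination; the file names and curve steepness are placeholders.

# Luma-matte sketch: bright sky keeps the flag visible, dark foreground hides it.
import cv2
import numpy as np

plate = cv2.imread("highway_0034.png")                     # original plate
flag_over_plate = cv2.imread("flag_over_plate_0034.png")   # flag already composited

gray = cv2.cvtColor(plate, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
matte = 1.0 / (1.0 + np.exp(-12.0 * (gray - 0.5)))         # steep S curve, like Curves
matte = matte[:, :, np.newaxis]                            # sky -> 1, bushes/posts -> 0

comp = flag_over_plate.astype(np.float32) * matte + plate.astype(np.float32) * (1.0 - matte)
cv2.imwrite("comp_0034.png", comp.astype(np.uint8))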

A chart detailing the layer workflow is illustrated by Figure 2.11. The project is included with the tutorial files as Project-Files\aeFiles\Chapter2\HighwayChallengeFinal.aep.

fig2_10

Figure 2.10 Close-up of flag placed “behind” the sign post with the luma mask technique.

Track Composition

1. Highway sequence: Hue/Saturation > Curves (TrkMat source)

2. Flag sequence: Hue/Saturation (TrkMat applied)

3. Highway sequence: Tracker

Figure 2.11 Composition, layers, and effects used in After Effects. The composition is shown as a box. Layers are numbered and written in bold type. Effects are written in order, with the left-most effect at the top of the effect list.

Node-Based Solution for Challenge A

You can motion track the highway footage by “hopping” between patterns. Additional hand-tracking is necessary where bushes completely occlude the beach. We can also apply luma keying techniques discussed in Chapter 1 to avoid rotoscoping foreground objects.

Transform Tracking

The first step to motion tracking in Fusion is to identify a pattern worthy of tracking and to place a pattern region over that pattern. You can follow these steps:

1. Create a new project. Choose File > Preferences and set the resolution to 1920×1080 and the frame rate to 30 fps. Import the Project-Files\Plates\Highway\Highway.##.png image sequence. Set the Global End Time to 95. Connect the Loader to a Display view.

2. Play back the time ruler. Note occluding foreground objects, which include bushes and sign posts. Note the small American flag that appears over a mobile home along the beach. Determine when the flag is not visible. The flag enters frame left on frame 3. The flag is partially covered on frame 18 and frame 38. The flag is completely covered from frames 86 to 91.

3. Go to frame 85. Choose Tools > Tracking > Tracker. Connect the output of the Loader to the input of the Tracker. Connect the Tracker to a Display view. A green pattern region, named Tracker 1, is shown in the Display view. Place your mouse over the pattern region box. A larger, dotted-lined search region is revealed. (The pattern region turns red after you manipulate it.)

4. LMB-drag Tracker 1 and place it over the bottom-left corner of the flag’s blue field. When repositioning the tracker, LMB-drag the dot at the top-left of the pattern region box. Resize the pattern region and the search region so that the pattern region completely surrounds the flag (Figure 2.12). To resize a region box, LMB-drag its edge lines. Larger regions slow the tracking calculations. However, due to the low-contrast, blue-cast footage, a larger region will help the tracker maintain higher accuracy.

fig2_12

Figure 2.12 Tracker 1 is resized and placed over the corner of the flag’s blue field on frame 85.

5. With the Tracker tool selected, click the Track Reverse From Current Frame button in the Tools > Trackers tab (second button from the left). See Figure 2.13. The Tracker calculates the movement of the pattern and lays down a motion path in the Display view with a keyframe for each frame shown as a green vertical line on the time ruler. You can stop the analyzation at any time by pressing the Esc key or pressing the Stop Tracking button. You can display the keyframe boxes along the motion path by selecting the Show Key Points button in the toolbar the Tracker embeds in the Display view (this button is fifth from the toolbar bottom). If the embedded toolbar is not visible, LMB-drag a selection marquee around the motion path so that it turns red.

fig2_13

Figure 2.13 The Trackers tab of a Tracker tool with default settings.

6. The initial tracking may cause the pattern window to slip off the flag. By default, the Tracker tool uses a non-adaptive tracking system based on the luminance of the frame. To switch to an adaptive system, click the Every Frame button under the Adaptive Mode parameter. To track a different channel, select or deselect the small channel buttons below the channel bars along the left side of the Selected Tracker Details section (see the bottom of Figure 2.13). For this tutorial, deselect all the channels except blue (the blue channel carries the greatest contrast).

7. Return to frame 85 and track backward using the Track Reverse From Current Frame button. The motion path updates. However, even with the updated Tracker settings, the Tracker 1 pattern window most likely slips off the flag. Hence, it’s necessary to pre-process the footage to increase the likelihood that the Tracker will succeed. In the Flow view, move the Loader tool away from the Tracker tool so there is a larger gap between them. RMB-click on the Loader-side of the connection line so it turns blue and choose Add Tool > Color > Brightness / Contrast from the context menu. A Brightness / Contrast tool is automatically inserted between the two tools.

8. Show the blue channel in the Display view by pressing the B key while the view is selected. Select the Brightness / Contrast tool and adjust its parameters to maximize the contrast within the channel and isolate the flag (Figure 2.14). For example, set Gamma to 1.4, Contrast to 1.5, and Brightness to –0.05.

fig2_14

Figure 2.14 Readjusted pattern and search regions. The blue channel is displayed. The contrast is increased with the Brightness / Contrast tool.

9. Now that we’re tracking an adjusted blue channel, we can readjust the Tracker 1 pattern and search regions. Select the Tracker so the region boxes are visible. While on frame 85, move the pattern box so that it’s centered on the dark star-field of the flag. Scale the pattern box smaller so that it tightly surrounds the field. At the same time, scale the search box so that it’s larger (Figure 2.15). These steps force the Tracker to concentrate on a smaller pattern while giving the Tracker a larger area to search for a pattern if it’s lost.

10. Analyze backwards from frame 85. The motion path should be significantly improved. If part of the motion path remains inaccurate, go to the first frame where the inaccuracy appears, LMB-drag the keyframe box to the correct position, and reanalyze with any of the Track buttons (analyze in the previous direction). There is no penalty for making minor corrections to the keyframe positions so long as the Tracker tool provides the majority of keyframe positions. Note that a small green axis handle is displayed in the center of the keyframe that matches the current frame of the time ruler. The axis handle offers a convenient means of determining the current frame when looking at the motion path. (With Fusion 8, the Tracker region boxes may not update their position as you stop and start play back, which can be confusing.) If the keyframe boxes are hidden, you can update the current keyframe position by LMB-dragging the green axis handle.

11. The Tracker will fail, once again, from frames 40 to 36. In this section, the top of a bush partially occludes the flag. Manually position the keyframes for frames 40, 39, 38, 37, 36, and 35. If necessary, return the Display view to the Color view (press C while the view is selected). The altered color cast caused by the Brightness / Contrast tool will be removed in a later step. As an alternative solution, you can move the track point to a different pattern; nevertheless, we will save that technique for another portion of the timeline.

fig2_15

Figure 2.15 Motion path created by the Tracker for frames 85 to 36. The motion path keyframes are displayed as small hollow boxes by activating the Show Key Points button. Note the relative smoothness of the motion path. The green axis handle is located at the far-left of the motion path. The blue channel is displayed.

12. When you’ve created a successful motion path for frames 85 to 36 (Figure 2.15), proceed to analyze backwards to the beginning of the time ruler. Once again, the flag is partially occluded on frames 18 and 17. Manually set the position of the track point for frames 18, 17, and 16. Go to frame 16 and analyze backward to frame 3. The Tracker will not function properly between frames 2 and 0 as the flag leaves the frame. Go to frame 3. Click the Track Center (Append) button under the Path Center parameter in the Trackers tab of the Tracker tool. LMB-drag Tracker 1 by the top-left dot of the pattern region and place it on a new pattern that stays in the frame. For example, move Tracker 1 so that it sits on a high-contrast vent on top of a mobile home in the center of frame. Click the Track Reverse From Current Frame button. The Tracker follows the new pattern to frame 0. When the analyzation is completed, the original motion path extends itself out of frame as if following the flag. The Track Center (Append) function understands that the original motion path is to be updated with an offset pattern. Thus, you are able to “hop” from one pattern to the next. You can return to the standard, non-append mode by clicking the Pattern Center button and continue the analyzation.

13. With the motion path from frames 0 to 85 complete, go to frame 86. The flag is partially occluded. At this stage, no keyframes exist from frames 86 to 95. As such, you cannot reposition Tracker 1. To add keyframes, click the Pattern Center button and analyze forward to frame 95, allowing the Tracker to create an inaccurate motion path. Return to frame 86 and update the keyframe position to follow the flag. Go to frame 92 and update the keyframe position. From frames 87 to 91, the flag is completely occluded by a bush. You can delete the keyframes at frames 87, 88, 89, 90, and 91, which creates interpolated, in-between positions for Tracker 1. To delete motion path keyframes, Shift-select keyframe boxes in the Display view and press the Delete key. You can also LMB-drag a selection marquee around the boxes in the Display view. To identify which frame a keyframe belongs to, let your mouse hover over the keyframe box. The frame number is displayed in a small yellow dialog.

14. Go to frame 92. Click the Track Center (Append) button. LMB-drag Tracker 1 by the top-left dot of the pattern region and place it on a new pattern that stays in the frame. For example, move Tracker 1 so that it sits over the high-contrast window on a vehicle in the center of frame. Analyze forward to the end of the time ruler. The motion path is appended as if the Tracker was following the flag out of frame. The motion path is now complete and continuous for the entire duration of the time ruler (Figure 2.16). Return to the RGB view in the Display view. Disable the Brightness / Contrast tool by selecting it in the Flow view and pressing Ctrl/Cmd+P. The colors of the footage return to normal.

fig2_16

Figure 2.16 Complete motion path. Note the keyframe gap from frames 86 to 91.

Applying the Tracking Data

After you’ve created an accurate motion path, you can apply the motion tracking data to a different part of the tool network. You can follow these steps:

1. Create a new Loader. Import the Project-Files\Plates\Flag\Flag.##.png image sequence. Connect Loader2 to a Display view. The sequence features a long flag with three tails that flap in the wind. In addition, a thin rope extends from the flag nose. The alpha channel appears incorrect with blue noise around the flag. Go to the Tools > Image tab and select the Post-Multiply By Alpha checkbox. The transparency is corrected.

2. Create a new Merge tool. Connect the output of Loader2 to the Foreground of the Merge tool. Connect the output of Loader1 to the Background of the Merge tool. Connect the Merge tool to a Display view. Use Figure 2.18 later in this chapter as reference. At this stage, the flag is too large. Select the Merge tool and change the tool’s Size value to 0.4. Reduce the Blend slider to 0.7 to give the flag partial opacity and allow the blue haze to show through.

3. Play back. The flag stays in the frame center and does not move. To apply the motion tracking data, select the Tracker tool, switch to the Tools > Operation tab, and click the Match Move button under the Operation parameter. This instructs the tool to prepare its output for transform tracking. Select the Merge tool. RMB-click over the Center parameter name and choose Connect To > Tracker 1 > Unsteady Position. This connects the XY positional information created by the Tracker’s motion path to the Center parameter. Play back. The flag now moves to match the motion of the flag in the highway footage. However, the flag may not be in a desirable position within the frame. To offset the flag but maintain the motion tracking, insert a Transform tool (Tools > Transform > Transform) between the Loader2 tool and the Merge tool. Select the Transform tool and use the interactive transform handle to move the flag. For example, place it to the immediate right of the American flag (Figure 2.17). Play back. The motion tracking continues to function. To activate motion blur, select Merge1 and go to the Tools > Common Controls tab (with the radioactive symbol), select the Motion Blur checkbox, and set Quality to 10. You can reduce the length of the motion blur trail by lowering the Shutter Angle value.

fig2_17

Figure 2.17 Final position for the flag, as seen on frame 48. Motion blur is activated. Shutter Angle is set to 90.

Separating Foreground Objects with a Luma Mask

At this stage of the project, the flag appears on top of foreground objects. You can rotoscope to prevent this occlusion. An alternative solution requires the creation of a luma mask. Follow these steps:

1. Create a Color Corrector tool (Tools > Color > Color Corrector). Connect the output of Loader1 to the input of the Color Corrector tool. Do not disturb the prior connections. Use Figure 2.18 as reference. Connect the Color Corrector tool to a Display view. Change the tool’s Master – Saturation slider to 0. The image becomes grayscale. Increase the Master – RGB – Contrast value to 5 (you can enter a number into the parameter field). The contrast is greatly increased. This serves as a luma mask.

2. Create a Channel Booleans tool (Tools > Color > Channel Booleans). Connect the output of the Color Corrector to the Background input of the Channel Booleans tool. Connect the new tool to a Display view. Set the Channel Booleans tool’s To Alpha menu to Red BG. The red channel of the Color Corrector tool is transferred to the alpha channel of the Channel Booleans tool’s output. Connect the output of the Channel Booleans tool to the Effect Mask input of the Merge1 tool. Connect the Merge1 tool to a Display view. Go to a frame where a sign post or a bush should occlude the flag (such as frame 34 or 58). The flag is properly covered by the foreground. Manual rotoscoping is not required.

The final tool network is illustrated by Figure 2.18. A chart detailing the node workflow is illustrated by Figure 2.19. The project is included with the tutorial files as Project-Files\compFiles\Chapter2\HighwayChallengeFinal.comp.

fig2_18

Figure 2.18 The final tool network. The Brightness / Contrast tool is disabled, as is indicated by the dark gray icon. The transform tracking information is transferred to the Merge tool’s center parameter via a Connect To option, which does not produce a visible connection line in the Flow view.

fig2_19

Figure 2.19 Simplified representation of the Fusion node network. Colored boxes represent branches of the network. Specific tools are listed under each branch with low-numbered tools existing farther upstream. The purple arrow represents the connection of the luma mask to an Effect Mask input. The dashed arrow represents the Connect To passing of motion tracking data.

Tracking Challenge B

Figure 2.20 shows three frames from the 128-frame image sequence Steps.###.png. Because motion tracking can have many different goals, it’s important to establish what type of tracking should be applied to this footage. The goal with this challenge is to motion track the stone steps and apply the tracking data to a bitmap logo. Because the camera is hand-held and moves in the Z direction, 3D camera tracking is necessary. Table 2.3 describes the strengths and weaknesses of the footage in the context of this motion tracking goal.

Table 2.3

Strengths

Weaknesses

Fairly good contrast with good separations between the light and shadow sides of steps along with various cracks in the stones

No single object or pattern stays within the frame for all 128 frames.

Camera does not pan aggressively left or right but moves forward consistently.

The camera is hand-held and moves in the Z-direction (the axis running through the lens). There are bumps and jitters as the camera person walks with the camera.

Video noise and compression are fairly light.

The resolution is only 1280×720. Lower resolution material is more difficult to track due to the likelihood that patterns will shift or disappear over time.

fig2_20

Figure 2.20 Top to bottom: Frames 1, 64, and 128 of a hand-held shot of stone steps. Image sequence adapted from “St John’s Steps – Uneven steps – Church of St John the Baptist – Bromsgrove – from St John Street – HD video clip” by Elliott Brown and is licensed under Creative Commons Attribution 2.0 Generic (CC BY 2.0).

Layer-Based Solution for Challenge B

You can motion track the footage of the steps in After Effects by applying 3D camera tracking. This style of camera tracking automatically detects numerous trackable patterns within the footage and extrapolates 3D positions for each pattern based on their relationships with one another. This data is then used to create a 3D camera that replicates the original, real-world camera. Instead of applying the tracking data to a different layer, the layer is treated as a 3D object that is seen by the new camera in 3D space (the new camera is automatically animated).
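Once the solver has produced 3D points and an animated camera, each point’s 2D screen position on a given frame follows from a standard pinhole projection. The sketch below is a heavily simplified illustration of that idea in Python with NumPy, not Adobe’s solver, and every value is hypothetical.

# Simplified pinhole projection: where a solved 3D point lands in the frame.
import numpy as np

point = np.array([0.2, -1.5, 6.0])        # solved 3D position of a step corner
cam_pos = np.array([0.0, 0.0, 0.0])       # camera position on this frame
cam_rot = np.eye(3)                       # camera rotation matrix (identity = no rotation)
focal = 1.8                               # focal length in arbitrary units
width, height = 1280, 720

p_cam = cam_rot.T @ (point - cam_pos)     # world space -> camera space
x = focal * p_cam[0] / p_cam[2]           # perspective divide
y = focal * p_cam[1] / p_cam[2]

screen = (width / 2 + x * width / 2, height / 2 - y * height / 2)
print(screen)                             # where an attached layer lands on this frame

Repeating this projection with the camera values from every frame is what makes an attached 3D layer appear locked to the scene.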

3D Camera Tracking

After Effects includes the Track Camera function, which applies the 3D Camera Tracker effect. To use this effect, follow these steps:

1. Create a new project. Import the Project-Files\Plates\Steps\Steps.###.png image sequence. Interpret the image sequence frame rate as 30 fps. Create a composition that matches the resolution (1280×720), duration (128 frames), and frame rate (30 fps) of the sequence.

2. Play back the timeline. Note that the Z-axis motion of the camera and the lack of consistent in-frame patterns make the footage unsuitable for transform tracking with a Tracker. With the footage layer selected, choose Animation > Track Camera. A 3D Camera Tracker effect is added. The effect begins to analyze the footage. This is indicated by a blue banner in the Composition view that reads “Analyzing.” The effect examines the footage forward and backward. At the completion of the analyzation, the effect “solves” the camera to determine the 3D positions of tracked points. When the effect has completed the tracking, a number of small x-shaped tracking points appear in the Composition view (Figure 2.21). If you play back the timeline by LMB-dragging the time slider, you can see how accurate the tracking is. Some track points disappear as their patterns leave frame or disappear due to motion blur. The disappearance of points is not detrimental as hundreds of points are generated. You can adjust the size of the points by altering the effect’s Track Point Size property. Points that are perceived to be closer to the camera are drawn larger while points far from the camera are drawn smaller.

fig2_21

Figure 2.21 Track points added by the 3D Camera Tracker effect, as seen on frame 55. These are only visible if the effect is selected. Track Point Size is set to 200%.

3. Although the tracking may be successful with default settings, it pays to experiment. You can stop the analyzation by clicking the 3D Camera Tracker’s Cancel button. You can re-analyze at any time by clicking the Analyze button. By default, the effect assumes that the camera has a fixed lens with an unknown focal length. (A fixed lens is a non-zoom lens that does not allow the focal length to change.) This setting is suitable for this shot. However, if you are working with a different lens, you can change the Shot Type menu. Menu options include Variable Lens and Specify Angle Of View. While Variable Lens is appropriate for shots where the focal length changes, Specify Angle Of View allows you to enter a Horizontal Angle Of View value. (An angle of view value is related to a focal length size but is dependent on the sensor size or film aperture size; a short worked example follows this procedure.) Another important property is Solve Method, which is located in the Advanced section (Figure 2.22). By default, Solve Method is set to Auto Detect, which allows the effect to extrapolate the type of camera motion. This may or may not produce accurate tracking results. You can also set the menu to Typical, Mostly Flat Scene, or Tripod Pan. Tripod Pan is suitable for shots where the camera is mounted on a tripod and only pans left or right without moving along the Z-axis. Mostly Flat Scene will work for shots where there is relatively little Z-axis motion but the camera is not fixed to a tripod. Typical is appropriate for hand-held scenes where there is motion in all directions. To improve the initial tracking with this shot, change the Solve Method menu to Typical and leave Shot Type set to Fixed Angle Of View. (When you change these menus, the effect automatically re-solves the camera.) To reanalyze the footage, click the Analyze button.

fig2_22

Figure 2.22 The 3D Camera Tracker effect before analyzation.

4. After analyzation and camera solving is complete, play back the timeline by LMB-dragging the time slider. The new settings improve the tracking where there is less slipping and jitter among the track points. Place your mouse in the Composition view. Note that a bullseye target is displayed when you move the mouse between points. This represents the plane between the nearest three track points. The target provides an interactive means to gauge the accuracy of the camera solve. For example, if you place your mouse between the points on frame 1, you can see that the target is aligned with the ground plane formed by the flat section of stones (Figure 2.23). If you place your mouse along the left or right wall in a later frame, you’ll see the target roughly aligned with the wall stones in a vertical manner. The alignment will not be perfect for every cluster of points. Nevertheless, if the alignment is roughly correct, the motion tracking will work.

fig2_23

Figure 2.23 A bullseye target is displayed between sets of three points when the mouse is placed in the Composition view.

5. At this stage, there is no 3D camera. You can create one by clicking the effect’s Create Camera button after the analyzation and solving is complete. This creates a new camera layer named 3D Tracker Camera. The camera exists in the After Effects 3D environment and is animated moving and rotating. To see the camera and its motion path, switch the Select View Layout menu from 1 View to 4 Views (Figure 2.24). The camera is drawn with a red icon that includes a small camera body and a large pyramidal frustum (the area of view). If you select the camera layer, the motion path appears. Play back the timeline by moving the time slider. The camera moves and rotates. To adjust the orthographic views within the 4-view layout, LMB-click in a view to select it, MMB-drag to track left/right/up/down, and rotate the MMB scroll wheel to zoom in and out. You can also use the camera control buttons along the main toolbar. If you select the Unified Camera tool button, LMB-drag tracks and RMB-drag zooms. Because we are motion tracking, you’ll want to avoid interactively changing the perspective view. By default, the top-right view of the 4-view layout shows the active camera. However, you can change this view to the 3D Tracker Camera by selecting that view and changing the Composition view’s 3D View Popup menu to the camera name.

fig2_24

Figure 2.24 The Composition view is set to a 4-view layout. The 3D Tracker Camera, with its motion path, is displayed. The camera is shown on frame 0. Note that the motion path rises up in Y over time, mimicking the camera person walking up the steps.
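As noted in step 3, angle of view depends on both focal length and sensor (or film aperture) size. The following worked example, with hypothetical values, shows the standard relationship in Python.

# Horizontal angle of view from focal length and sensor width (values hypothetical).
import math

sensor_width = 36.0        # mm (full-frame sensor assumed)
focal_length = 28.0        # mm

angle_of_view = 2.0 * math.degrees(math.atan(sensor_width / (2.0 * focal_length)))
print(round(angle_of_view, 1))   # roughly 65.5 degrees for these values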

Applying 3D Tracking Data

Unlike 2D transform tracking, 3D camera tracking requires that you add a new element to the 3D environment so that it is seen by the generated 3D camera. You can follow these steps:

1. Import the Project-Files\Art\Logo.png. This is a bitmap image with transparency that features a company logo. LMB-drag the logo from the Project panel to the timeline. The logo layer is placed on top of the layer outline. The logo is shown over the footage of the steps but does not react to the 3D camera. Click the 3D Layer switch for the logo layer (this features a small cube). If you don’t see the switch, click the Toggle Switches / Modes button at the bottom of the layer outline. As soon as the logo layer is converted to a 3D layer, it’s seen by the 3D camera. Play back. The logo appears to be hovering over the foreground steps.

2. You can change the logo layer’s Scale and Anchor Point values to move it into a position to line up with a particular step. However, it can be difficult to make an accurate alignment. Another solution allows you to attach a new layer to a particular tracking point. To do this, return the Select View Layout menu to 1 View. Select the steps layer and select the 3D Camera Tracker effect. The tracking points become visible again. Select a point to which you’d like to attach the logo layer. For example, go to frame 70 and select one of the points on the lower flight of steps (Figure 2.25). You can select a point by LMB-clicking it so that it’s drawn with a yellow circle. RMB-click over the circle and choose Create Null. A null 3D layer is created. The null is placed in the 3D environment at the same location as the selected point. The null does not render but does carry a set of transforms.

fig2_25

Figure 2.25 A track point is selected, as is indicated by the yellow circle. This point “sits” on a step.

3. To transfer the positional values of the null to the logo, you can manually copy the Position XYZ values. You can also set a keyframe and copy the keyframe values. To do this, return to frame 0. Set a keyframe for the null layer’s Position by clicking the Time icon beside the property. Select the new keyframe on the timeline (the small diamond). Press Ctrl/Cmd+C to copy the keyframe. Go to the logo layer and select the Position property name. Press Ctrl/Cmd+V. The keyframe is pasted to the logo’s Position. The logo moves to the same position as the null. The logo remains too large. Change the logo layer’s Scale to 35%. To make the logo appear as if it is sitting on a step, change the logo layer’s Anchor Point Y value to push the pivot to the bottom of the logo (and thus raise the logo up). Play back the timeline. The logo appears to sit on a step and undergoes scale and perspective changes appropriate for the camera (Figure 2.26). You can fine tune the angle of the logo by adjusting the logo layer’s X, Y, and Z Rotation values. To add motion blur, select the logo layer’s Motion Blur switch and select the composition’s Enables Motion Blur For All Layers button.

A chart detailing the layer workflow is illustrated by Figure 2.27. The project is included with the tutorial files as Project-Files\aeFiles\Chapter2\StepsChallengeFinal.aep.

fig2_26

Figure 2.26 Top to bottom: Frames 28, 66, and 128 showing the motion tracked logo going through positional, scale, and associated perspective changes.

Track Composition

1. Logo.png (Receives Position values from null layer)

2. Track Null 1 (Added by 3D Tracker)

3. 3D Tracker Camera

4. Steps sequence: 3D Camera Tracker

Figure 2.27 Composition, layers, and effects used in After Effects. The composition is shown as a red box. Layers are numbered and written in bold type.

3D Camera Tracking in Fusion

Fusion does not carry a 3D camera tracking tool. However, you can import a 3D point cloud and a solved 3D camera generated by another program, such as The Pixel Farm PFTrack or Voodoo Camera Tracker.

If you import a point cloud file through the PointCloud 3D tool (Tools > 3D > PointCloud 3D), points are arranged in Fusion’s 3D environment. The points represent tracked locations on the real-world set or location (equivalent to the x’s displayed by the After Effects 3D Camera Tracker). You can import a 3D camera through the FBX Mesh 3D or Camera 3D tools (also in the Tools > 3D menu).

The Foundry Nuke, another node-based compositor, provides the CameraTracker plug-in, which functions in a manner similar to the After Effects 3D Camera Tracker. Both Fusion and Nuke require a more complex set-up to make the 3D environment functional. Tools or nodes that generate or import 3D geometry, create or import 3D cameras, merge the 3D elements, and render the combined 3D scene are required.

Final Motion Tracking Thoughts

When tackling a difficult motion tracking situation, consider the following:

Look at Other Channels Consider tracking a different color channel, such as blue, or a different specialized channel, such as luminance or saturation. Each channel will produce unique tracking results.

Pre-Process Adjust the footage before tracking. For example, add blurs, sharpens, or color effects/tools to affect the contrast.

Hop between Patterns If no single pattern is available for the duration, hop between patterns. Each tracking effect or tool has a unique feature that allows this.

Use Other Trackers Experiment with all the trackers or tracking systems available to a program. They each have their strengths and weaknesses and may become useful in specific situations. For example, switch from transform tracking to corner pin tracking or planar tracking. Consider using 3D camera tracking if there is significant camera motion.
