
OTHER WORKFLOW CONSIDERATIONS

VIRTUAL STUDIO TECHNOLOGY

Dan Novy

Virtual studio is a term that describes a loose collection of technologies used to achieve real-time mixing of live-action foreground elements with a chosen background—real or created inside a computer. The technology is used currently for newscasting, entertainment news shows, and children’s broadcasting. It is a way of creating much grander backgrounds than can physically be built on the budget available. Additionally, it has been used on film sets to help the director and director of photography (DP) set the shots for scenes that will take place in a virtual background. Regardless of the system used, all virtual studios capture, process, mix, and render the final output in a similar manner.

Initially, the foreground element—a live actor, puppet, and the like—must be captured against an environment capable of being keyed in real time. Often a set that has some general set pieces painted chroma blue or green is used, similar to the screens or environments used for visual effects bluescreen compositing (see the Greenscreen and Bluescreen Photography section in Chapter 3 and the Compositing of Live-Action Elements section in Chapter 6). The obvious difference in the virtual studio is that the keying, or creation of an alpha matte, must occur in real time. Unlike software-based compositing, which can work offline, the virtual studio relies on specialized hardware from various vendors capable of keying and outputting the video stream at the chosen frame rate.

The second necessity of the virtual studio is real-time camera tracking and positioning, sometimes known as matchmoving. The virtual camera created within the CG world must synchronize its position, orientation, and lens with the camera used to capture the foreground element—again in real time. Several systems are available that use various methods, including inertial tracking, infrared beacons similar to motion capture systems, tracking markers, or camera-encoding hot heads capable of outputting the camera's orientation and/or position. Most solutions employ a mixture of these approaches. (See Chapter 4 for further explanations of these technologies.)

Once a matchmoved and keyed foreground element has been captured and processed, it must be mixed, again in real time, with the chosen background environment. This background can either be photographic, captured from another location, or as in most cases, created in a 3D animation package. Unlike software compositing, the elements must be mixed and rendered in real time, for which several hardware-based solutions exist or can be constructed. Once mixed, the output stream can be broadcast live or saved to an offline storage medium. It is important to note that the matched movement of the created background to the foreground photographed elements is automated once the system has been set up and calibrated.

After the initial cost of creating the capture environment, including camera matchmoving, keying, and the specialized hardware necessary for real-time mixing, has been expended, the ongoing cost of the virtual studio can be kept low because only new and varied environments need to be constructed in 3D space, without the time and construction costs associated with real-world set design and building. The space and system can be repurposed for multiple shows or projects without incurring additional costs.

ANALYSIS OF A PRODUCTION WORKFLOW

Stephan Vladimir Bugaj

What is a workflow? Creation of a production workflow starts with analysis and design. To do the analysis, it is important to have an understanding of what a workflow is. A workflow is a specific set of procedures and deliverables that defines a goal. The overarching workflow is the total production workflow. The goal is to make a movie, and that necessitates a set of procedures that results in the deliverable: the film. Beneath that is a series of departmental workflows, beneath which are artist workflows, and beneath those, task workflows.

A workflow defines a procedure, or series of operations, through which a task is performed and a deliverable produced. In defining a workflow, the task is the goal-oriented view of the work, and the deliverable is the defined result. Operations are then defined to achieve that goal. A good workflow definition is end to end: it defines what the artists need to do to receive the input deliverables, perform each step of the workflow, and hand off the output deliverables.

How granular to make each operation in the definition is an art more than a science, and it requires communication among people in the organization. A level of granularity that most artists and managers agree makes sense is a good starting point. Workflows defined at a whole studio level will then be partitioned and turned into department-specific workflows, which will eventually become tool specifications and user documentation, through iterative refinement. But a good workflow definition makes these subsequent steps tractable.

In practice, a comfortable level of detail is usually one that defines steps involving context switching by the artist, not detailed operations of a tool. For example, the task “lay out UVs” could be defined as a number of small operations about how to operate a particular UV layout tool, but for the purposes of defining a studio workflow that would be counterproductive. In defining the modeling or shading workflow, there may be more steps in the workflow than just “lay out UVs.” The steps might be something more like “lay out UVs, check with grid renders, name the UV maps according to spec, etc.” than the very low-level, step-by-step details of how to do it.

This definition process often results in a workflow definition that has subdeliverables (such as status updates in a tracking database), in addition to the primary deliverable. Many operation steps could be viewed as stand-alone workflows in theory, but in practice it may be irrelevant or overwhelming to do so. Studio-level production workflows should define the process at a level of detail that is general enough to be understandable by everyone and also define major deliverables. Department-level (and specializations within departments) workflow definitions are more detailed, but should still be high enough level to look more like a to-do list than a how-to manual.

Example: A Simple Department-Level Workflow for Shading

•   Receive art from production design.

•   Receive geometry from modeling.

•   Validate geometry:

   Single-rooted hierarchy.

   No disconnected edges.

   Out-facing normals.

•   Lay out UVs:

   Use multiple maps to maintain detail as needed.

•   Set up projection paint cameras and passes.

•   Set up tumble paint passes.

•   Mark the model as “ready to shade” in the tracking DB.

•   Develop shader:

   Assign materials from library.

   Test-render initial setup.

   Refine shader to hit target artwork.

   Dial in basic look:

–   Develop new shader library code if needed.

–   Apply textures from texture library.

–   Create new textures if needed.

–   Paint additional detail.

   Mark the model as “ready for review” in the tracking DB.

   Show in review.

•   Obtain review approval.

•   Mark the model as “shaded” in the tracking DB.

•   Install new textures into the texture library.

•   Install new shader library code into the code repository.

•   Contact downstream department(s) and deliver model.

Notice that even this simple shading pipeline example defines five delivery points prior to the master deliverable: three database entries, and two optional installs if newly shared materials are created. It assumes that the artists know their jobs and does not micromanage their steps. Yet it is detailed enough that, when purchasing or developing a toolset to facilitate this workflow, the important details are already specified: Model validation scripts need to check specific issues, the model data format must support multiple UVs, both tumble and projection paint systems are needed, etc. As the workflow is implemented, each step will turn into a well-defined procedure, but such implementation details are not needed in the workflow design. For example, how the UVs are laid out doesn't matter, as long as the model has valid UVs. Over the lifetime of a studio, these workflow requirements will evolve, and so will the tools.
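
As an illustration of the kind of model validation script implied above, the following is a minimal sketch in Python. The Mesh structure, its field names, and the checks are invented stand-ins for whatever representation a studio's packages actually expose; it is not any particular application's API.

    from dataclasses import dataclass

    @dataclass
    class Mesh:
        root_nodes: list        # names of top-level transforms
        edge_face_counts: dict  # edge id -> number of faces sharing that edge
        normals_outward: bool   # result of an upstream normal-direction check

    def validate(mesh):
        """Return a list of human-readable validation failures (empty list = passes)."""
        problems = []
        if len(mesh.root_nodes) != 1:
            problems.append("hierarchy is not single-rooted: %r" % mesh.root_nodes)
        dangling = [e for e, n in mesh.edge_face_counts.items() if n == 0]
        if dangling:
            problems.append("disconnected edges: %r" % dangling)
        if not mesh.normals_outward:
            problems.append("normals are not consistently out-facing")
        return problems

    if __name__ == "__main__":
        mesh = Mesh(root_nodes=["charA"], edge_face_counts={"e1": 2, "e2": 0},
                    normals_outward=True)
        for problem in validate(mesh):
            print("FAIL:", problem)

In practice, a script like this would be wired into the "validate geometry" step and its failures pushed to the tracking database rather than printed.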

From Workflow to Pipeline

A pipeline is an implementation of a workflow specification. The term comes from computing, where it means a set of serial processes, with the output of one process being the input of the subsequent process. A production pipeline is not generally perfectly serial because real workflows usually have branches and iterative loops, but the idea is valid: A pipeline is the set of procedures that need to be taken in order to create and hand off deliverables.
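
To make the computing analogy concrete, here is a toy Python sketch in which each step consumes the previous step's output and hands its own deliverable downstream. The step names and the dictionary-based "asset" are purely illustrative.

    def model(asset):
        asset["geometry"] = "hero_prop.obj"
        return asset

    def shade(asset):
        asset["shader"] = "painted_metal"
        return asset

    def light(asset):
        asset["lit"] = True
        return asset

    def run_pipeline(asset, steps):
        # Serial in this sketch; a real production pipeline also branches and loops.
        for step in steps:
            asset = step(asset)
        return asset

    print(run_pipeline({"name": "prop01"}, [model, shade, light]))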

The creation of a pipeline from a workflow specification is a matter of selecting and/or developing tools, and procedures for using them, that implement the workflow. A pipeline is, therefore, a workflow specification plus a set of tools that are to be used to achieve the goals defined therein. To create a pipeline, the high-level workflow specification needs to be turned into system requirements, which can also be considered a low-level workflow specification. So requirements analysis gives workflow specification and requirements, design provides a detailed plan for creating the pipeline, and the implementation of that design provides the pipeline itself.

Requirements Analysis

Requirements analysis is a systems engineering term for the process of determining the conditions that a system needs to meet and the goals users must be able to achieve by using it. The process involves taking into account the potentially conflicting needs of multiple sets of users and beneficiaries (the people who receive the deliverables from the system).

A valid requirement is one that can be defined in a testable way, is relatable to a system goal, and is defined at a level of detail sufficient for system design. A set of requirements for systems implementation will start with a high-level workflow specification and, through iterative refinement, be turned into requirement specifications for implementation.

Formalized requirements analysis is the subject of dense books and can be very complex. That level of detail is not covered here; instead, enough detail is provided to develop requirements for a studio workflow. What any requirements document needs is the following:

•   a definition of the goal to be achieved, and the primary deliverable(s) to be produced;

•   identification of the stakeholders (users and beneficiaries);

•   a workflow specification overview of the process at a high level;

•   for each step in the workflow:

   a high-level definition of the task (the scenario),

   input requirements, meaning a definition of the data, if any, that is present to be operated on at the start of the step defined by the requirement,

   functional requirements, meaning a description of what must be done in this step, at a level of detail sufficient to implement it,

   usability requirements, meaning both user interface design and performance expectations,

   output requirements, meaning a definition of the data to be output and passed on to the next step;

•   life cycle expectations (e.g., how long the system will be used, which impacts how much funding is spent on it).

Developing a set of requirements, if done right, can save a lot of time and money during the design and implementation stages. To do it right, it needs to be kept simple. Only requirements that an organization can actually achieve, based on what is right for its size, should be specified. The size of the organization often determines whether a system that satisfies the requirements is purchased or implemented in house. The number of people in the meetings, the number of meetings, and the length of time dedicated to the process also must scale with size. But that doesn't mean details should be minimized. The requirements document must spell out everything system implementers need to take into account in order to design and build a system that will meet the needs of the staff.

A sufficiently simple, yet usable requirement might look something like this UV layout example:

•   Definition: UV layout is the creation of a well-defined 2D texture space that has point-to-point correspondence with the 3D mesh.

•   Users: Modeling or surfacing TDs.

•   Beneficiaries: Surfacing TDs and texture painters.

•   Process overview: Take a 3D mesh, start with an automatic mapping, and refine by hand until the grid test render looks good.

•   Scenario: Automatic mapping.

   Input: A mesh with no existing UV layout.

   Functional: Choice of several mapping techniques including cubic, spherical, and cylindrical projections, pelt-mapping (automated slicing of a 3D mesh into a 2D projection), and projection from any camera.

   Usability: Each technique should produce some result for even a naïve user without parameter tuning. Parameter tuning to produce a better result should be in a user interface (UI) that looks similar for each technique, with parameters that do the same thing having the same name and UI position. Selection of cut edges for pelt-mapping should be automatic, with a user override that uses the standard edge selection tool of the host application. Results should be displayed on a gridded 2D plane immediately. Any parameter tuning that will cause the mapping to take more than a couple of seconds should cause the UI to alert the user and ask if he or she wants to continue.

   Output: A UV map in the data format required by the texturing system. The map should be editable with the hand-editing tools specified in the next requirement.

•   Scenario: Refinement tools.

   Input: An automatically generated UV map.

   ... and so on.
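
As a concrete illustration of one of the automatic mapping techniques named in the functional requirement above, the following Python sketch computes a simple spherical projection for a single point. It is a minimal, package-agnostic example; a real tool would run this (or a far more sophisticated mapping) over every vertex of the mesh.

    import math

    def spherical_uv(point, center=(0.0, 0.0, 0.0)):
        """Project a 3D point to (u, v) in [0, 1] using a spherical mapping."""
        x, y, z = (p - c for p, c in zip(point, center))
        radius = math.sqrt(x * x + y * y + z * z) or 1.0
        u = 0.5 + math.atan2(z, x) / (2.0 * math.pi)
        v = 0.5 + math.asin(y / radius) / math.pi
        return u, v

    print(spherical_uv((1.0, 0.0, 0.0)))   # point on the equator -> (0.5, 0.5)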

Formal object modeling language isn't always necessary. Requirements simply need to state what needs to be done, by whom, and to what effect. If this weren't just an example, the pelt-mapping system would likely be spelled out in detail in a subordinate requirement, because it is a complex component, and the mapping and refinement tools might be broken out into one scenario per technique—but otherwise the requirements don't necessarily need to be much more complex than the example. Systems and UI designers will take this information and turn it into designs for the developers, or if off-the-shelf solutions are to be purchased, then there will be a set of requirements and designs that can be discussed in detail with potential vendors. Stakeholders should be included in refining the requirements and designs because implementation details may cause changes, but that is an expected part of the process. If attention to detail and perseverance are applied during this phase, however, there will be less frustration during implementation.

From Artistic Requirements into Technical Specs

Artistic requirements need to be translated into technical specifications. For the workflow specification and the requirements analysis and design process, this means that each process must be defined in terms of both its artistic goal and its technical necessities. Supervisors make this translation with respect to a director's vision all the time, taking an artistic desire and turning it into a technical plan, and the need is not much different when it comes to defining a studio workflow for in-house artists to turn their interpretations of the director's and production designer's visions into finished assets ready for delivery to the customer.

Focusing on the artistic goal is essential. The ultimate goal is making images, not building systems or writing software. Technical and usability requirements should be designed to make achieving artistic goals as simple as possible. Reducing complexity wherever possible is key. Artists want to focus on their task, not on the pipeline, and a studio can achieve very good results if its system requirements yield packages that turn complex simulations, shading, and so on into flexible yet easy-to-use tools for the artists.

Part of the translation process is that artists will frequently state things in terms of what they already know. During the requirements analysis process, it is important to understand that when someone states a view about how the system needs to function, the task is to translate that view into what the system needs to achieve. Further, the developer needs to isolate which aspects of the application the user actually values and which are merely tolerated, and then build the requirements based on that information. If the artists use an existing tool, and they are given one that has all that tool's benefits and fewer of its deficiencies, they have been helped to progress and evolve.

For example, if the artists state that they must have the Roadkill tool or else their job cannot be done, it is important to understand that what they need is a pelt-mapping UV layout tool and to translate that into a requirement for a pelt-mapping tool. This inevitably takes into account other artists’ requirements and, therefore, may or may not be best implemented by using the Roadkill application specifically. The artistic goal is “get the best UV layout possible onto a highly complex geometry, with the least amount of time spent doing by-hand layout,” not “run Roadkill.” Time needs to be spent talking to the artists about both the specs of their deliverables and their working habits in order to glean this information. This is time well spent.

The technical specifications then developed from these artistic requirements must focus primarily on the goal (in the example, a good UV layout) and how to achieve it, rather than on implementation details (such as what particular pelting algorithms will be used and what kind of data representation those internal computations will act on). What needs to be gleaned from an artistic requirement is a technical requirement that states the artistic goal, defines what the technical deliverable is that embodies that artistic goal, and spells out both how the artists will work within the system to achieve the goal (process and UI) and what the system is doing (at a high level) to allow them to do so.

Balancing Individual versus Group Requirements

When designing a production workflow, it is often necessary to balance individual artists’ requirements and preferences with what is needed for the entire team to operate smoothly. Artists may come to the studio accustomed to a personalized workflow that doesn’t integrate well with a larger group project. Disorganization, or an organizational system that only makes sense to the individual, is often the hallmark of this situation. Many artists will attempt to cling to their personal system, even though it will not serve them well in a group environment.

Solving this problem is as much a people management issue as a requirements analysis one. Faced with a workflow that requires storing data on a server in well-defined locations organized by project and shot, and which requires good hygiene in terms of naming objects and parameters, some will complain about “wasting time” on these requirements that “don’t show up on screen.” The solution is not to allow artists to deliver incomplete and poorly organized work to downstream departments, dumping additional work onto other artists. Requirements must be carefully considered to minimize the number of steps artists must take to accomplish their individual tasks, balanced against maximizing the amount of necessary data that is delivered to other artists and managers downstream. And management needs to convince the artists that this will be the case and that their workflow will contain no unnecessary steps.

To achieve this balance, it is useful to look at what benefits come from a personalized workflow and try to replicate them as much as possible in the studio workflow. Individual artists perceive the following as major benefits of their individual workflows:

•   Knows where all the files are located

•   Knows what all the objects and parameters in the file(s) are named

•   Doesn’t need to waste time handing off data to other artists

•   Doesn’t need to waste time learning anything new

The first two are easily replicated in a studio workflow by creating a system of well-defined locations for data that is organized by production, scene, shot, and task, and by developing standards for scene graph organization, including naming of objects and parameters. The latter two cannot be avoided in a large organization; what can be done is to structure the workflow into departments that maximize the amount of work an individual can accomplish according to the artists' skills, balanced against the need to work in parallel as much as possible.

To best facilitate working in parallel, decomposition of conceptual assets such as models and shots into files that represent tasks, not just tools, is essential. This allows artists whose work is independent to work at the same time and those who are dependent on each other to iteratively refine in parallel (one artist checkpoints, the other begins work and takes it as far as he or she can until the previous artist updates his or her work—and so on). For example, even if the entire system is based in Maya, splitting a shot into an environment.ma file, characters.ma file, lights.ma file, and fx.ma file and referencing them into the shot .ma file already allows four people to work simultaneously, as opposed to only one (a minimal referencing sketch follows the list below). Once the set is roughed in, and characters given blocking animation, the lighters and visual effects artists can begin their work and make refinements as the environments and characters teams refine their work. Within each department task the workflow may resemble an individualized workflow, in that the artist works solo until delivering the appropriate file into the system for other artists to work from, but between departments it is essential to do the following:

•   Specify well-defined asset locations.

   Use a file structure that represents logical workflow assets such as models and shots, not just file formats.

   Use well-defined naming conventions and structures inside each tool.

   Make delivery of files into the proper global location in the system easy, so the artist doesn’t waste time on handoffs.

•   Facilitate working in parallel.

   Decompose logical assets into files/sets of files that are relevant to workflow departments, not just separated by the tool that reads and writes the file.

   Allow artists to work locally, without being impacted by others’ changes, until they reconcile with the global asset system.

•   Keep the individual artist workflow intimate.

   Build on well-known tools (or replicate wherever possible well-known tool concepts if the tool is being built in house).

   Specify scripts and plug-ins that interface with the asset system within the primary tools, so checkouts, check-ins, and hand-offs are well integrated into the artists’ existing workflows.
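
Here is a minimal sketch of the shot decomposition described above, written against Maya's Python commands module. It runs only inside a Maya session, and the paths, file names, and namespaces are assumptions standing in for a studio's real conventions.

    import maya.cmds as cmds

    SHOT_ROOT = "/shows/example/seq_dr/dr0010"   # assumed asset location

    # Reference each department's file into the shot under its own namespace.
    for part, namespace in [("environment", "env"), ("characters", "char"),
                            ("lights", "lgt"), ("fx", "fx")]:
        cmds.file("%s/%s.ma" % (SHOT_ROOT, part), reference=True, namespace=namespace)

    # Save the assembled shot file that downstream departments will open.
    cmds.file(rename="%s/dr0010_shot.ma" % SHOT_ROOT)
    cmds.file(save=True, type="mayaAscii")

Because each department works in its own referenced file, an update from the environments team flows into the shot the next time the shot file is opened or its references are reloaded.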

Another requirement in getting individual artists to work efficiently in a group is good communications tools and techniques. Even with the best standards and systems, there will be cases where artists simply need to talk to each other and agree on how something new and different will fit into the system. It is not difficult to add new asset types into a well-defined structure, provided that flexibility is a part of the requirements. However, the mechanisms must be in place for artists and tool builders/maintainers to communicate about these evolving requirements during a production. Requirements that certain data about assets such as shots and models be deployed in human-readable formats can help not only with emergency production hacks, but also in enabling communications about the assets. Designing communication tools into the system, such as a task and fixes allocation and tracking system, notifications about changes to which one can subscribe, and a chat system, is also helpful.

Ultimately, if the studio workflow is sufficiently well defined; is easily understandable; facilitates parallel workflows, iteration, and communication; and can be flexed when needed, artists will not miss working in an individualized workflow for very long.

Service Bureau versus In-House Requirements

A studio that operates as a service bureau, such as a visual effects house providing effects for production companies, has certain requirements that a shop that is either doing feature animation or is an in-house visual effects provider at a large studio does not. All of the general principles and goals of performing the requirements analysis apply, but the external partners must be taken into account as stakeholders in the analysis.

Service bureaus need to be able to input data from third parties, and output it in a readable format as well. File interchange formats are discussed in detail later in this chapter, but an added general principle of analysis for a service bureau is that the input and output points of the pipeline must be specified. If all that is imported are plates from live-action shoots, then what image format(s) will be accepted depends on the needs of the clients. However, if digital scene files, edits, composites, etc., must also be input from other studios, the requirements for each variant of each of these types of inputs must be specified. Output requirements are similar: The downstream clients’ needs must be analyzed and what image and data will be delivered must be determined.

Formats designed specifically for interchange, such as the Autodesk cross-package interoperability format FBX (.fbx), make the implementation task easier, but the requirements analysis cannot be considered complete based on the decision to use .fbx. Issues such as potentially different plug-ins, shading libraries, etc., make the interchange process complex. Both the requirements for checking and validating this data and the data delivery checklists to share with vendors and clients on both ends of the workflow must be specified.

Because facilities want to protect IP in the form of plug-ins, libraries, etc., sharing data other than image plates can be difficult. Given how frequently multiple facilities work on a single show, however, it is essential to be able to provide a comprehensible and comprehensive list of requested data to the other party, to check the validity of what is received, and to have mechanisms for dealing with missing data (even if that means re-creating it within the facility). The requirements analysis must reflect these issues.

While a more formal treatment of requirements analysis can result in more deliverables, at the end of the analysis the minimum set of deliverables that can get the workflow development project going consists of the following:

•   a high-level workflow overview that explains the entire end-to-end workflow in an understandable manner and allows everyone involved to know the overarching goals of the system;

•   a requirements specification for each department workflow, including interchange with the departments before and after it in the workflow; and

•   a requirements specification for each task within the department, including the interchange with either other artists or other components of a single-artist process.

At a minimum, each requirement should contain the information detailed earlier in this section: definition, users, overview, scenario, input, functionality, usability, and output. The requirements should be collected into a single volume of workflow requirements that, taken as a whole, define the end-to-end workflow of the studio. Whereas the detailed implementation designs of the systems may get a lot more complex, the requirements analysis documents should be kept as simple as possible while still fully describing the requisite features.

These deliverables are passed along to the following:

•   Stakeholders: for review, and iterative refinement alongside the analysis team.

•   Designers: as the input to their workflow design process.

•   Implementers: as a guideline for understanding the design documents, because the requirements provide a concise, goal-oriented description of each step that can be obscured within a thorough design packet.

•   Users: as a conceptual-level user’s guide to understanding what the system is expected to do for them, which they can refer back to when communicating to the development team any problems they run into with a tool or procedure not delivering on its promise.

Over the years, as studios grow, develop new techniques and tools, and change the workflow to accommodate new ideas, these documents can be revised in order to help readers understand how the changes integrate with, or replace, previous parts of the workflow. Maintaining them helps with understanding the scope and impact of changes before they're made and provides an ongoing reference against which tools and procedures can be checked to make sure they're functioning as required.

DESIGN OF A PRODUCTION WORKFLOW

Dan Rosen

The tools and user documentation that grow out of workflow design should allow for solid communication and reasonable speed in executing tasks. It is critical for the tools and documentation to be intuitive and user friendly. The design of a workflow comes not only from analysis but also from experience. Keeping the design of a workflow open enough to easily integrate changes is an art unto itself. Feedback and continued review of the workflow should spawn upgrades to the process, and continued revisions of the workflow should make it easier to work faster and concentrate on the creative tasks at hand.

From Analysis to Design

Taking a page from the history of architecture, design is inherently about form following function. The form of a workflow should help optimize time spent on creating images and reduce time spent on redundant tasks. An artist may need to look up a client note or on-set camera data, or publish an image for supervisor review. A producer may need to input a schedule and track artists' progress. The workflow should be designed to allow different departments to input data and read data, and it should contain tools to create and review images, all with an intuitive ease of use.

The design of the workflow should respect the majority of users. Whether they are supervisors, artists, or production staff, everyone must speak a common language. Ultimately all people in the organization should have a basic understanding of each department's workflow. In many cases departments share some of the same tools for viewing and tracking data and images. It is important to design a workflow with the entire organization in mind and then break it down into departmental workflows that all plug in together.

Speaking the same language can be achieved through experience and time spent with the same people working side by side, but the workflow can and should allow for integrating new people into the team as well as new goals and tasks. There is a common language in computer science that tends to apply to visual effects and, more and more, to everyone who has used a computer.

Change and growth of the workflow may also be driven by new standards that form every day. To take another creative tack, design is much like writing, where good writing is rewriting. A workflow will continue to grow. If the design and architecture are good, then it will adapt to scale and contingencies.

The main function defined during analysis, that is, to make a movie, may be broken down into overall workflow, departmental workflow, and, finally, artist workflow. It comes down to relatively basic to-do lists that anyone can understand.

The deliverables are the tangible result of the effort put into workflow design. Ultimately the workflow, or to-do list, is designed and executed via functional tools, documentation, and training. Examples might include:

•   customized artist applications and plug-ins;

•   a database with viewing and entry interfaces to track visual reference, assets, tasks, status, schedule, internal notes, client comments, etc.;

•   tools within artist applications to search, replace, and add assets to the workflow tied into the database;

•   tools within artist applications to utilize shot- or sequence-specific setups and/or company-wide macros or plug-ins to achieve a consistent look, technique, color space, file format, etc., tied into the database;

•   flip-book applications to view visual references, individual elements, shots, or even surrounding shots in editorial context;

•   documentation through web or other digital formats, including images and video; and

•   training in a formal classroom setting, or in smaller breakdown meetings/turnovers, or even one-on-one interactions.

It has become increasingly easy to use open source and off-the-shelf software to design the workflow. Even a relatively small visual effects studio can create a database that acts as the backbone of tracking data and assets and also contains documentation and training materials. It is also entirely possible to use off-the-shelf software, in combination with smaller scripts and applications that make those tools interact with a database, to integrate the company's workflow into the artist staff's and production staff's day-to-day tasks.
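
For example, a small studio's tracking backbone can start as little more than a table of assets and statuses. The following is a minimal sketch using Python's built-in sqlite3 module; the table layout and status strings are invented for illustration, and a real system would add users, tasks, notes, schedules, and review history.

    import sqlite3

    conn = sqlite3.connect("tracking.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS assets (
                        name   TEXT PRIMARY KEY,
                        status TEXT,
                        note   TEXT)""")

    def set_status(name, status, note=""):
        """Insert the asset if it is new, otherwise update its status."""
        conn.execute("INSERT INTO assets (name, status, note) VALUES (?, ?, ?) "
                     "ON CONFLICT(name) DO UPDATE SET status=excluded.status, "
                     "note=excluded.note", (name, status, note))
        conn.commit()

    set_status("dr0010_char_hero", "ready to shade")
    set_status("dr0010_char_hero", "ready for review", "first look-dev pass")
    print(conn.execute("SELECT name, status, note FROM assets").fetchall())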

Regardless of whether the workflow design is built from entirely customized software or off-the-shelf software, speaking the same language becomes very important. Software user-interface (UI) designs are standardized in terms of key commands and processes. Most people now use computers, and schools train with particular software and methodologies, which reinforces these standards. This allows new artists and production staff to pick up a tool and have a better chance of finding the correct menu, button, or hot key. An easy example is copy-and-paste: the use of the "Control + C" and "Control + V" keys to copy and paste is almost universal across applications and tools. A more complex common UI control may be using the arrow keys to step forward frame by frame, or the "F" key to frame or fit the image or selection within the main UI window. Even more complex are the UI standards for 3D movement, including things like "Option + Left-Click" to orbit or "Option + Middle-Click" to pan.

In Western culture, books are read left to right and top to bottom. Therefore, the layout, size, and legibility of menus, shelves, trays, buttons, drop-downs, pop-ups, etc., are all important considerations when designing any UI. The value of intuitive software cannot be overstated: the more intuitive it is, the more productive the staff will be. A well-designed workflow and pipeline means that everyone can understand the workflow and its tools and find them user friendly.

Workflow tools can enhance every part of the overall process. The goal is to continually improve communication and increase speed and efficiency while ushering images through the workflow. The tools should remain as transparent as possible. This may be achieved by keeping the tools intuitive and integrating changes seamlessly.

Designing to Scale

Designing the workflow to scale goes back to understanding the main objective, in this example, to make a movie. Even though the workflow has been considered hierarchically (per departmental workflows, artist workflows, and task workflows), the design of the overall workflow should reflect the bigger picture. In an organization that respects openness, it is important for all members to understand the bigger picture.

It is common to break down the task of making a movie, or any narrative project, from the greater whole into sequences and then individual shots. Designing a workflow around the components of the project makes the end goal of delivering individual, final images scalable. In most cases many departments contribute to particular sequences and shots. Therefore, a sequence and its constituent shots are at the highest level of organization.

The most common directory structures for visual effects and animation are broken down by show > sequence > shot. From there, the work can be broken down further by task or department, for example, modeling, rigging, texturing, effects, lighting, or compositing. Regardless of the task, however, each asset or element is associated with its place in a shot or sequence and therefore in the project.
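
A minimal Python sketch of that convention follows; the root location and the show, sequence, shot, and department names are placeholders, not a prescribed layout.

    from pathlib import Path

    def shot_dir(root, show, sequence, shot, department):
        """Build the conventional show > sequence > shot > department location."""
        return Path(root) / show / sequence / shot / department

    print(shot_dir("/shows", "example", "seq_dr", "dr0010", "comp"))
    # -> /shows/example/seq_dr/dr0010/comp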

The process of making a movie or project grows and shrinks based on directorial and editorial decisions. This has a direct impact on the workflow and all of the tools. The tools have to account for adding or omitting shots and sequences. It is also important to account for directorial and artistic changes that directly impact the workflow. For instance, a workflow to create fully 3D environments may be switched to matte-painted backgrounds for artistic, staffing, render power, or even budgetary reasons. The workflow has to be able to adapt, bringing the new matte-painted backgrounds into shots with the correct camera move and integrating them into the scene with the rest of its elements in a whole new way.

Whether painting a background, animating a character, or tracking the status of an asset, the workflow comes back to a given shot in a sequence in the greater whole, the movie. Each task down the pipeline of a given shot has to keep the input assets up-to-date and contribute new output of assets and data back into the pipeline. The workflow tools should be designed to make the information for each user easy to obtain and update.

Designing for Contingencies

A big part of production is to anticipate problems. Designing for contingencies reduces the chance of having to change the basic architecture of the workflow. Growing a visual effects or animation studio for many years will bring a level of sophistication to workflow, but it is also important, if not an art, to know when to adapt to new formats and methodologies versus holding on to well-formed standards. Migrating and testing new components of workflow as well as phasing out components are extremely sensitive tasks. Tools should allow for legacy operations, especially when updates are pushed out in the middle of an active production. Following up by weaning from the old to the new tools requires interdepartmental sign-off. The workflow should allow for the systematic implementation of change.

Redundancies can provide multiple checks on assets being pushed through the pipeline. Tools can check and validate assets in the most basic sense, but this may also come down to reviewing assets throughout the workflow and recording those findings in the database.

Making images for a film is an iterative process, so it is extremely important to track versions of each part of the workflow down to the asset level. Naming conventions and asset tracking are important internally and possibly even more so when collaborating with other vendors and delivering images for digital intermediate or directly for film-out. Designing a workflow for contingencies in naming and versioning assets is extremely important. It is also ideal to have the tools make this part of the process as transparent as possible to artists and production staff while retaining solid methods.

A real-world example of a naming convention shows how an asset's name reflects its place in the bigger hierarchy, along with a brief description and a version, delimited by something such as an underscore:

•   <shot>_<role>_<description>_<version>

•   dr0010_fx_debris_v001.1001.exr

•   dr0010_fx_debris_v001.ma

Here the asset is defined by an abbreviation of the scene in the project and a four-digit padded shot number, followed by an abbreviation of the department or role that created it, then a brief description, and finally a three-digit padded version number. Any person in the organization should be able to quickly, and generally, understand what this asset is associated with. The shot name is a unique moniker within the sequence and the show, assigned specifically by visual effects. The department or role that creates effects (debris, rain, sparks, etc.) may be called fx. A brief description says as much as possible about what the asset is in a concise manner. And finally the version tracks iterations created by the artist.
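
A minimal Python sketch of tools built around such a convention follows; the regular expression and field widths mirror the example above but are otherwise assumptions a studio would adjust to its own rules.

    import re

    NAME_RE = re.compile(r"^(?P<shot>[a-z]{2}\d{4})_(?P<role>[a-z]+)_"
                         r"(?P<description>[A-Za-z0-9]+)_v(?P<version>\d{3})"
                         r"(?:\.(?P<frame>\d{4}))?\.(?P<ext>\w+)$")

    def build_name(shot, role, description, version, ext, frame=None):
        """Assemble a name following the <shot>_<role>_<description>_<version> convention."""
        name = "%s_%s_%s_v%03d" % (shot, role, description, version)
        if frame is not None:
            name += ".%04d" % frame
        return "%s.%s" % (name, ext)

    print(build_name("dr0010", "fx", "debris", 1, "exr", frame=1001))
    # -> dr0010_fx_debris_v001.1001.exr
    print(NAME_RE.match("dr0010_fx_debris_v001.ma").groupdict())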

The naming convention example shows that the EXR (.exr) image asset is named directly from the Maya (.ma) scene itself to help tie the relationship of the two assets together. Tools and software can also track associations of what images were generated from what scenes as well as by whom it was rendered, on what date, from which particular approved camera, and so on. These tools and software can also set up the initial scene itself and the naming of the various passes and images that it will render. It is also possible to embed metadata within assets to link associations with the rest of the workflow. This embedded data may tie in to a database and provide information to the user through workflow tools.

The naming convention can get extremely complex given all of the data that could be included with it, but it is practical to keep the naming convention to a reasonable length and maximize tools that track and associate assets. If a client’s desired naming conventions require variations from the internal conventions, then it is possible to build tools that can rename assets and even alter formats upon delivery. It is important to keep internal naming conventions and formats stable.

The client might request the script scene number but may not be concerned with the department that created the asset or with the description. It would be most common to deliver a final composite shot, in this example, so the department and description are not as important at this point in the process. The scene number comes from the script itself and relates any given shot to the entire project chronologically. This is particularly helpful for the director, editorial, digital intermediate, and anyone who has to place a given shot within the bigger picture. Here is an example of a client-delivery naming convention, with the added script scene number, a stripped-down name, two-digit version padding, and the DPX file format:

•   <scene>_<shot>_<version>

•   119_DR0010_v01.1001.dpx
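
A minimal sketch of the renaming tool mentioned above, mapping the internal convention to this client-delivery convention. The scene lookup table is a hypothetical mapping supplied by editorial, and an actual delivery tool would also convert the pixel data from EXR to DPX rather than just renaming the file.

    SCENE_LOOKUP = {"dr0010": "119"}   # hypothetical shot -> script scene mapping

    def client_name(internal_name, scene_lookup=SCENE_LOOKUP):
        """dr0010_fx_debris_v001.1001.exr -> 119_DR0010_v01.1001.dpx"""
        base, frame, _ext = internal_name.rsplit(".", 2)
        shot, _role, _description, version = base.split("_")
        return "%s_%s_v%02d.%s.dpx" % (scene_lookup[shot], shot.upper(),
                                       int(version.lstrip("v")), frame)

    print(client_name("dr0010_fx_debris_v001.1001.exr"))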

It is a good idea not to rely solely on databases. It is important to validate that the data is also reflected on disk. This can be achieved through tools that validate image sequences, sizes, bit depth, etc., but nothing can replace some real human interaction. In traditional cel animation there are checkers whose sole job is to thumb through drawings and dope sheets to ensure that all of the drawings that are expected to be there are in fact present and numbered, etc., before going off to the next part of the workflow. Most facilities create a specific component of the workflow for both the input/output (IO) and editorial departments. The tools built for these departments should allow for the aforementioned image validation as well as delivery and notifications.
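
One of the simplest automated checks described above is confirming that a frame sequence on disk has no gaps. A minimal Python sketch follows; the file-name pattern assumes the four-digit frame padding used in the earlier examples.

    import re
    from pathlib import Path

    def missing_frames(directory, pattern=r"\.(\d{4})\.(exr|dpx)$"):
        """Return the frame numbers missing from an image sequence on disk."""
        frames = sorted(int(m.group(1)) for p in Path(directory).iterdir()
                        if (m := re.search(pattern, p.name)))
        if not frames:
            return []
        present = set(frames)
        return [f for f in range(frames[0], frames[-1] + 1) if f not in present]

    # e.g. missing_frames("/shows/example/seq_dr/dr0010/comp/v001")

A human check of the result, in the spirit of the traditional cel animation checkers, still catches the problems no script anticipates.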

It should be easy to add new image formats and file types within the existing conceptual types so that new tools can be deployed rapidly. For instance, scanning film to the Cineon file format, in a standard Kodak logarithmic color space, at a resolution of 2048 × 1556 pixels can be considered a standard of sorts, but there are many closely related variations, and a great number of high-definition (HD) color spaces, file formats, and resolutions continue to be developed. There are also 4K and IMAX resolutions to consider because of the current trend toward stereoscopic films. This is a good example of the potential impact on workflows that were initiated before such a trend and now need to adapt. The workflow has to allow for change, whether setting up matchmove cameras, validating film-backs, or converting color space into traditional and new formats.
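
As an example of the kind of color-space conversion such a workflow must accommodate, here is a commonly cited form of the Cineon 10-bit log-to-linear conversion, sketched in Python. The reference white of 685, reference black of 95, and display gamma of 0.6 are the conventional defaults; a production pipeline would verify the exact parameters (and any soft-clip handling) against its own color specification.

    def cineon_to_linear(code, ref_white=685, ref_black=95, display_gamma=0.6):
        """Convert a 10-bit Cineon code value to normalized linear light."""
        black = 10 ** ((ref_black - ref_white) * 0.002 / display_gamma)
        gain = 1.0 / (1.0 - black)
        return gain * (10 ** ((code - ref_white) * 0.002 / display_gamma) - black)

    print(round(cineon_to_linear(95), 4), round(cineon_to_linear(685), 4))  # ~0.0 and 1.0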

Design Deliverables

As a result of analysis and design, the ultimate goal is to release functioning tools and training with accurate and useful documentation.

These tools must be tested and deemed functioning through basic alpha and beta software testing methodologies. They must also be maintained through versioning driven by bug fixes, user feedback, and changes based on new standards and formats.

Training is extremely valuable. Some smaller companies may not have the time and money to spend on formal classroom training. However, informal training can still be effective. Some facilities use mentoring as a means of on-the-job training. A new hire may be assigned a more experienced mentor to guide him or her through the initial tasks and experience with the workflow. Other means of informal training can come from smaller, breakdown meetings where learning happens on the job. It is also probable that a tip or trick that one person finds may prove incredibly useful and be shared via e-mail, chat rooms, or even hallway conversations. No matter how perfect the analysis and design, real-world use is the best form of testing and even training. Learning on real tasks is how the majority of people not only learn but also retain information. If those findings can make their way back into the formal workflow, then the process grows.

Documentation is extremely useful because it allows users to reference components of the workflow. All people learn and retain information in different ways. Keeping accurate, easy-to-find, easy-to-read documentation can be critical to the workflow. Many forms of digital documentation are available to achieve this goal: simple HTML, more complex web code such as PHP with an SQL database, wikis, snapshot images and video, etc. Upkeep of documentation is integral to the release and maintenance of any component of the workflow.

Solid design of the tools and documentation of the workflow should continually improve communication and speed up repetitive tasks. It can also help avoid mistakes, produce better looking images, increase continuity, and maximize the potential of everyone in the organization.

DEPLOYING A PRODUCTION WORKFLOW

Stephan Vladimir Bugaj

From Design to Implementation

Moving from design to implementation is a matter of planning, budgeting, testing, and then deployment. All analysis and design will now take shape in the form of a pipeline: a workflow, and the hardware and software that implement it. A number of decisions must be made in this phase, but if a studio has done a good job of developing specifications and design documents and has the resources to build a system that meets the requirements, whatever gets deployed should be well suited to its needs and sufficiently flexible to grow with the studio. This is necessary to avoid being stuck in a situation where tool selection dictates the capabilities of the studio, rather than vice versa.

To meet the requirements, the implementers must deal with the following during each step of development:

•   Can the studio afford to purchase or build the planned component and, if not, is there an affordable alternative that does all of the essential work? If not, revisit the requirements and design and see if it’s possible to modify the requirements to meet the budget.

   Account for time. Money is not the only factor here; it is also necessary to ask how long deployment will take and whether that is acceptable.

   Account for hidden resource costs such as CPU usage, disk space, and so on through testing.

   Account for support staff costs.

•   Does the component the studio is about to purchase or build meet the requirements?

   Does its functionality meet the functional specifications? If not, unless there is absolutely no other choice due to resource constraints, choose a different implementation.

   Does it follow the design exactly and, if not, is the variation either an acceptable shortfall or a gain? If it is possible to accept the variation, make sure that change is reflected in the requirements and design so others are made aware of it.

   Is this component sufficiently extensible/flexible that it can grow with the studio?

   Can the artists use it efficiently and effectively?

Implementation should attempt to stick to the requirements and design specifications as closely as possible. Whenever variations must be made, the best solution is to make amendments to the design and requirements documentation to reflect the original goal, the modification, and the reason for it. This not only gives users an understanding of how their system works, but also allows future growth projects to revisit these issues. Often what was an insurmountable obstacle at one point becomes easy to achieve a few years later. If the studio maintains the planning documents even after implementation, it is possible to proactively stay on top of changing technologies and artists’ needs while saving a lot of time and money over the long run.

To Build or Purchase?

One very important question that comes up when implementing a workflow is when to build and when to purchase—and what to purchase when buying. Cost analyses performed when making these decisions seem to inevitably fall short of reality, mainly because support costs are often ignored, as is the reality that even with commercial software all but the smallest shops will still need to do some custom development.

In some areas, such as modeling, compositing, editing, 3D paint, shader development, and simulation, the off-the-shelf solutions are so robust that building one in house makes little sense except for the largest studios with the resources to push the cutting edge of technology. Full 3D packages such as Maya, 3ds Max, Softimage XSI, and Houdini; sculpt and paint tools like Mudbox and ZBrush; and compositing software like Nuke, Fusion, and Shake also have the advantage that artists will arrive already knowing how to use them. Familiarity and low start-up costs make it inevitable that one or more commercial 3D, paint, editing, and compositing packages will play a critical role in any pipeline—this is true even at the large studios where tens of millions of R&D dollars are spent annually.

Some of the larger commercial packages have dozens of programmers constantly working to improve them and give them the edge over competing packages. An in-house solution may be quickly eclipsed by a commercial package unless the studio can afford a Pixar- or ILM-sized R&D team of more than a hundred people. Lacking the ability to compete with the commercial developers in terms of scale, a studio that chooses in-house development may be stuck with an outdated system or the expense of switching. Training artists on a custom package comes with significant cost and time factors, which the likes of Pixar or ILM can easily absorb but which may not be viable for a smaller studio. In-house software requires all new employees to take training classes. Existing packages have an existing user base and are also taught in schools and through numerous books. Even with all of their resources, industry leaders like Pixar and ILM still use a mixture of in-house and off-the-shelf packages.

Two major reasons in-house development continues are the drive for innovation and the fact that few of the off-the-shelf packages were developed with a simultaneous multiuser workflow in mind. Their facilities for file referencing are often buggy, and none of them natively supports the idea of a shot being composed of a large number of components. Instead, most of the commercial packages were written for single-user workflows where all data for a shot exists in a single file. Getting these packages to function correctly in a custom production workflow where multiple artists are doing different things on and to a shot or model in parallel can be tricky—and it requires building. Even midsize shops will find it necessary to do some R&D in order to facilitate a true multiuser, multidepartment workflow that allows simultaneous work on an asset by more than one user at a time. Whether this development is done entirely in house or based on a commercial package, it is important that end-user artists be very involved in development because software engineers may try to solve nonexistent problems and miss important ones.

None of the commercial packages encapsulates an entire production workflow optimally, even after installing a number of third-party plug-ins. Furthermore, none of the existing data interchange formats, even .fbx, natively supports all possible production data. Even studios that don't intend to develop in-house plug-ins, shader library code, simulation engines, or other proprietary asset generation code will still need one or more programmers. Glue code will need to be written to move data around the pipeline, including interfacing with an asset tracking database. It's necessary to account for in-house development and support when building a production pipeline, regardless of what grand promises vendors may make.
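
A minimal sketch of what such glue code might look like in Python: a publish step that copies an artist's local file into the global shot location and drops a small human-readable manifest beside it. The locations and manifest fields are invented for illustration; a real publish tool would also talk to the tracking database and handle versioning and permissions.

    import json
    import shutil
    import time
    from pathlib import Path

    def publish(source_file, shot_root, department):
        """Copy a local file into the global shot location and write a manifest."""
        dest_dir = Path(shot_root) / department
        dest_dir.mkdir(parents=True, exist_ok=True)
        dest = dest_dir / Path(source_file).name
        shutil.copy2(source_file, dest)
        manifest = {
            "file": dest.name,
            "source": str(source_file),
            "published": time.strftime("%Y-%m-%d %H:%M"),
        }
        (dest_dir / (dest.name + ".json")).write_text(json.dumps(manifest, indent=2))
        return dest

    # e.g. publish("/home/artist/dr0010_fx_debris_v001.ma",
    #              "/shows/example/seq_dr/dr0010", "fx")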

Because no single package is sufficiently good at every part of a full CG pipeline, end-to-end (modeling through film-out) pipelines are generally based entirely on commercial packages linked together with glue code to facilitate data passing and management—except at large studios with large R&D budgets. A willingness to undertake this development makes a studio more able to choose which package to use for which parts of the pipeline based on its relative strengths—even within the 3D portion of the pipeline; for example, modeling and animation in 3ds Max, effects through rendering in Houdini, and compositing in Nuke. Even single-person shops are likely to end up working with more than one package, unless that shop only provides service in a single part of the production pipeline (such as a compositing-only operation).

Once a studio moves beyond the basics, the necessity of writing plug-ins, custom shading libraries and templates, better articulation components, and other code that does more than just move data around between packages and talk to databases quickly becomes obvious. For example, no unmodified commercial package provides a particularly robust lighting tool. Commercial packages provide a lot of value, but they all need expert users and developers customizing and extending them in order to achieve their full potential. Knowing when to build and when to purchase is a matter of analyzing the production needs and deciding when to put money toward purchasing components versus when to hire talented R&D people to take the system to the next level. However, any studio that attempts to deploy a multiuser pipeline with support for parallel tasking and asset tracking without a good developer on staff will quickly find itself in trouble.

Platforms, Packages, and Other Religions

During deployment, it is likely that employees will voice strong opinions about hardware and operating system platforms, production software packages, programming languages, and so on. Sometimes, these opinions can border on the fanatical. However, when deploying any system it is necessary to take into account the requirements, the full life-cycle costs, and the usability of the system. Full life-cycle costs include not only development and deployment costs, but also the support costs involved in administering the systems, including core IT infrastructure in addition to application maintenance. What is best is relative to the particulars of a given studio and is something that is affordable, sufficiently familiar to the artists (and/or that they can learn quickly), and, most importantly, meets the specified requirements.

Proponents of Linux point to its low entry cost and flexibility, whereas proponents of Windows tout its familiarity and the availability of commercial packages. Both sides say that the other system requires a great deal of system administration to perform reliably and optimally—and both are correct. In a production situation, hiring expert systems administrators is far more important than deciding which platform is chosen. No platform (not even OS X) is so reliable and so optimized out of the box that it can be expected to perform perfectly without support. Depending on what the requirements are, Linux, OS X, Windows, or a combination thereof may be what is needed. Each has strengths and weaknesses. OS X has a plethora of both commercial and free software available for it and supports symlinks and a full shell environment (both important to large, TD-heavy workflows), but is not as cheap, flexible, or tunable (or as efficient out of the box) as Linux. Windows lacks many of those virtues, but on the other hand is familiar to more users, and a lot of software packages are available for it. No platform is better than the others in all situations.

Packages are amenable to similar comparisons. For example, Houdini is fantastic for simulation, effects, shading, and rendering, and it is amazingly customizable and extensible; Modo is a wonderful modeler; and XSI has industry-leading animation tools. On the other hand, Max and Maya are decent across the board and are also very familiar to many users. However, none of these packages fully implements an entire production pipeline optimally, even though many of them implement large portions of the pipeline. Perhaps in future revisions one of the major packages will provide a complete pipeline solution out of the box.

Any combination of platform(s) and package(s) can serve as a solid foundation for a pipeline given the right people developing for and using them. Again, requirements and costs will dictate which one or more of these packages are deployed. While a strong team is the most important asset at any studio, their platform religion should not necessarily dictate deployment choices. Artist input is critical in any such decision, but the big picture of end-to-end pipeline development must also be kept in mind. Making a solid requirements analysis and design plan, and showing people that the choices being considered will meet those needs, can convince even the biggest platform zealots that it is reasonable to develop a system that may not be entirely (or at all) based on their favorite toolkit.

Test Now, Save Later

Testing during development is essential. Software engineering calls for unit tests (functioning of a single component), white box tests (internal structure), and black box tests (input/output correctness), which are indeed crucial to successful deployment of build-it-yourself components. However, target users should test all aspects of a system—in-house or off-the-shelf. Creating interim checkpoints and deliverables will facilitate discovery of problems early, before they become very costly. Regardless of the number of formal tests or design and code reviews employed, ultimately the only way to know if an implementation is meeting the specified requirements is to let the users say whether or not it is. Interim user checkpoints should include the following:

•   requirements and design reviews, where the users comment on the in-progress analysis and design;

•   feature checkpoints, where advanced users test the core functionality, even if the user interface (UI) is not necessarily up to par;

•   component checkpoints, where a feature and its associated UI elements are tested, even when the rest of the system isn’t ready;

•   alpha testing, where advanced users, using realistic data, test the earliest stages of a full system;

•   beta testing, where a candidate for release is tested by a broad group of users, including its integration into whatever part of the pipeline has already been deployed.

Testing is often seen as an impediment to deployment, a speed bump in a timely delivery. However, testing copiously early in the development and deployment phase will reduce the long-term costs associated with fixing errors (both in the system and those a broken system creates in production assets) and the lost production time associated with a faulty system.

Development Deliverables

The end game of the beta testing phase of development and deployment is preparing for rollout and getting buy-in from the artists, managers, etc., who will be using the system in production. A system should not be released into production until most of the users are happy with its functionality and usability. Doing otherwise is a waste of time and money. It is never cost-effective to support and repair a system that decreases efficiency rather than increases it. This situation also leads to mistrust and resentment on the part of the production artists, making the ongoing job of the workflow team much more difficult.

In preparation for rollout, a beta should be promoted to a release candidate and a final round of comments solicited. All of the formal tests should be run on the release candidate. While this process comes from software engineering, it applies to off-the-shelf solutions as well. Test the suitability of all software to meet the requirements and to reliably produce correct data and a viable user experience before deploying it.

Buy-in refers to getting enough users to approve of the release candidate as a sufficient tool for doing their job that they are willing to put it into production. Note that, by this point, a rollout plan needs to be in place. The rollout plan should provide for minimal downtime for the target users. This includes arranging to translate any data that needs format changes to work in the new version of the tool before the switch is made. Immediate follow-up with users and ongoing support should also be part of the rollout plan. Typically this is heavy in the first few days and ramps down to normal afterward.

During this final phase of development, the following occurs:

•   A release candidate is approved.

•   After release candidate approval, provide users with a rollout plan, guide, and instructions. The rollout plan should include clear instructions for users on how to get immediate follow-up support as well as the ongoing support plans for the system.

•   After rollout plan approval, but before the actual rollout, update any data, protocols, documentation, and so on that may be affected by the new or updated system.

•   Install the new system on the users’ computers.

•   Test the installations on a sufficient number of users’ computers in order to be confident the rollout has been generally successful.

•   Provide immediate follow-up support in the form of a team that proactively circulates among the users (at least contacting them by e-mail, if nothing else) and offers assistance with, and solicits feedback on, the new system.

•   Provide an ongoing user support apparatus.

These general principles of deployment will help create robust production workflow systems, regardless of which parts of the system are purchased and which are built. By taking the time to do all of the analysis, design, planning, and testing suggested herein, a studio will save time and money and have a more effective production pipeline. Return on this investment comes more quickly than people often think because the impact of a well-conceived, well-documented, and well-developed system on employee morale and efficiency is substantial. When a system is deployed that is reliable and meets its requirements from the outset, the support burden is reduced. This allows the studio to spend more time and money producing great visual effects and/or animation work and the workflow software team to spend more time developing ahead of the curve with advanced features rather than combating the effects of rushed, haphazard pipeline development.

INFRASTRUCTURE

Dan Novy

Infrastructure topologies among visual effects facilities are as varied as the number of facilities themselves. Using a best practices approach reveals a few simple paradigms. Facilities are designed around standard TCP/IP switched networks laid out in a client/server relationship, much as any large enterprise class computing operation would be. In general, a large, centralized data center for storage and computation is connected to branched, managed, or unmanaged workgroup switches. Artists, developers, and administrators have access to a unified data structure, with user and group permissions determining access and editing rights. As with other enterprise class computing systems, visual effects computing requires high-availability (HA) servers and networks. HA servers are defined as redundant, status-aware servers and software monitoring systems deployed to ensure that users have uninterrupted access to the data structure regardless of power, network, or hardware issues. HA servers may be mirrored or clustered similar to RAID storage systems and are capable of load balancing as well as fail-over operations—in which backup servers come online without administrative intervention should the main server become inaccessible.

In addition to HA data service, facilities must deploy a robust backup system, stored locally and ideally remotely as well, as part of a designed and tested disaster recovery system. Real-time backups are generally achieved through the clustering action of the HA servers, but incremental backups, done on a timed schedule, such as hourly, half-day, daily, and nightly, should also be deployed. This allows the restoration of data down to the block level in all situations, from a simple ill-timed file write or corrupted file system up to, and including, facility destruction.
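
As a concrete illustration of such timed incremental backups, the sketch below shows one way the snapshots might be scripted, assuming rsync is available; the paths and schedule are placeholders rather than a recommendation for any particular facility.

```python
# Minimal incremental-backup sketch (hypothetical paths and schedule).
# Each run creates a timestamped snapshot; unchanged files are hard-linked
# against the previous snapshot via rsync's --link-dest, so only changed
# files consume new space.
import datetime
import os
import subprocess

SOURCE = "/prod/projects/"        # hypothetical production data root
BACKUP_ROOT = "/backup/projects"  # hypothetical local backup volume


def snapshot():
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M")
    dest = os.path.join(BACKUP_ROOT, stamp)
    latest = os.path.join(BACKUP_ROOT, "latest")
    cmd = ["rsync", "-a", "--delete", SOURCE, dest]
    if os.path.exists(latest):
        cmd.insert(1, "--link-dest=" + latest)
    subprocess.run(cmd, check=True)
    # Repoint "latest" so the next run hard-links against this snapshot.
    if os.path.lexists(latest):
        os.remove(latest)
    os.symlink(dest, latest)


if __name__ == "__main__":
    snapshot()  # run hourly, half-day, daily, and nightly from a scheduler
```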

Current enterprise class networking is a mix of gigabit or 10-gigabit Ethernet over CAT6 or CAT6a, Fibre Channel, PCI Express, Serial ATA, or InfiniBand. Depending on the topology, switching is either hierarchical, as with Ethernet, or switched fabric.

Several characteristics specific to visual effects facilities that shape network deployment and administration for enterprise class computing follow. Note, however, that this is not a comprehensive examination, nor is one possible, because each facility presents its own unique challenges and requirements.

1. Ingestion. Ingestion is defined as the place and process by which background plate material, scanned elements, or other digital assets enter the facility, are ideally logged, and become available to administration and artists for use. A pure animation facility, creating all of its own elements internally, would still benefit from a planned ingestion process because its artists may be receiving materials from other animation facilities. The hardware involved may vary from tape media to portable hard drive enclosures employing IEEE 1394 or USB buses to high-density optical media.

2. Delivery. Delivery is defined as the point and the process at which anything from daily iterations to final shots leaves the facility to be critiqued, scanned to film, or made ready for final distribution. Again, media vary according to speed, time, and cost.

3. Archiving. Regardless of a facility’s individual choice of media for ingestion and delivery, once a shot has been finaled, it is usually the contractual obligation of the facility to archive all elements and digital assets used in the production of the shot. Archiving is a continually growing and changing concern as data sets become larger and new storage media evolve.

TRACKING ASSETS

Stephan Vladimir Bugaj

What Is Task and Asset Tracking?

Once a production team grows beyond one or two people, or clients become involved, some form of production tracking becomes essential. Both internal facility producers and outside clients need incremental updates on progress. They also need to be able to pull up elements from their production (for verification, legal clearance, reuse, etc.) at a moment’s notice. Without some kind of production tracking, it is difficult to ascertain if the project is staying on time and on budget—until it is too late. When jobs do go over bid, production tracking can help determine why. Although this is not the most interesting or glamorous aspect of production, it is crucial to the viability of all productions.

Task planning and tracking are the processes of breaking down a job into tasks, estimating how long each task will take, and then tracking the progress to make sure the project is staying on time and on budget. For producers, planning information turns into bids and budgets. Tracking is then essential to see whether the bid is being adhered to and to help the production take corrective measures before things get out of hand. Production department heads can use tracking data not only to see how their own team is doing but also to see how their predecessor departments are doing in terms of delivering work to their team.

Each task in the pipeline should roughly equate to the work done by one artist at each stage in the pipeline. Often there will be more than one task per department, depending on artist specialization, such as separate technical shading and texture painting tasks within the texturing department. The primary asset associated with a task is generally either a shot or a model. Dealing with assets below the model or shot level for task planning and tracking is generally more expensive and time consuming than it is worth (and almost always incorrect). It is necessary to track these at the asset tracking level, but for task management it is only crucial to associate the task with a specific shot, model, or other reusable model-like object, such as an articulation rig, script, lighting rig, etc., that isn’t specific to a single shot or model.

Tasks that may need to be tracked in a production are as follows:

•   Production design (either per model or per shot):

   model design illustrations,

   shader callouts and material reference paintings and photos,

   dressing plans and sketches, and

   lighting keys and references.

•   Modeling, rigging, and shading (per model):

   base geometry sculpt,

   articulation and procedurals,

   shader definitions and binding, and

   texture painting and/or detailing sculpt in a tool like Mudbox.

•   Layout and set dressing (per shot):

   camera definition (framing and motion) and

   set dressing.

•   Animation and simulation (per shot):

   animation (usually tracked separately for character animation and visual effects animation departments, and often split up per character or visual effects element in the shot) and

   simulation (usually tracked separately for character cloth and hair and visual effects simulation departments).

•   Lighting, rendering, and compositing (per shot):

   master lighting rig definitions are tracked separately, more like models are tracked, as they’re reused across shots,

   shot lighting,

   rendering, and

   final composite.

This may seem straightforward, and in many ways it is. One hitch is that often each task needs multiple task stages or phases to support parallel iterative refinement, so artists may work from rough to fine. For example, animation might have blocking, key, and polish stages, whereas modeling might have rough, sculpted, and detailed stages. Given preliminary data from animation, a lighting team knows to start master shot lighting after blocking is finished and a shading team knows to start basic shading after a rough sculpt is completed.
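
One way to make those hand-offs explicit in a tracking system is a gating table that maps a completed upstream (department, stage) pair to the downstream work it unlocks. The sketch below is a hypothetical illustration; the department and stage names simply follow the examples above.

```python
# Hypothetical gating table: completing an upstream (department, stage)
# unlocks downstream work on the same shot or model.
UNLOCKS = {
    ("animation", "blocking"): [("lighting", "master shot lighting")],
    ("modeling", "rough sculpt"): [("shading", "basic shading")],
}


def downstream_ready(completed):
    """Given a set of completed (department, stage) pairs, return the
    downstream (department, stage) work that may now begin."""
    ready = []
    for done in completed:
        ready.extend(UNLOCKS.get(done, []))
    return ready


print(downstream_ready({("animation", "blocking")}))
# [('lighting', 'master shot lighting')]
```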

Two other elements to task track are global assets and fix requests. Global assets are things like scripts and plug-ins that are not specific to a model or shot. These still need to be tracked and generally can be tracked in the same system as models. After all, models are somewhat like global assets except that they appear on-camera. This means models (and model-like assets such as lighting rigs and articulation rigs) can be associated with one or more shots for the purpose of knowing when to release shots into further stages of production based on model dependencies—their main difference from other global assets.

Fix requests are reports back to a department that has already marked the asset as finished. This is done per stage, so a fix can be generated against the model at whatever stage it is in at the time of the request (rough, sculpted, or detailed) and flagged in the tracking system for fixes. This is both a communication system between departments and a way for producers to track problems with assets that may be holding up crucial footage.

When associating assets with tasks, usually a top-level file (such as a Maya .ma file) or a virtual asset4 is used to stand in for the potentially many on-disk assets. So, to associate modeling, rigging, and shading tasks with a model, they are tied to the record of the model’s scene file in the primary package or to the virtual definition of that model. All of the other files would be indirectly associated through their association with the model in the asset tracking system, but their state wouldn’t directly matter to the task. When tracking assets and tasks at a lower level, it is still not recommended to track down to the lowest file levels. Geometry might be tracked through a Maya or 3ds Max file and shading through a main Slim palette or through database definitions of “model, geometry” and “model, shading”—but associating every file reference in those top-level files directly with the task would be overkill and potentially misleading, particularly given shared resources that may be referenced. Validating completion of a task this way is also too inflexible: because even nominally identical tasks vary quite a bit in complexity, an artist may legitimately finish a task without all of the expected files being present (or may need more files than expected), and the result is people checking in bogus files just to clear the validation checkpoint and declare the task done.

Asset tracking, on the other hand, requires only that files be associated with one or more models, shots, or other global assets. These associations are either by inclusion or by reference. By inclusion means that a file is part of the definition of that asset. It is stored in a directory structure rooted at the directory that defines the asset by virtue of its being a key field in a database definition and/or containing the top-level defining file of that asset. By reference means the file is part of the definition of one or more assets—and it is referenced into the asset through a referencing/linking system. References are generally tracked at the level of other assets, not individual files, in order to make the asset-tracking task tenable. Actual file-level tracking may be used as well, but that complex relationship is generally consulted only by highly technical artists and developers in extreme troubleshooting situations.

An example of which assets might need to be tracked can be seen in this definition of what subassets comprise a model’s surface (or surfacing or texturing or shading) asset:

•   Surface definition:

   root shader definitions and parameter values (Slim palette or otherwise):

–   referenced definitions (external palettes),

–   library shaders (these aren’t always tracked as part of the model-level definition, but could be, by reference),

–   direct-read texture images read in by the definitions or referenced definitions,

–   projection or tumble paint5 definitions file (if external to the package scene file), and

–   projection or tumble paint texture images.

In practice, this surface definition could range anywhere from a couple of files, for a purely procedural single shader on a simple model, to hundreds of files, mostly textures, for surfacing a very complex hero model. Tracking the database definition and/or the top-level asset file helps manage the complexity and tracks assets by what they are conceptually (surfacing) rather than by defining an assumed required set of files that may in practice vary in quantity and composition for every single distinct instance of the asset type.

Purchased, built, or both, a system ideally should provide:

•   Task tracking:

   Stages of completion to support parallel iterative refinement.

   A system for managing fixes as a special type of task.

•   Revision control at the asset level (similar to release packages in software revision management):

   Asset-level revision control is generally not for every file check in but rather on user request. This process should version all of the files in the asset at once, marking all the current file versions as part of this new asset version.

   Publication (aka release) of a new version of the asset is the process of taking the source files and installing them for use in the scene/shot editing environment. This should be a user-specified step and only be done when the artists feel the asset in the unpublished (source) area is ready to go live. (A minimal sketch of this publication step follows the list.)

•   Revision control at the individual file level:

   Artists check in and out of these files (or subassets that are collections of individual files, but not a full asset) to allow for simultaneous multiuser work.

   File versions are kept for debugging purposes, but only file versions associated with a published asset version are relevant to the asset-level revision control system.

   Associations are created between file versions and tool versions to try to avoid editing a file with an incompatible version of the tool.

•   References into frame storage (and a way to version frame sequences, which will have to be set up in a render management system and then reflected in the tracking system):

   A system for maintaining old versions of frames for comparison, preferably with information about what versions of the assets created those frames.

•   Associations between tasks and conceptual/top-level assets:

   Multiple associations, so a task/fix can be related to one or more assets—this is especially useful for associating a model fix with one or more shots that are held up waiting for that fix.

•   A GUI that shows information about tasks and assets, provides a viewer for seeing associated renders, and provides handlers to open the asset into the appropriate tool from the tracking system.
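
As promised above, here is a minimal sketch of the asset-level publication step. It assumes a simple directory-per-version layout on disk rather than any particular commercial system, and the asset path in the usage comment is hypothetical.

```python
# Minimal asset-publication sketch (hypothetical directory-per-version layout).
# Publishing copies the artist's source area into a new, numbered version
# directory and writes a manifest of exactly which files it contains.
import json
import os
import shutil


def publish(asset_root):
    source = os.path.join(asset_root, "source")
    published = os.path.join(asset_root, "published")
    os.makedirs(published, exist_ok=True)
    existing = [d for d in os.listdir(published) if d.startswith("v")]
    version = "v%03d" % (len(existing) + 1)
    dest = os.path.join(published, version)
    shutil.copytree(source, dest)
    # Record the files that make up this published asset version.
    manifest = sorted(
        os.path.relpath(os.path.join(root, f), dest)
        for root, _, files in os.walk(dest) for f in files
    )
    with open(os.path.join(dest, "manifest.json"), "w") as fh:
        json.dump({"version": version, "files": manifest}, fh, indent=2)
    return dest


# publish("/assets/models/hero_robot")  # hypothetical asset root
```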

Commercial Task and Asset Tracking Systems

Task planning and tracking are generally thought to be well understood, and many in production management use generic tools such as Microsoft Project for doing this. Although Project, or a similar tool, may be used successfully for broad-outline planning of a production, it is not well suited to production task tracking. Live-action budgeting and on-set production management tools from vendors such as Entertainment Partners and Jungle Software also are not perfectly suited to tracking production tasks in a visual effects or animation studio (although a VFX Supervisor on set may want to keep his or her own visual effects camera logs with traditional camera report specialty software or a spreadsheet).

Until recently, digital production studios were not the target market for any off-the-shelf production tracking software due to their being more of a niche industry within the film production market. Recently products such as Shotgun and VFX Showrunner have emerged, with the former targeted at animation houses and larger visual effects facilities and the latter more to boutique facilities. Either a piece of commercial software like this must match any given studio’s production process exactly, right out of the box, or it will need to be tailored to fit. Fortunately, most of the off-the-shelf software has some degree of built-in configurability in terms of defining departments, tasks, and stages in the pipeline. But, for a very customized, large, and/or complex pipeline, the package of choice also needs to support scripting and/or plug-ins so that it can be expanded later without relying on the vendor to do so (and charge for it). Even if the pipeline is reasonably straightforward, it may still become necessary at some point to customize the task-tracking tool with regard to associating tasks with data in the asset tracking system. The facilities for doing that in the various commercial task-tracking packages range in quality from fair to nonexistent, so there should be a plan from the start to do some customization.
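
Where such scripting support exists, the customization typically looks like a thin layer over the vendor’s API. The sketch below is a hypothetical example against Shotgun’s Python API (shotgun_api3); the server URL, script credentials, shot ID, and the custom field used to point at an asset tracking record are placeholders that will differ per studio schema.

```python
# Hypothetical customization sketch using Shotgun's Python API.
# The server URL, script credentials, shot ID, and the sg_asset_record
# custom field are placeholders; real field names depend on studio schema.
import shotgun_api3

sg = shotgun_api3.Shotgun(
    "https://mystudio.shotgunstudio.com",  # placeholder server
    script_name="pipeline_glue",           # placeholder script user
    api_key="0123456789abcdef",            # placeholder key
)

# Find all in-progress tasks on a given shot...
tasks = sg.find(
    "Task",
    [["entity", "is", {"type": "Shot", "id": 1234}],
     ["sg_status_list", "is", "ip"]],
    ["content", "step"],
)

# ...and stamp each with a pointer into the studio's asset tracking system.
for task in tasks:
    sg.update("Task", task["id"], {"sg_asset_record": "assetdb://shot/1234"})
```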

Asset tracking, or digital asset management (DAM), deals with the actual asset files themselves. The artists use this capability more than management, and successful asset tracking systems contain elements of revision control systems, media databases, and bug tracking systems. Canto Cumulus and Avid Alienbrain are two major commercial digital asset management systems. Both are highly customizable, and this is essential in a DAM package: it must be possible to store every kind of file asset that needs to be versioned, associate it with a model, shot, or other global asset, and view or load it from the system.

Storing rendered frames and final composites can become very data intensive. While most DAMs handle image file formats natively and some now even support OpenEXR and other high-end formats, how they do this can make a big difference in performance. Scalability testing is recommended before purchasing a commercial system and storing rendered frames in it, and a high-speed network attached storage (NAS) system like NetApp or BlueArc is crucial. Large shops with huge render farms generally write frames directly to the NAS via a specialized render management system, like Pixar’s Alfred or an in-house tool, and only associate references to image sequences with the shots rather than trying to cram all of the image data into a DAM system that may not be able to handle it. Another reason many studios do not check frames directly into their DAM system, but use references instead, is the need for specialized in-house or high-end commercial TIFF sequence playback tools to review uncompressed, color-managed 1, 2, 3, and/or 4k TIFF sequences at 24 fps; most movie file formats are insufficient because they are generally both compressed and not frame accurate. The slowdowns are simply too great: most, if not all, DAMs can handle neither the huge render farm loads nor the playback speed requirements.

Building Task and Asset Tracking Systems

As with 2D and 3D production pipeline software in general, a studio will inevitably wind up developing at least some customizations to a commercial system, if not creating one from scratch. Once a studio workflow reaches a certain level of complexity, the native capabilities of an off-the-shelf system are exceeded and it becomes essential either to extend the system (if it is sufficiently flexible) or to replace it with a custom one. Building a custom system can proceed from the starting point of a commercial system specialized for production purposes or generic tools like a commercial (or freeware) relational or object-oriented database system and revision control system.

Revision control systems and databases, from simple ones like RCS and MySQL to more sophisticated commercial systems like Perforce and Oracle, can also serve as the basis for a task and asset tracking system. Because the off-the-shelf market did not really serve the animation and visual effects communities for so many years, many studios use in-house asset and task tracking systems built on a relational database and a revision control system (with either proprietary GUIs in Qt, Tk, or wxWidgets or Web front ends for viewing tabular data, filling out forms, browsing assets by rendered image, and other front-end functions of a task and asset tracking system). As with commercial DAM systems, storing frame image data in an RDBMS6 like Oracle, MySQL, or Postgres is not a particularly good idea. A system for writing frames to a NAS and referencing them via the database is much more efficient.

Building a system from scratch can be expensive, but if a commercial system like Shotgun isn’t sufficient, it becomes necessary to do some development to expand on it and create the complete asset tracking system that meets requirements. In addition to the fact that it is inherently suited to a particular production workflow, a big advantage of planning to build a custom system from the outset, even if it’s based on a commercial system as a starting point, is that combined task tracking and asset tracking can be designed into the system from the beginning.

An example of a high-level design of a combined asset and task tracking system is as follows (a brief code sketch of this design appears after the list):

•   Task definitions:

   name of task.

   type of task (such as build or fix or a more complex type system if desired).

   designated department.

   task stage:

–   stage completion (such as unassigned, assigned, in progress, omitted, and completed).

   assigned artist.

   key (conceptual) asset(s) (model, shot, other global asset):

–   associated on-disk asset(s): the top-level file associated with the conceptual asset, such as a Maya, 3ds Max, Houdini, or XSI scene file (note that for shot-related tasks, it’s also necessary to have paths to the rendered clips in the frame storage area) and

–   icon, which is just a thumbnail of a canonical “what is this” image, such as a model render or a key frame from a shot, used in the UI to visually represent the asset(s) with which the task is associated.

   bid.

   due date.

   requester (who asked for this task to be done, in case the artist has follow-up questions).

   downstream department (who is expecting this task to be finished in order to get their work done).

•   Asset definition:

   name of asset.

   type of asset (usually a multilevel-type system):

–   distinguish between time-based assets like shots and sequences, “physical” (renders out as an image) assets like models, and “virtual” (doesn’t directly produce pixels) assets like light rigs, articulation rigs, and scripts,

–   within models, generally a model-type system is used to specify characters, props, architecture, vehicles, etc., as determined by the production, and

–   within shots, it may be necessary to specify shot types to distinguish full CG from plate-integrated shots or to specify shots that need a special post process, etc.

   status (in the production, or omitted—do not remove all traces of deleted assets from the production in case someone changes his or her mind and it’s necessary to revive them later),

   files:

–   included files: root directory that the files are stored within—this is usually thought of as the asset on disk; a complete list of all checked-in files associated with that asset, in a revision control system that is either part of the DAM system or hooked up to it and

–   referenced assets: references to only the conceptual asset and, depending on the design of the database and the code that references it, possibly also the top-level scene or other definition file.
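
Here is the brief sketch promised above: a minimal mapping of this design onto plain Python data classes. A real system would back these records with a database; the field names and enumerations simply mirror the outline.

```python
# Minimal sketch of the combined task/asset design above, using plain Python
# data classes. A production system would back these records with an RDBMS.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Asset:
    name: str
    asset_type: str                    # "shot", "model", "light_rig", ...
    status: str = "in_production"      # or "omitted" (never delete outright)
    included_files: List[str] = field(default_factory=list)
    referenced_assets: List["Asset"] = field(default_factory=list)
    icon: Optional[str] = None         # thumbnail path for the UI


@dataclass
class Task:
    name: str
    task_type: str                     # "build" or "fix"
    department: str
    stage: str                         # e.g. "blocking", "key", "polish"
    stage_completion: str = "unassigned"
    assigned_artist: Optional[str] = None
    key_assets: List[Asset] = field(default_factory=list)
    bid_days: float = 0.0
    due_date: Optional[str] = None
    requester: Optional[str] = None
    downstream_department: Optional[str] = None
```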

Some information necessary to comprehensive production management can only be determined by looking in both places (query joins, in database parlance). For example, knowing if an asset is done requires querying the tasks associated with the asset and seeing that all workflow tasks are marked as completed (including fix tasks associated with that asset)—and doing this for all referenced assets as well.
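
Continuing the sketch above, the “is this asset done” question becomes exactly such a join: every task (including fixes) attached to the asset, and to each asset it references, must be completed or omitted. A hypothetical in-memory version:

```python
# Hypothetical completeness check over the data classes sketched above.
# In a database-backed system this becomes a join between the task and
# asset tables plus a walk over the asset-reference relation.
def asset_done(asset, tasks, _seen=None):
    _seen = set() if _seen is None else _seen
    if id(asset) in _seen:  # guard against reference cycles
        return True
    _seen.add(id(asset))
    own_tasks = [t for t in tasks if asset in t.key_assets]
    if any(t.stage_completion not in ("completed", "omitted")
           for t in own_tasks):
        return False
    return all(asset_done(ref, tasks, _seen)
               for ref in asset.referenced_assets)
```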

It is likely that if the workflow is complex, or the facility large, the deployed system will be one that combines off-the-shelf production-oriented software, like Shotgun and Alienbrain, with customizations (including custom front-ends), links to a render management system and NAS frame storage system, and perhaps also an RDBMS like Oracle and a general file revision system like Perforce. This is because no one system is completely suited to the complexity of real-world production management when using a sophisticated digital workflow with many artists working on many disparate tasks and, sometimes, for many clients, all at once. All of this information then needs to come together into a unified view of the state of the production and to output final renders at the end.

SCENE ASSEMBLY

Stephan Vladimir Bugaj

3D Scene Assembly

In order for articulation and shading artists to work on the same character at the same time, some means of breaking up a single model into components and then reassembling it later is necessary. A similar asset breakdown strategy applies to all areas of the pipeline, for example, a lighter and an animator working on the same shot at the same time. Most commercial software packages have mechanisms for referencing and importing scene data, and with some file formats, automatic merges are also possible.

Decomposition of a model into components can take the form of separating definitions of different tasks into different files, such as putting shader definition and binding information into separate files that can be version controlled (and locked for editing) separately from the model geometry file. For example, Pixar’s Renderman Artist Tools comes with a system called Slim, hosted in Maya, which has this capability. The software package’s native file referencing and importing system can also be used to provide this functionality for other components that need to be worked on in parallel—especially reusable components like articulation rigs and greeblies.7

A model might be broken down into the following components that get stored in separate files and thus need reassembly:

•   Geometric mesh data:

   Background objects and characters may simply be referenced-in or imported-in collections of reusable components such as robot or car parts or creature heads, arms, legs, and torsos.

   Hero models may start off by referencing-in geometry from a library of reusable components as a starting point.

•   Articulation rig:

   Rigs can be built up from component rig parts.

•   Texturing information:

   Shader definitions/templates

   Shader parameter values (set in instances of the templates)

   Shader bindings

   Projection or tumble paint camera definitions

   Textures

•   Procedurals (articulation or shading components that initiate automatically based on scene data, such as frame number).

A number of artists need to work on these components separately and at the same time, especially at large studios where specialization in areas like modeling, texturing, and articulation becomes necessary. Having a single artist work on all of these components in a single model file isn’t tenable because it extends the timeline necessary to accomplish the work.

Final scenes, by their nature, require even more decomposition. Each scene is made up of a number of models. To maintain consistency (and artist sanity) it is necessary to build and test any models that appear in more than one shot in scene files that contain only the model definition data. These test scenes should be completely neutral with regard to lighting and animation. They are usually self-contained without any reference to actual shots in the film. Furthermore, to facilitate the parallel pipelines, which are necessary to get major production jobs done on time and on budget, breaking apart the work on the scenes themselves is often essential as it collapses the timeline.

Scenes may be broken down most naturally into the following components:

•   Matchmove or animation camera layout (including initial dressing of stand-in 3D objects and simple blocking animation of dummy characters or fx stand-ins).

•   Set dressing, which is the placement of the noncharacter objects in the scene. Shot dressing may start with one or more scene/sequence/location level sets that get imported or referenced in and then adjusted for the shot.

•   Animation (keyframed or MoCap) and simulation, and herein the separation is usually made between these two types of time-varying data and between character and visual effects workflows.

   Multiple files of parameter values may be referenced or imported during final scene assembly, because it can be highly beneficial to break apart animation data into separately managed files. An animator references in another’s work to animate against, but only changes his or her own character or visual effects animation.

   Because of the differing data demands, file formats, and workflows of the various simulation engines for visual effects, cloth, and hair, additional files also need to be managed.

   Crowd animation often results in additional files to manage, not only because of the additional characters but also because systems like Massive require their own data.

•   Lighting. As with set dressing, lighting shots often start with referenced or imported sequence-level setups.

All of these things need to come together in order to form a final scene that can then be rendered and composited into the final plate. The data management challenge is to keep track of all of these files as being part of a particular scene or model, using the approaches discussed in the Tracking Assets section. By knowing what must come together, one can determine when assembly can occur.

Data that is not stored directly in a package scene format usually comes with a developer-supplied workflow for translating that data back into scene data within the primary package. This happens either directly as data in the package or as directives in the scene file that get passed to the renderer, such as by associating a Renderman RIB archive with the scene.

Data that is not native to the package, such as various kinds of simulation, crowd animation, shading, and similar data that may be kept separately from the primary scene file, is assembled back in by either writing the data into the native scene format or by adding to the scene format a callback that runs at render time to output the special data into a format the renderer understands. Some developers do a better job of this than others. Therefore, this is one area in which studios and visual effects facilities invest a lot of in-house development.

Revision control is important with both 2D and 3D scene assembly. It is necessary to ensure that the most current relevant version of the asset is being assembled into the scene. This is especially important on the 3D side where there are many complicated compound assets. An asset tracking system that allows for revision control, labeling of revisions, locking a particular scene (2D or 3D) to a revision label, and annotation of versions is essential. Scenes, and their components, should also be associated with versions of the tools because it is sometimes the case that tool upgrades break older versions of their data files and may require time-consuming fixes by the artists.

The scene assembly process must be fed by a coherent, comprehensive revision management system that will allow the artists to trust that they are using the most appropriate version of the assets. Consider the importance of this when there may be hundreds or even thousands of assets in any given scene. The system itself must default to the most current agreed-on version of the asset and only vary from that upon request from an artist who has a particular need in a particular scene. Otherwise, the complexity encountered during scene assembly will quickly become overwhelming, and a large amount of time and money will be lost dealing with an easily avoidable problem.

Once all off-board scene and model data have been accounted for in the format of the primary 3D package, there is the issue of final model and scene assembly through referencing, importing, or merging. Each of these techniques has advantages and disadvantages. Merging, which is taking two files in the package format and combining them, has all the disadvantages of importing while being much more complex. No common package supports merging natively, so it is usually done using an in-house or third-party purchased plug-in or script. It is generally only used if the package of choice simply does not support either referencing or importing—a very uncommon situation with modern 3D software. In some cases, if a package only supports referencing, merging may be developed when an import rather than a reference is required.

Importing, as a way to assemble a shot or model, is easy in that it is well supported in popular packages such as Maya. The resulting scene or model file is independent of other files. An import takes the data from the external file, such as an articulation rig or some greeblie geometry, and writes that data locally into the scene file. Once the import is complete, the component(s) from the external file are now defined in the local scene file and no dependency on the other file is maintained. While this is a simple and reliable solution for small, fast jobs such as TV commercials, it quickly becomes untenable when trying to create large, long-term projects.

On the other hand, referencing, which involves a situation where the final scene or model file will contain data that is loaded from other files, is ideal for larger shops working on larger projects. Referencing (sometimes also called linking, such as in 3ds Max) does not break the connection between the file the reference was created in and the file the reference was created to. This means that any changes in the referenced file get reflected in the referencing file—generally upon reload (few, if any, 3D systems implement in-memory live referencing).
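
To make the difference concrete, here is a minimal sketch assuming Maya’s Python commands module; the file paths and namespace are placeholders, and other packages expose equivalent operations under different names.

```python
# Minimal import-versus-reference sketch, assuming Maya's cmds module.
# Importing copies the external data into the open scene and severs the link;
# referencing records a link to the external file, so later edits to that
# file show up in the scene on reload.
import maya.cmds as cmds

# Import: the greeblie geometry is written into the current scene file.
cmds.file("/assets/greeblies/greeblie_parts.ma", i=True)

# Reference: the rig stays in its own file; the scene only records the link.
cmds.file("/assets/rigs/hero_rig.ma", reference=True, namespace="heroRig")
```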

The big advantage of referencing is parallel iterative refinement. Because referencing allows for changes to propagate into all scenes or models that reference the changed subasset, artists can create and check in rough versions of their work, which allows the artists that depend on it to begin their own work. In this way, whole teams may work from rough to fine in parallel. For example, layout creates cameras, rough sets, and blocking; at the same time, set dressers and animators may create rough dressing and blocking animation. From there, lighters may join in by creating the master scene and/or rough shot lighting while the animators and dressers refine their work.

The challenge is that the changes may create unforeseen side effects in the referencing files. Systems for testing change integration and version control over the referenced assets are essential. Scene assembly from references, imports, or merges is as much about data management and developing good practices around testing and communicating changes as it is about any technical consideration. The artistic and artist management challenge is to get the decomposition of tasks correct and, thus, the definitions of what is referenced and what is local to the model or scene, based on workflow requirements. This greatly depends on the size and budget of the facility and how the skill sets of the artists break down, accounting for any plans to change this breakdown based on training and hiring. The variations on suggestions above, and elsewhere in this chapter, are relatively common within the industry.

Referencing is an essential part of a simultaneous multiuser workflow and large studios thrive on it. Without referencing, large studios could not have teams of sometimes as many as a dozen people working on a single model or shot simultaneously—a crucial part of a large-scale production pipeline. One reason that smaller studios tend to be less fond of referencing is that many 3D packages have limitations and/or problems with their referencing systems. This leads to many hours spent on debugging referencing-related problems and/or writing in-house systems to repair, extend, or bypass the referencing system in the commercial package. Vendors have been improving this ability in their tools for a few years now, however, such that referencing may be worth another look even at the smaller facilities.

2D Scene Assembly (Compositing)

Assembly of a scene in 2D requires a number of input plates that are then put together utilizing the compositing software. Creation of the input plates and management of that data are tied into the rest of the workflow—including 3D scene assembly. Plates come from one of three sources: original camera plates (and/or portions thereof isolated by keying or roto shapes), 2D animation and matte paint layers, and renders of 3D scene data.

3D scene assembly choices result in 2D plates that feed into the final comp. Usually, the entire 3D scene rendered into a single plate is not what is needed. Creating multiple plates may be done using renderer arbitrary output variables (AOVs) from a single scene file or by selectively turning on and off parts of the entire final scene and then rendering them separately. The latter approach can be achieved by separately rendering intermediate scene files that define components of the shot or by using some notion of layering or grouping in the primary 3D package to turn on and off geometry and lights per pass. Alternatively one could write a custom tool that does this process of sending multiple passes of the scene to a renderer separately. Although this approach seems cumbersome at first, it often turns out to be easier to manage than AOVs, which require render passes to be defined at shader definition time rather than on the fly based on the particular needs of a specific project, sequence, or even individual shot.

Setting up the 3D scene to create render passes for compositing depends on the compositing goals and style of the shop. Level of detail and render resource management considerations can also be taken into account: for example, splitting out the background elements of a shot in order to render them with different, cheaper settings than the hero elements. Both approaches require developing a system, within a given toolset, for defining holdout mattes for objects in the 3D scene that are in front of those in the current render; otherwise a lot of hand-rotoing will be required.

If the primary 3D package cannot already do something like this, a good way to achieve this holdout is to create a plug-in to a render prep system. This render prep plug-in would discard all objects behind those defined as being in the pass, at each frame, since objects can change relative positions during the scene. Then the objects in front are identified algorithmically based on their Z-depth, a cheap shader is assigned as the material for this holdout pass, and then the holdout objects are rendered. Subsequently, it is necessary to subtract the holdout alphas (which won’t always be one with motion blur) from the final image alpha.
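
A rough sketch of that render prep step follows. The scene queries and the render call are deliberately passed in by the caller, since they stand in for whatever the actual render prep system exposes, and the cheap shader name is likewise a placeholder.

```python
# Hypothetical per-frame holdout-pass construction. The caller supplies
# package-specific callbacks: z_depth(obj, frame), assign_material(obj, name),
# and render(objects, frame, output).
def build_holdout_pass(pass_objects, all_objects, z_depth,
                       assign_material, render, frame):
    # Use the nearest pass object as the comparison depth (a simplification);
    # anything behind the pass objects is discarded, anything in front
    # becomes a holdout.
    nearest_pass_depth = min(z_depth(obj, frame) for obj in pass_objects)
    holdouts = [obj for obj in all_objects
                if obj not in pass_objects
                and z_depth(obj, frame) < nearest_pass_depth]
    for obj in holdouts:
        assign_material(obj, "cheap_matte_shader")  # placeholder shader name
    render(list(pass_objects) + holdouts, frame, output="holdout_pass")
    # Downstream, the holdout alphas (not always 1.0 under motion blur) are
    # subtracted from the final image alpha.
    return holdouts
```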

Here are some passes that are common in the industry:

•   Arbitrary output variables:

   By lighting component (diffuse, specular, reflection, etc.). Usually used by shops that do a lot of lighting/relighting in the compositing software rather than with virtual 3D light rigs.

   By shader type, so that shaders are assigned that have been defined in advance to call particular AOV macros.

   As a render pass system more generally, by creating an in-house system by which a plug-in is used in the shading and render prep system to associate each surface with an AOV macro at scene assembly time rather than at shader definition time. By doing this, AOVs basically function like the system below.

•   Layering/grouping/multiple component files:

   Separating out foreground/midground/background and rendering with different geometric level of detail (LOD), different renderer LOD and quality parameters, different light rigs, etc., in order to save resources on background elements and prioritize foreground.

   Isolating particular characters, environment elements, or visual effects elements that require specific, separate image treatments in the compositing phase.

   Isolating elements (such as gigantic simulations) that are so resource intensive that they need to be rendered as a separate pass in order to make it through the renderer.

WORKING ACROSS MULTIPLE FACILITIES

Scott Squires

It is common for large visual effects film projects to be spread over multiple visual effects companies or facilities. This can be due to a combination of schedules, budgets, and specialization. The work is usually broken up by sequences but may in turn be broken up by specific types of visual effects (matte paintings, CG animation, etc.). When possible, it is best to break up the work to minimize the amount of interaction between companies. Overlap of work means that resources and schedules become even more problematic. It can also be very inefficient. This should be reviewed when creating the budget and considering the trade-offs with regard to the number of companies working on the project.

If a shot needs to go through multiple facilities (each facility doing a different CG character, one facility handling the animation and another doing the compositing, etc.), then any change or snag will cause a ripple delay through all of the facilities. If the director makes a change, then the shot may have to go through all of the same stages again. If there’s a conflict then there may be a lot of finger pointing about whose problem it is and how to solve it.

Unfortunately there are many circumstances where there will be overlap among facilities. This section provides some guidance on dealing with this overlap. The visual effects field does not have a lot of technical standards since the software and techniques are rapidly evolving. Many facilities use their own proprietary software or pipeline that they wish to keep private. The technical leads from the different facilities will need to discuss the requirements directly and work together to come up with a feasible solution that minimizes some of the risks. Note that this may even happen with previs in the pre-production stage. A production may move to a different location and require a local facility to rework some of the original previs.

If a project is known to have overlap at the pre-production stage, then the visual effects facilities should be notified in the bidding stages so they can plan accordingly. The facilities may have to agree to certain standards of data formats and transferring. They may also have to write specific programs to help import/export from their internal formats to something another facility can use. The pre-production time should be used to test the interchange of test shots to work out any problems before post-production begins.

The easiest interchange is when facilities use the same off-the-shelf software. Note that this also requires the facilities to be using the same version number. Sometimes visual effects facilities may be behind on updating their software because they are working on a large show and don’t want to risk having problems with the update. They may also have specialized plug-ins, scripts, and programs that they have created that depend on the version they are currently using.

One facility usually takes the lead for specific models or tasks and the other facility may take the lead on other items. Once the model or look is approved by the director, then the files and other data are provided to the second facility. The first facility provides visual references to the second facility for making comparisons.

Even when using the same software, additional work is likely necessary to make it work in another pipeline. In some cases it may be necessary to duplicate the work from scratch, such as building a model, if it is too problematic to transfer the necessary data into a useful form. This is why any overlap needs to be considered in the budgeting stage.

In some cases it may be a 911 (emergency) project where a facility is not able to accomplish what is needed in the time provided. In this case it can be difficult and costly for other facilities to step in quickly and take on some of the work due to the complexity of interchanging data and a very compressed schedule. As studios reduce post-production time and make creative changes very late in the schedule, these situations are happening more and more. In the end the studios pay a very large price for creating this situation, and the final quality likely suffers as well. At some point the project may have to be delayed due to this type of situation, which in most cases could have been avoided.

Images

Scanning and recording facilities have standard image formats, such as DPX,8 to pass images back and forth to visual effects facilities. The visual effects facilities should likewise agree on a standard image format that can be used to pass images back and forth between them. This format may not be used internally but could be used as an exchange format. This could be EXR9 or another format that maintains the full quality of the image. Image file naming schemes will have to be agreed on, along with the issue of color space. A known film clip or standard image is useful to confirm that the color isn’t being shifted inadvertently through the process. Images can be passed via high-speed connections or hard drives.
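
As one hedged illustration of such an exchange step, the sketch below assumes OpenImageIO’s Python bindings are installed and rewrites an internally used DPX frame as EXR; the file naming scheme is only a placeholder for whatever the facilities agree on.

```python
# Hypothetical exchange-format conversion, assuming OpenImageIO's Python
# bindings are installed. A DPX frame used internally is rewritten as EXR
# for interchange; the naming scheme below is a placeholder.
import OpenImageIO as oiio


def to_exchange_format(dpx_path, shot, frame):
    buf = oiio.ImageBuf(dpx_path)
    out_path = "%s_comp_v001.%04d.exr" % (shot, frame)  # placeholder scheme
    if not buf.write(out_path):
        raise RuntimeError(buf.geterror())
    return out_path


# to_exchange_format("/plates/sc010/sc010.0101.dpx", "sc010", 101)
```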

Models

It is common for CG models to need to be used by multiple facilities. Ideally the facilities use the same software, which makes transferring models easier. Many different model formats are available, each with its various pros and cons. Some standard formats are used for prebuilt models but these may be lacking the additional information required for a complex model. Standard formats also exist for the 3D scanning companies but these models lack some of the naming, structure, and metadata features that will be required for production use. Transferring hard surface models is usually easier than transferring creature models that require special skinning or complex fur and hair. In some cases a base model will be transferred that will have to be modified or rebuilt at least in part to work in another facility’s pipeline. The model lead from each facility should discuss which format works best for a specific project.

Texturing

There are many ways of using textures. Each facility may have a different means of applying the textures or even a different color space and gamma. The textures from one facility may have to be adjusted in a graphics program or in the shader of a 3D program to obtain a result consistent with another facility.

Animation

Many facilities consider their rigging of the models (animation skeleton system) proprietary. The dynamic simulation may be all done using special in-house proprietary software. The animation may require special plug-ins or scripts to handle procedural animation. The facilities may be using totally different animation software. All of these issues make it difficult to pass pure animation back and forth between facilities. In many cases the animation is rendered out as an image element that another visual effects facility can composite. It may be better to render the different lighting passes separately (specular, ambient lighting, etc.) so the other facility can do a final balance in the composite. This allows finessing the image without having to do a full re-render and sending back the element. In some cases the animation is baked out10 to provide what amounts to a 3D model per frame. This way they can be lit and placed by the second facility.

Compositing

Elements are passed between facilities but typically composite projects are not. Much depends on whether the same compositing software is used at all facilities. If the same software is used, the next issue will be the organization of the elements in terms of file names and directory structures. If it’s a complex composite with many elements, it may be faster and easier for the compositor to rebuild the composite rather than reverse engineer the composite done with a different style.

R&D

Research and development is where things become even more difficult. If one facility has developed a specialized software system for creating a specific look, then they may be reluctant to supply the source code or application to a potential competitor. It’s possible a nondisclosure agreement can be signed to allow proprietary information and data to be shared. In some cases, other facilities will have to try to re-create the finished look from scratch. The production VFX Supervisor and Producer will need to discuss this in pre-production with the facilities involved.

Summary

Interchanging shots and data among facilities can be difficult. This always incurs additional costs and time, so it is recommended that the number of facilities whose work overlaps be minimized. When possible, break the work into separate sequences or types of shots to minimize interaction. In all cases take this into account in the planning stages and be sure to determine (and test) a workflow and interchange process between facilities before post-production begins.

1 Hot head: A computer-assisted camera-mount head that can either record the pan and tilt of an operated camera or use previously recorded pan/tilt information that is then played back to drive the camera head mount in a manner that replicates the pan/tilt moves. As the equipment improves, more axes are becoming recordable and capable of being played back to drive the camera.

2 Glue code is code that ties different parts of the pipeline together, whether it is data interchange between programs that write different file formats or interprocess control of one piece of the pipeline by another.

3 A Technical Director (TD) in a feature animation facility may also be called a Technical Artist or Technical Animator, and his or her responsibilities include production disciplines such as Surfacing, Simulation, Articulation, etc. However, a Technical Director in a visual effects facility is usually a lighting artist or developmental lighting artist.

4 A database definition of an asset that is not actually a file on disk.

5 Tumble paint is a term of art used to distinguish 3D paint systems that allow direct painting on the mesh, such as Mudbox or Zbrush, as different from systems that use projection cameras to place textures on a model.

6 Relational database management systems.

7 Greeblies is used here to mean small reusable model geometry components and comes from the term used by ILM to describe an assortment of detail pieces from kit-bashed commercial plastic model kits used on production models.

8 DPX: scanned film file format.

9 EXR: open-source file format for high-resolution images developed by ILM.

10 Bake out: to output in a format that is fixed. In this case the model is no longer animatable but exists essentially as a 3D model for each frame.
