13

The Future of Digital Film

Almost everything in this book so far covers actual working practices in use today. But the digital intermediate is a rapidly evolving field. Within a year or two, new techniques and systems will be in use, giving rise to new methodologies and paradigms. For this reason, it’s useful to look at the possibilities of digital film mastering in the future, to try to anticipate where this technology might go, and where it has to go to overcome key issues.

Speculation about future trends is inherently inexact. So many things can change in a short space of time, and one small change can rapidly affect the bigger picture. However, we can make certain projections about the near future of the industry—predictions based on trends that have emerged in associated fields, namely computer and digital technology.

Hardware will become cheaper and more powerful. This single assumption, based on “Moore’s Law,” has a drastic effect throughout the digital pipeline.1 Traditional mastering is relatively unaffected by changes in the cost and power of digital technology, because the majority of its components are mechanical or chemical. The performance of the digital lab, however, is intimately tied to falling costs and rising computing power. Consider storage media: not only will the disk space necessary to store a feature film soon be affordable for individual desktop computers, but it may even make the sometimes tedious procedure of data management more … manageable. This trend has been observed in the last few years, as the average storage capacity of a digital intermediate facility has increased from several terabytes to several tens of terabytes, allowing facilities to handle multiple productions concurrently.

As CPU power increases relative to price, we can do more with the technology, achieving higher levels of quality and offering more intensive processes, such as filtering. Where displaying and processing scanned film footage in real time once required custom-made hardware, this capability can now be found in a number of off-the-shelf desktop PCs. A four-fold increase in available storage space, CPU power, bandwidth, and RAM is the difference between current 2k mastering systems and future, higher-quality 4k film mastering.

The industry will grow. The digital intermediate process for film will inevitably gain momentum over traditional labs for a number of reasons, but probably the overriding factor is that, eventually, finishing a film chemically will simply be more expensive than finishing a film digitally. Smaller companies will offer innovative ways to perform parts of the digital-mastering service at much lower cost. For example, a small facility could offer color-grading services for a film without offering a comprehensive film-scanning and recording service, opting instead to “farm out” those services to scanning boutiques.

13.1 The Future of Imaging

Digital imaging methods are already starting to replace other methods. At the consumer level, digital cameras already outsell film-based cameras by a large margin. Filmmakers are already experimenting with using high-definition digital video cameras for film production rather than the traditional 35mm pipeline. Some of them enjoy the increased freedom to experiment and the faster production turnaround, while others lament the inferior picture quality. Certainly no video camera currently on the market can match the 35mm format in terms of picture quality. However, new advances are slowly improving image quality and creating solutions for a number of other problems.

13.1.1 High Dynamic Range Imaging

High dynamic range (HDR) images were originally conceived to solve problems in digital photography and 3D imaging simulations. One of the frequent complaints of professional photographers who use both digital and film formats is that digital images simply don’t have the dynamic range of their film-based counterparts. Take a photo of a scene using a film camera, compare it to a digital image of the same scene, and the differences in color become apparent.

Many details in the color of a photograph aren’t even visible in such a straightforward comparison. For example, consider a photograph of a direct light source, such as a bare light bulb. The photograph and the digital image may look identical at first glance, but in fact their contents will be very different. If you were to decrease the brightness of the digital image, the light source would simply turn gray. However, if you were to decrease the brightness of the photographic image, the bulb’s filament would become visible, as would the scene beyond the bulb.

Digital intermediate facilities can replicate this result to some degree by using logarithmic image formats, such as Cineon or DPX files. Unfortunately, these files are typically limited to 10 bits of luminosity, providing a range of 1,024 levels, which is lower than film’s range by a factor of ten.2 Even without this limit, the vast majority of monitors that display digital images are themselves limited to around 100 steps of luminosity. In the digital intermediate pipeline, this isn’t an inherent problem, because it’s possible to scan film and output it directly to a new piece of film with marginal loss in visible tonality. However, anytime the luminance of an image is modified (which is a frequent occurrence during digital color grading), decisions are made that affect the color—decisions based on feedback from the digital image display. Images are darkened and brightened, contrast increased or reduced, based upon the image’s appearance on a digital device (although major modifications are usually checked on a test film print). A combination of careful color calibration and an experienced colorist ensures that the desired effect will ultimately be achieved in the film version, but it’s still a destructive process, reducing the quality of the image. An additional problem concerns long-term storage. How is it possible to guarantee an image will display correctly on devices that may not exist yet?
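The destructiveness of integer grading is easy to demonstrate. The following is a hypothetical Python sketch (not code from any grading system): darkening every 10-bit code value by half and then brightening it back collapses distinct levels, whereas the same round trip on floating-point values would be lossless.

```python
# Illustrative sketch: why grading in fixed 10-bit integers is
# destructive. Each operation quantizes to an integer code value,
# so halving and then doubling cannot restore every level.

def roundtrip_10bit(code, gain=0.5):
    darkened = round(code * gain)             # quantized to an integer
    return min(1023, round(darkened / gain))  # brightened back, clamped

surviving = {roundtrip_10bit(c) for c in range(1024)}
print(len(surviving))   # 513 -- roughly half of the 1,024 levels remain
```

The precise count depends on the rounding rule, but the principle holds for any fixed-range integer format: every luminance modification discards some tonal information.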

As we’ve seen throughout this book, one of the difficulties in working with color images, particularly those that originate in one medium and are destined for another, is that it’s difficult to guarantee that a given color in one medium will display as intended in another. The simple act of scanning a frame of film for video output moves it through three different color spaces—the original film color space, the color space of the scanner, and the color space of the video. Provided the same type of film, the same scanner, and the same output device are used, it’s possible to “reverse-engineer” the process to determine exactly which colors are transferred accurately and which aren’t. This process is currently used by most facilities completing a feature film digital intermediate, where the source material is film and the output is a combination of film and video. But as the number of different input and output formats increases, which is likely to happen over the next few years, this process becomes much harder to control.

HDR images aim to solve these problems by encoding colors using physical values, in terms of their electromagnetic wavelengths (such as the CIE XYZ color space), together with a more “absolute” measurement of luminance.

HDR images can record arbitrary numbers (similar to density readings) for pixel values, rather than values on a fixed scale. With this paradigm, each pixel is given a floating-point number (e.g., 0.9 or 3.112), and the bit depth instead relates to the precision of the number (i.e., the number of decimal places it’s accurate to). Images of this type are also referred to as “floating-point images.” In practical terms, this means that film might then be sampled for density at each pixel, rather than as a fraction of the maximum and minimum brightness allowed by the system. This, in turn, means that different film stocks will be represented more accurately in digital form, and it relieves much of the burden on the scanner operator of manually setting the dynamic range.

Furthermore, this has great implications for the color-grading process, because it provides a much richer set of data to work with. Finally, HDR images aren’t prone to effects such as clipping or crushing of luminance levels, because the information isn’t destroyed. Just as it’s possible to push a video signal outside of viewable limits and then recover it later, floating-point values can be increased beyond a viewable limit and reduced later if necessary.
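A hypothetical sketch makes the contrast concrete (single Python values stand in for whole images, and the function names are invented for illustration):

```python
# A clamped 8-bit pixel saturates at 255 and loses detail for good;
# a floating-point pixel may exceed the viewable range (1.0 here)
# yet still carries its true value, so the operation can be undone.

def brighten_8bit(value, gain):
    return min(255, round(value * gain))   # clipped: information lost

def brighten_float(value, gain):
    return value * gain                    # out-of-range values kept

clipped = brighten_8bit(200, 2.0)          # 255, not 400
print(round(clipped / 2.0))                # 128 -- the original 200 is gone

hdr = brighten_float(0.8, 2.0)             # 1.6, beyond viewable white
print(hdr / 2.0)                           # 0.8 -- fully recovered
```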

Working with images in the HDR format achieves two distinct aims. First, it ensures colors are recorded more accurately. Second, the HDR format can more effectively mix images that have different native color spaces, and it can output to any number of different media, provided the color space of each device is known. For example, video could be digitized and stored as an HDR image. Given the color space of video, each YIQ (or YUV) value could be encoded as an XYZ value. Having done so, the video could be output to a variety of media, such as film, paper, video, and digital projection by converting the XYZ values as needed for the particular output requirements (a process known as “tone mapping”). For example, the film and digital color spaces may encompass all of the encoded XYZ values, meaning they could be output without modification, while the others could use a conversion process to approximate the desired result.
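As an illustration of tone mapping, one simple global curve (a Reinhard-style operator, chosen here as an assumption rather than taken from any particular system) can be sketched in a few lines of Python:

```python
# Reinhard-style global tone mapping: L / (1 + L) maps any
# non-negative scene luminance into the display range [0, 1).
# Production tone mappers are far more sophisticated and are
# tailored to each output device's color space.

def tone_map(luminance):
    return luminance / (1.0 + luminance)

for L in (0.1, 1.0, 10.0, 1000.0):
    print(round(tone_map(L), 4))
# 0.0909, 0.5, 0.9091, 0.999 -- dark values pass through almost
# linearly, while very bright values roll off smoothly toward 1.0
```

The essential idea is that unbounded luminance values are compressed into the display’s range rather than clipped, so the same HDR master can be remapped for film, video, or digital projection.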

Encoding chromaticity information in this way would also immensely aid the color-grading process. Experienced colorists could check colors based on their physical values, rather than relying on feedback from the monitor. Color-grading software could account for the color space of the display used to make grading decisions and compensate for the limits of the display when altering the image colors. In addition, the image could be displayed in a variety of different ways without affecting the content. For example, it’s possible to simulate exposure changes in photographs that have been converted to HDR images.


Figure 13–1   With conventional image formats, a number of color-space conversion processes are used


Figure 13–2   With HDR images, the native color space is retained, requiring tone mapping for output purposes

Several file formats already exist that can take advantage of this extended color space. TIFF files can be encoded using a 24-bit or 32-bit LogLuv method (i.e., logarithmic luminance values against U and V CIE coordinates) or the Pixar 33-bit specification. Two newer file formats—Industrial Light & Magic’s OpenEXR specification (www.openexr.org) and the images native to the Radiance lighting-simulation system (radsite.lbl.gov/radiance)—are also in widespread use and offer similar functionality. In addition, the camera raw files created by some digital cameras may also qualify as candidates for HDR imaging, although these files inevitably vary by camera design and manufacturer, making compatibility problematic. In time, Adobe’s recent Digital Negative (DNG) open file specification may help to unify the different raw formats, ultimately giving future digital still and video cameras the ability to output to a single format.

Using digital technology also allows images to be combined to increase the effective dynamic range. For example, an image of a scene recorded onto a typical piece of photographic film has around 5–8 stops of latitude. It’s possible to increase this latitude further by taking additional pictures of the scene at higher levels of exposure (or lower levels, or both), capturing detail in areas that fell outside this range in the original image. Provided the image content is exactly the same (i.e., the lighting and composition of the scene haven’t changed), and knowing the exposure setting of each picture, the images can be combined digitally, creating a new image with a far greater dynamic range than even the original film image.
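The merging step can be sketched per pixel, as below. This is a simplified, hypothetical Python illustration: the triangle weighting scheme and the function names are assumptions, and a real system must first linearize the camera or film response curve. Each exposure estimates scene radiance as pixel value divided by exposure time, and the weight trusts mid-range pixels while distrusting values near clipping.

```python
# Hedged sketch of combining bracketed exposures into one HDR value.
# Assumes linear pixel values in [0, 1].

def weight(p):
    return max(0.0, 1.0 - abs(2.0 * p - 1.0))  # peaks at mid-gray

def merge_exposures(pixels, exposure_times):
    """pixels[i] was captured with exposure_times[i] seconds."""
    num = sum(weight(p) * (p / t) for p, t in zip(pixels, exposure_times))
    den = sum(weight(p) for p in pixels)
    return num / den if den else pixels[-1] / exposure_times[-1]

# The same patch shot at 1/125 s (nearly clipped) and 1/500 s:
radiance = merge_exposures([0.96, 0.25], [1 / 125, 1 / 500])
# ~124 -- close to what the short, well-exposed frame alone
# implies (0.25 * 500 = 125), because the near-clipped frame
# receives very little weight
```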


Figure 13–3   Multiple exposures can be combined to create a high dynamic range image

Several systems, such as Idruna’s Photogenics HDR image-editing system (www.idruna.com), are already designed to accommodate some HDR formats and may also make it possible to combine images of multiple exposures to create a digital image of extremely high dynamic range.

In the future, any of these HDR formats might become the digital intermediate standard, providing increased color accuracy and compatibility with a range of output media, and ensuring greater longevity of archives.

13.1.2 Future Displays

Many digital intermediate facilities have been aware of the limitations of conventional CRT computer monitors for a while now, in terms of luminance, dynamic range, and color space—and also in psychological terms. Viewing images on a small screen is a completely different experience than viewing the same image projected onto a much larger screen (which is one reason why people still go to the cinema to watch films). It can be important to replicate these more theatrical conditions to get a better idea of how the final program will be presented. Even when the final output is video, viewing the footage on a larger screen can be very beneficial.


Figure 13–4   Idruna’s Photogenics HDR software enables you to edit high dynamic range images

The current trend is toward digital projectors as the primary method of display. Until recently, digital projectors were of inferior resolution compared to HD video monitors, and they lacked accurate color rendition and luminance power. The latest digital projectors provide resolution that’s at least equal to HD video, with the added benefit of a larger color space and greater dynamic range. Future projectors, such as the Sony 4k SXRD (www.sony.com/professional), which can display images at sizes up to 4096 × 2160 pixels, should be able to play digital images with a level of quality that’s close to that of a first-generation film print.

13.1.3 The Future of Photographic Film

Many people refer to the “imminent death” of photographic film as a capture medium. The truth is, film is unlikely to die any time soon for a number of reasons. First of all, it still remains the highest-quality medium for image capture today. Second, and perhaps more importantly, many filmmakers simply enjoy the process of working with film. It’s a robust format, the pitfalls are well known, and experienced cinematographers can anticipate the results of a shoot before the film is even developed.


Figure 13–5   Sony’s 4k SXRD projector is able to display high-resolution images

Eventually, the increased quality and convenience of digital capture methods may indeed make film redundant (although even then, undoubtedly, many will still use it as a format). When film actually becomes redundant depends as much upon advances in film stock as on advances in digital imaging. New film stocks may emerge to better bridge the gap between shooting on film and taking advantage of the digital intermediate process.

Film stocks such as Kodak’s Vision2 HD super-16mm stock (www.kodak.com/go/motion) are designed, with film and imaging technology created specifically for the purpose, to ease the process of transferring 16mm film to an HD environment and matching its color and look. Future film stocks may continue to expand upon this idea, perhaps even making film a more viable capture format for video advocates.

13.1.4 The 4k Problem

So-called “4k” digital intermediates are the current “holy grail” (or one of them, at least) of most film-based digital intermediate facilities. Images of 4k (i.e., those that contain approximately 4000 pixels per line) are thought to closely match the spatial resolution of most film stocks and therefore can facilitate the migration from 35mm film formats to digital ones without audiences being able to detect any quality difference. Many distributors are therefore waiting until 4k digital intermediate pipelines become commonplace before adopting the widespread use of the digital intermediate process.

The problem with 4k images is that they are very difficult to manage. Only relatively recently has working with 2k digital images (i.e., those containing approximately 2000 pixels per line, roughly the same resolution as high-end HD video formats) become convenient. Even so, resources are often stretched to the limit. Images at 2k consume huge amounts of disk space and processing power, and they take a long time to copy or edit, especially compared to SD video images.

Working with 4k images quadruples the processing, transfer, and storage requirements, because they have four times the number of pixels of their 2k counterparts. For the vast majority of facilities, this is simply too much data to handle. For a start, these requirements force a choice between doing four films at 2k or one at 4k, and since 4k mastering doesn’t generally pay four times as much as 2k mastering, it becomes an economic issue as well. Second, 4k data can’t be moved fast enough to maintain the level of interactivity and feedback of 2k pipelines. Many systems can play 2k images in real time, whereas they can’t play 4k images at that speed. While it may be possible to view 2k proxies and apply changes to the 4k source data, this approach, to some extent, defeats the purpose of using 4k images.
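The arithmetic behind this is straightforward. Here is a rough Python sketch, assuming 10-bit RGB DPX frames (three 10-bit components packed into one 32-bit word) and illustrative frame sizes and running time:

```python
# Back-of-the-envelope storage and bandwidth for the "4k problem".
# The frame dimensions and 90-minute feature length are assumptions
# for illustration, not figures from any particular production.

BYTES_PER_PIXEL = 4  # 3 x 10 bits, padded to 32 bits in a DPX file

def frame_bytes(width, height):
    return width * height * BYTES_PER_PIXEL

def feature_bytes(width, height, minutes=90, fps=24):
    return frame_bytes(width, height) * minutes * 60 * fps

k2 = frame_bytes(2048, 1556)          # full-aperture "2k" scan
k4 = frame_bytes(4096, 3112)          # "4k": four times the pixels

print(k4 / k2)                                     # 4.0
print(round(k2 / 1e6, 1))                          # ~12.7 MB per 2k frame
print(round(feature_bytes(2048, 1556) / 1e12, 1))  # ~1.7 TB for a 2k feature
print(round(k4 * 24 / 1e9, 1))                     # ~1.2 GB/s for 4k real time
```

The last figure is the crux: sustaining over a gigabyte per second is what makes real-time 4k playback so much harder than 2k.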

In the march of progress, 4k pipelines will become more feasible and therefore more available. However, it’s likely that this change will be brought about by improved technology rather than the desires of distributors.

13.2 Future DI Pipelines

Changes will be made to existing pipelines, and new work flows and methods will be developed to take advantage of improved technology and new features. It may well be that the industry changes from being facility-based to a more production-centric approach: digital intermediate teams will be freelance and assembled for a specific production, with all the necessary equipment leased for the duration of the production, as opposed to the current trend of facilities providing all the services. Alternatively, the large studios may establish their own digital intermediate departments and use them for all of their productions.

New work flows should help to better standardize the process—such as by introducing well-defined QC chains throughout each production and possibly even tagging every shot with information (i.e., metadata) that’s updated throughout the process.

13.2.1 Storage Devices

One problem with digital storage devices is that they aren’t intrinsically visual. A piece of film can be placed on a lightbox, or even simply held in front of a light, to display its content. A video can be put into a VCR for convenient viewing on a television or video monitor. A digital image, however, must first be copied to a suitable system and then viewed on a monitor using an appropriate software package. This requirement makes it very difficult to quickly identify the contents of a tape backup made a few weeks previously. Of course, some asset management systems can maintain thumbnail images of the contents of different storage devices, but such systems (i.e., ones capable of storing the thumbnail images) can’t always be accessed. Instead, it’s possible that the storage devices themselves will be able to display content.

This functionality is already possible with lower-capacity portable storage devices. For example, the Archos AV4100 (www.archos.com) is able to (among other things) store and display 100GB of images. Of course, this product isn’t necessarily designed for storing the images typically used in a digital intermediate environment, and it’s much more expensive than a regular 100GB portable disk drive. In time, however, other storage manufacturers may follow Archos’s lead.

Alternatively, such devices can be used to maintain the reference material of all the rushes. After a scene has been shot and transferred to a digital format, it can be copied, compressed, and placed on the device. Filmmakers can then use the device to check rushes, or perhaps even rough cuts, while away from the screening rooms.

13.2.2 Portability

As technology improves, it tends to shrink as well. Smaller equipment is more portable and therefore can be used on the set. This, in turn, opens up new options during filming. First of all, more immediate feedback is available. A colorist can apply quick grades to some of the footage, which may help determine whether complex shots should be relit and rephotographed to obtain higher-quality results. The same is true of any other aspect of the digital intermediate process—effects can be applied to shots for quick previews, and the footage can be QCd (or quality checked) for problems. Of course, the fast pace of most production sets may prevent the use of such work flows because they could slow down the production, but at least the option will be available.


Figure 13–6   The Archos AV4100 can be used as a model for more visual data storage devices

13.2.3 Ease of Use

As hardware advances over time, software and the user interfaces of its various components improve as well. Each new version of a software product tends to bring improvements aimed at a wider audience, and is usually easier to operate than previous versions were, making it possible for crew members to be trained in certain aspects of a system. At the least, images and other digital media can be easily displayed on the set, without requiring specialist operators to perform this process.

Other equipment will be introduced that provides more intuitive interaction with digital media. For example, Wacom’s Cintiq graphics tablet (www.wacom.com) enables the user to interact with a computer system simply by drawing on the screen with a stylus.

13.3 Distribution

One of the most important aspects of creating moving pictures is ensuring people can see them. At the present time, films are distributed to cinemas, made available for rental, sold on DVD, and even broadcast on TV and airplanes. Other types of productions may be shown on television or the Internet. Industry developments may bring about new methods and types of distribution and may make existing distribution methods more accessible.


Figure 13–7   Wacom’s Cintiq product range makes for a very intuitive interface device

13.3.1 Digital Cinema

Perhaps the next significant change to the industry will be the widespread adoption of digital cinema. At the moment, digital cinemas are rare, and so outputting to a digital cinema format is not often requested. Probably the most established body in this area, Digital Cinema Initiatives (DCI), a consortium of cinema distributors, is in the process of compiling guidelines for the digital cinema format. These guidelines will cover compression, frame rates, and image sizes. (For information on these guidelines, refer to the Appendix.) Once these specifications have been finalized and published, a natural shift in many digital intermediate pipelines will occur to better accommodate them, more so once digital cinema starts to gain prominence.3

Creation of output for digital cinema involves taking the digital source master, created at the end of the digital intermediate process, and using it to create a digital cinema package (DCP) that combines visual, aural, and verbal components within a compressed, encrypted data set. The facility creating the digital intermediate (and hence the digital source master) may be capable of creating the DCP, or perhaps an external facility can use the digital source master to create the DCP.

Once completed, the DCP is ready for duplication, and then it’s transported, using physical media, satellite feeds, or networking as required, to each cinema. It may then be combined with other elements, such as forthcoming trailers and advertisements, into a playlist, which can be displayed.

13.3.2 Internet Distribution

The Internet is bound to become an important distribution channel in the long term. Right now, each user’s average bandwidth is too limited to transmit footage of acceptable quality in a reasonable time. The problem is complicated somewhat by the fact that transmission speed tends to vary, making it difficult to accomplish real-time playback from a server. Other pitfalls will be encountered, too: footage has to be encoded to work specifically with different playback software that users may or may not have, and no one is entirely sure how to effectively prevent material from being pirated. However, for some, these issues aren’t important—the casual user, creating freely viewable content for a website, isn’t concerned about pirated copies, nor is such a user troubled by the fact that certain people may not be able to view the encoded footage.

One benefit of Internet distribution is that it allows schedule-independent viewing. At present, cinemas are efficient only when every screening is fully booked. To improve the odds of this happening, each film is shown according to demand. Popular films are shown regularly, even on multiple screens. But a moviegoer may want to see a less-popular film, and such films are shown at inconvenient times (which, of course, makes the film even less popular, and a vicious circle ensues). Similarly, television broadcasts are programmed for specific times, so viewers who miss a broadcast (and didn’t set their VCR to record it) won’t get to see it. With the Internet, on the other hand, this issue becomes much easier to deal with. The program is always available to watch, and it requires resources only when it’s being watched (aside from the space required on the distribution server), making it a far more efficient distribution method. Further, programs on the Internet are accessible within the home, which provides an added layer of convenience.4

New developments are also improving the speed of data distribution across the Internet. For example, the Bittorrent protocol improves speed during high-traffic periods by using a data-swarming approach. When many users access a single file, rather than each of them transferring it from a single location (which places a lot of strain on the originating server and significantly slows transmission), the users access parts of the file from other users. With very popular files, the transmission rate may even be higher than the maximum bandwidth of the originating server. Moreover, such a system is easily implemented, with freely available systems such as Downhill Battle’s Blog Torrent (www.blogtorrent.com), which features a simple means of installation on any website.
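A toy Python model (with purely illustrative numbers, not a protocol implementation) shows why swarming can outpace a single server:

```python
# In classic client-server distribution, total delivery is capped by
# the server's upstream bandwidth. In a swarm, peers re-serve the
# pieces they already hold, so each peer's upload capacity joins
# the pool. All rates below are hypothetical.

def aggregate_mbps(server_up, peers, peer_up):
    classic = server_up                    # server is the bottleneck
    swarm = server_up + peers * peer_up    # peers contribute uploads
    return classic, swarm

classic, swarm = aggregate_mbps(server_up=10.0, peers=100, peer_up=0.5)
print(classic, swarm)   # 10.0 60.0 -- the swarm outpaces the server
```

This simple sum ignores piece scheduling and peer churn, but it captures why very popular files can transfer faster than the originating server’s own bandwidth would allow.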

Internet distribution probably won’t mature until it becomes more accessible to the casual user. For example, set-top boxes that can connect to the Internet will simplify the downloading and playback processes. However, it’s unlikely that the average user’s bandwidth will increase dramatically any time soon, which means that material must either be of low quality so it can be played as soon as it’s requested, or be of higher quality and take several hours to transmit before the user can view it. Sony has recently announced plans to create a downloadable service, whereby people can pay for films to be downloaded to their cell phones, which may well serve to kick-start a whole new era of distribution.

13.4 The Business Model

Many individuals involved in establishing independent facilities specializing in the digital intermediate process want to know how to make it work from an economic standpoint. At present, the only processes that really require an independent digital intermediate facility are telecine transfers, feature films, (perhaps) online videos, and video-to-film conversions, because these processes require specialized equipment and technical expertise. Other pipelines, and the other aspects of establishing an independent facility, can usually be handled rather easily.

The paradox, of course, is that such equipment and expertise cost a great deal of money, making it difficult to price services competitively enough to recover working costs, particularly when, as with feature film production, the digital intermediate isn’t a strictly essential process (at least not until the rise of digital cinema). However, money can be made, and income may be generated from by-products of the process rather than from the sale of the digital intermediate process itself.

13.4.1 Visual Effects

Many facilities that offer a digital intermediate process also have separate visual effects departments for compositing and animation (this also applies to the use of optical effects and retouching processes). A production that selects a particular facility to do its digital intermediate often ends up also diverting the production of many visual effects shots to the same facility, partly out of convenience but also out of proximity. For example, when certain shots are seen in context for the first time in the digital intermediate environment, the filmmakers may decide that additional effects are required, particularly when they can be produced right down the hall.

13.4.2 Deliverables

One of the most profitable options within the digital intermediate environment is the creation of additional deliverable material. For example, it may cost a great deal of money to digitally master a film and produce a film print for distribution, but additional copies can be produced very quickly, easily, and cheaply. Furthermore, most facilities are equipped to produce video masters (such as HD video versions) at negligible additional expense. And of course, this can be extended to the creation of DVD masters and re-edited versions (such as for television broadcast or airlines). On the other hand, distributors tend to budget separately for each deliverable format, so a facility supplying all the relevant formats should just about break even. Incidentally, deliverable requirements can also involve the production of theatrical trailers, which are typically composed of elements that have already been scanned and finished for the final product. Therefore, creating trailers simply requires re-editing and outputting.

Finally, once digital cinema becomes more prominent, creating separate digital cinema packages may be required for each production. A facility could specialize in DCP creation for projects that have undergone a digital intermediate process elsewhere, or even for those that haven’t. These projects may only require being transferred and prepared for digital projection.

13.4.3 Digital Dailies

With suitably equipped facilities, every recorded frame can be digitized as soon as it’s available. This capability makes it possible for digital “rushes” (or digital “dailies”) to be generated, so that all the footage requiring editing and viewing can be copied and output to any required formats. This capability will eventually accelerate the digital intermediate process, and produce images of higher quality. In addition, original material will be less likely to become damaged (because it will be handled less).

The problem is that the facility in question must have adequate storage space (around ten times the space required by a cut-together production) to accomplish this process, because most scanned material is never used. In addition, the production will occupy this space for a much longer time than it would if it had been transferred after editing was completed. The concept of digital dailies is covered in Chapter 5.

13.4.4 Production Supervision

As filmmakers become more aware of possibilities of a digital intermediate process, there’s more of a need for supervision during the production. This supervision must be provided by someone from the digital intermediate facility who can advise the production team as to the available options when the digital intermediate process begins. The supervisor is also needed to ensure that images are lit and shot in a suitable way that enables processing later on.

13.5 Summary

At present, possibly the most immediate problem is the lack of standardization. One remedy is the widespread adoption of a common image file format, especially one that is color-space independent, as several HDR image formats are. Tagging each scene and shot with information that can be continuously updated throughout the production will aid the QC process immensely, because problems can more easily be traced back to their point of origin. The quality of the final output will improve as more masters are generated directly from the digital source, rather than through the more traditional methods of duplication, which degrade quality with each generation. These improvements will become more feasible as hardware advances to the point where additional copies can be generated much more quickly than at present, or as the price of each system decreases so that multiple systems can be used to produce output simultaneously.

Regardless of future changes, the digital intermediate process has a lot of untapped potential for creativity and flexibility, and some of these options are covered in the following chapter.

1 Moore’s Law states that computer systems double in power every eighteen months.

2 The human eye is thought to have a dynamic range of around 10,000:1 under suitable conditions.

3 At the time of writing, Ireland had announced plans to install approximately 500 digital projectors in cinemas throughout the country, becoming the first country to actively adopt digital cinema; current estimates predict that the majority of DCI-compliant cinemas should be ready during 2006 or 2007.

4 The Internet loves to count things too, so accurate statistical information, such as how many times a particular production has been watched, can be easily obtained.
