Chapter 9

Conclusion

9.1 Achievements of Photorealistic Rendering

Photorealistic rendering and global illumination algorithms have come a long way since the publication of the first recursive ray-tracing algorithm in 1979. There has been a gradual evolution from simple algorithms, some of which would be deemed hacks by today's standards, to very advanced, fully physically based rendering algorithms.

It is now possible, within a reasonable amount of time, to generate an image that is indistinguishable from a photograph of a real scene. This has been achieved by carefully researching the physical processes that form the basis of photorealistic rendering: light-material interaction, light transport, and the psychophysical aspects of the human visual system. In each of these domains, extensive research literature is available. In this book, we have tried to give an overview of some of these aspects, mostly focusing on the light transport mechanism. We strongly believe that, as with most modern algorithms, a good understanding of all the fundamental issues is the key to well-designed global illumination light transport algorithms.

Global illumination has not yet found its way into many mainstream applications, but it has already seen some use in feature-animation films and, to a limited extent, in computer games. High-quality rendering of architectural designs has become more common (although still unusual), and car manufacturers have become more aware of the possibilities of rendering cars in realistic virtual environments for glossy advertisements. Moreover, recent advances have indicated that fully interactive ray tracing is already a possibility for specialized applications and hardware.

As such, photorealistic rendering has certainly propelled forward the development of high-quality visualization techniques.

9.2 Unresolved Issues in Photorealistic Rendering

Research in photorealistic rendering is still alive and well, with a large number of publications devoted to the topic every year. There are still a number of unresolved issues, which will undoubtedly form the topic of future research. We have compiled a few topics that we expect to be heavily researched in the near future:

Acquisition and modeling of BRDFs. There has been considerable effort to measure the BRDFs of real materials and to design usable models for computer graphics, but this field still needs a great deal of research to provide reliable, accurate, and cheap ways to evaluate BRDF models. Measuring devices such as gonio-reflectometers should be made adaptive, so that they take more samples in those regions of the BRDF where more accuracy is needed. Image-based acquisition techniques, driven by ever-cheaper digital cameras, will be used much more often.

Acquisition of geometry and surface appearance. Computer vision has developed several techniques for acquiring the geometry of real objects from camera images, but doing so remains a major problem when the surface of the object is nondiffuse or when the nature of the illumination on the object is unknown. Surface appearance, such as textures and local BRDFs, has recently been captured from photographs as well. Combining these two fields to build an integrated scanner seems a very promising research area. Emphasis should also be placed on in-hand scanning, where the user manipulates an object in front of a camera while all relevant characteristics are captured.

Self-adaptive light transport. The light transport simulation algorithms outlined in this book come in many different flavors and varieties. Some algorithms perform better in specific situations than others (e.g., radiosity-like algorithms behave better in purely diffuse environments, ray tracing works well in highly specular scenes, etc.). Little effort has been made so far to design an overall global illumination algorithm that adapts itself to these various situations. Such an algorithm would pick the right mode of simulating light transport depending on the nature of the surfaces, the frequency content of the geometry, the influence on the final image, and so on. Partially computed illumination results should also always be stored and made available for future use by different light transport modes.
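
As a purely illustrative sketch, the C++ fragment below shows one way such an adaptive algorithm might dispatch between transport modes on a per-surface basis. The strategy names, the diffuse/specular energy heuristic, and the thresholds are all hypothetical assumptions made for illustration, not an established algorithm from the literature.

    // Hypothetical per-surface transport-mode dispatcher.
    // All names and thresholds are illustrative assumptions.
    enum class TransportMode { RadiosityLike, StochasticPathTracing, SpecularRayTracing };

    struct SurfaceStats {
        double diffuseFraction;   // fraction of reflected energy that is diffuse
        double specularFraction;  // fraction of reflected energy that is specular
    };

    TransportMode chooseTransportMode(const SurfaceStats& s)
    {
        // Nearly pure diffuse surfaces: radiosity-like, finite-element methods
        // behave well and allow reuse of a view-independent solution.
        if (s.diffuseFraction > 0.9)
            return TransportMode::RadiosityLike;

        // Nearly pure specular surfaces: classic recursive ray tracing handles
        // mirror-like transport cheaply and accurately.
        if (s.specularFraction > 0.9)
            return TransportMode::SpecularRayTracing;

        // Everything in between: fall back to a general stochastic method.
        return TransportMode::StochasticPathTracing;
    }

In a full system, the choice would also take into account the geometric frequency content and the estimated influence on the final image, and partial results computed by one mode would be cached for reuse by the others.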

Scalable and robust rendering. Scenes that include very high complexity in illumination, materials, and geometry remain challenging. Better and cheaper acquisition technology is driving the demand for rendering such complex scenes in the future. Currently, a user has to manually pick approximations, rendering algorithms, and levels of detail to achieve reasonable quality and performance for such scenes. This manual approach is clearly not desirable, particularly in applications such as games, where players interact with dynamically varying scenes while generating content on the fly. Robust algorithms that scale to complex scenes and handle scene complexity automatically, without user intervention, will be critical in the future.

Geometry-independent rendering. Current light transport algorithms assume that the geometry of the scene is known and explicitly compute a huge number of ray-object intersections to determine where light is reflected off surfaces. In the future, it is likely that primitives whose geometry is not explicitly known will be used in scenes to be rendered. Such primitives might be described by a light field or by another implicit description of how light interacts with the object (e.g., a series of photographs). Incorporating such objects in a global illumination algorithm will pose new problems and challenges. Storing partial illumination solutions independently of the underlying geometry (e.g., photon mapping) should also be researched further.
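
To make the idea concrete, one could imagine a primitive interface that answers radiance queries directly, so that the integrator never sees explicit geometry. The interface below is a hypothetical sketch, not the API of any existing renderer.

    #include <optional>

    struct Vec3 { double x, y, z; };
    struct Ray  { Vec3 origin, direction; };

    // Hypothetical interface: the integrator only asks what radiance travels
    // back along a query ray; how the object answers (triangle mesh, light
    // field, a set of photographs, ...) is hidden behind the interface.
    class ScenePrimitive {
    public:
        virtual ~ScenePrimitive() = default;

        // Radiance (as an RGB triple) leaving the object toward the ray origin,
        // or std::nullopt if the query ray misses the object entirely.
        virtual std::optional<Vec3> radianceAlong(const Ray& query) const = 0;
    };

A light-field-backed object would implement radianceAlong() by interpolating its stored samples, while a classic mesh-based object would implement it by ray-object intersection followed by BRDF evaluation.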

Psychoperceptual rendering. Radiometric accuracy has been the main driving force for global illumination algorithms, but since most images are viewed by human observers, it is usually not necessary to compute to this level of accuracy. New rendering paradigms should focus on rendering perceptually correct images. A perceptually correct image does not necessarily capture all the radiometric details, yet a viewer might still judge it to be realistic. It might be possible to omit certain shadows, drop certain highlights, or even simplify geometry, if doing so would not stop a human observer from judging the image as realistic. Radiometric accuracy is best judged by comparing a rendered image with a reference photograph and measuring the amount of error. Psychoperceptual accuracy is probably best judged by having a human look at the rendered picture and asking whether it looks “realistic.” At this point, however, very little research exists on how this could be done.
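
For the radiometric half of that comparison, the error measure can be as simple as a per-pixel root-mean-square difference between the rendered image and the reference photograph. The sketch below assumes both images are supplied as flat arrays of linear RGB values with identical layout; it is meant only to illustrate the contrast with perceptual metrics, which would additionally weight differences by their visibility to a human observer.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Root-mean-square error between a rendered image and a reference
    // photograph, both given as flat arrays of linear RGB values in the
    // same layout (assumed to be of equal length).
    double rmsError(const std::vector<double>& rendered,
                    const std::vector<double>& reference)
    {
        double sumSq = 0.0;
        const std::size_t n = rendered.size();
        for (std::size_t i = 0; i < n; ++i) {
            const double d = rendered[i] - reference[i];
            sumSq += d * d;
        }
        return std::sqrt(sumSq / static_cast<double>(n));
    }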

Integration with real elements. It is likely that the integration of real and virtual environments will become an integral part of many applications. This entails not only putting real objects into virtual scenes, but also placing virtual elements in real scenes, e.g., by using projectors or holography. A seamless blend between the real and virtual elements then becomes a major concern. This blend includes geometric alignment of real and virtual elements, but also consistent illumination; for example, a virtual element could cast shadows on real objects and vice versa. Developing a good framework for achieving such an integrated rendering system will probably evolve into a major research field in the coming years.

As a major theme covering all these issues, one can think, or dream, about what the ultimate photorealistic renderer of the future would look like. It is very hard to make predictions about specific algorithmic techniques, but it is nevertheless possible to list a few of the requirements or features such a rendering tool should possess:

Interactivity. Any rendering algorithm of the future should be able to render scenes at interactive speeds, irrespective of scene or illumination complexity.

Any material, any geometry. All possible materials, from pure diffuse to pure specular, should be handled efficiently and accurately. Moreover, any type of geometry should be handled as well, whether it is a low-complexity polygon model or a scanned model containing millions of sample points.

Many different input models. It should be possible to take any form of input, whether it is a virtual model or a model acquired from the real world. This probably means leaving behind the classic polygon model and texture maps for describing geometry and surface appearance and adopting other forms of geometry representation.

Realism slider. Depending on the application, one might settle for different styles of realism: for example, realistic lighting as one would experience it in real life; studio realism, with lots of artificial lighting designed to eliminate unwanted shadows; or lighting designed to optimally present products and prototypes. This should be possible without necessarily altering the scene input or the configuration of the light sources.

9.3 Concluding Remarks

Computer graphics is a very exciting field in which to work and is probably one of the most challenging research areas in computer science because it has links with many other disciplines, many of them outside the traditional computer science community. It is exactly this mix with disciplines such as art, psychology, filmmaking, biology, etc. that makes computer graphics very attractive to many students and enthusiasts.

The authors have more than 40 years of accumulated experience in this field, but we still have the ability to be amazed and surprised by the many exciting new ideas that are developed each year. By writing this book, we hope to have made a small contribution to keeping people motivated and enthusiastic about computer graphics, and we can only hope that, someday, an exciting new computer graphics technique will grow out of some of the ideas presented here.
