1 Introduction
Graphics is a broad field; to understand it, you need information from perception, physics, mathematics, and engineering. Building a graphics application entails user-interface work, some amount of modeling (i.e., making a representation of a shape), and rendering (the making of pictures of shapes). Rendering is often done via a “pipeline” of operations; one can use this pipeline, without understanding every detail, to make many useful programs. But if we want to render things accurately, we need to start from a physical understanding of light. Knowing just a few properties of light prepares us to make a first approximate renderer.
1.1 An Introduction to Computer Graphics
1.1.1 The World of Computer Graphics
1.1.2 Current and Future Application Areas
1.1.3 User-Interface Considerations
1.4 Goals, Resources, and Appropriate Abstractions
1.4.1 Deep Understanding versus Common Practice
1.5 Some Numbers and Orders of Magnitude in Graphics
1.5.1 Light Energy and Photon Arrival Rates
1.5.2 Display Characteristics and Resolution of the Eye
1.5.3 Digital Camera Characteristics
1.5.4 Processing Demands of Complex Applications
1.6.1 Texture Mapping and Approximation
1.6.2 The More Detailed Graphics Pipeline
1.7 Relationship of Graphics to Art, Design, and Perception
1.9 Polygon Drawing As a Black Box
1.10 Interaction in Graphics Systems
1.11 Different Kinds of Graphics Applications
1.12 Different Kinds of Graphics Packages
1.13 Building Blocks for Realistic Rendering: A Brief Overview
1.13.5 The Human Visual System
1.13.7 Integration and Sampling
1.14 Learning Computer Graphics
2 Introduction to 2D Graphics Using WPF
A graphics platform acts as the intermediary between the application and the underlying graphics hardware, providing a layer of abstraction to shield the programmer from the details of driving the graphics processor. As CPUs and graphics peripherals have increased in speed and memory capabilities, the feature sets of graphics platforms have evolved to harness new hardware features and to shoulder more of the application development burden. After a brief overview of the evolution of 2D platforms, we explore a modern package (Windows Presentation Foundation), showing how to construct an animated 2D scene by creating and manipulating a simple hierarchical model. WPF’s declarative XML-based syntax, and the basic techniques of scene specification, will carry over to the presentation of WPF’s 3D support in Chapter 6.
2.2 Overview of the 2D Graphics Pipeline
2.3 The Evolution of 2D Graphics Platforms
2.3.1 From Integer to Floating-Point Coordinates
2.3.2 Immediate-Mode versus Retained-Mode Platforms
2.3.3 Procedural versus Declarative Specification
2.4 Specifying a 2D Scene Using WPF
2.4.1 The Structure of an XAML Application
2.4.2 Specifying the Scene via an Abstract Coordinate System
2.4.3 The Spectrum of Coordinate-System Choices
2.4.4 The WPF Canvas Coordinate System
2.4.5 Using Display Transformations
2.4.6 Creating and Using Modular Templates
2.5 Dynamics in 2D Graphics Using WPF
2.5.1 Dynamics via Declarative Animation
2.5.2 Dynamics via Procedural Code
2.6 Supporting a Variety of Form Factors
2.7 Discussion and Further Reading
3 An Ancient Renderer Made Modern
We describe a software implementation of an idea shown by Dürer. Doing so lets us create a perspective rendering of a cube, and introduces the notions of transforming meshes by transforming vertices, clipping, and multiple coordinate systems. We also encounter the need for visible surface determination and for lighting computations.
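To make the key step concrete, here is a minimal sketch — ours, not the chapter's program — of perspective projection with the eye at the origin and the picture plane at z = 1: each vertex of a cube maps to the point where its sight line pierces the plane. All names here are illustrative.

```cpp
#include <array>
#include <cstdio>

// A point in 3-space. The eye sits at the origin looking along +z, and
// the picture plane is z = 1, as in a simple Dürer-style setup.
struct Point3 { double x, y, z; };

// Perspective projection: intersect the sight line from the eye through
// p with the plane z = 1. Assumes p.z > 0 (in front of the eye).
void project(const Point3& p, double& u, double& v) {
    u = p.x / p.z;
    v = p.y / p.z;
}

int main() {
    const double side[] = {-0.5, 0.5}, depth[] = {3.0, 4.0};
    std::array<Point3, 8> cube;
    int i = 0;
    for (double x : side)
        for (double y : side)
            for (double z : depth)
                cube[i++] = {x, y, z};  // a unit cube spanning z in [3, 4]

    for (const Point3& p : cube) {
        double u, v;
        project(p, u, v);
        std::printf("(%5.2f, %5.2f, %4.2f) -> (%6.3f, %6.3f)\n",
                    p.x, p.y, p.z, u, v);
    }
}
```

Note how the four far vertices (z = 4) land closer to the image center than the near ones — exactly the foreshortening Dürer's device captures mechanically.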
3.6 Discussion and Further Reading
4 A 2D Graphics Test Bed
We want you to rapidly test new ideas as you learn them. For most ideas in graphics, even 3D graphics, a simple 2D program suffices. We describe a test bed, a simple program that’s easy to modify to experiment with new ideas, and show how it can be used to study corner cutting on polygons. A similar 3D program is available on the book’s website.
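As a flavor of the kind of experiment the test bed supports, here is a standalone sketch (in C++ rather than the test bed's own language) of one corner-cutting scheme, Chaikin's: each round replaces every edge of a closed polygon with two points placed a quarter and three-quarters of the way along it.

```cpp
#include <cstdio>
#include <vector>

struct Point2 { double x, y; };

// One round of corner cutting on a closed polygon (Chaikin's scheme):
// each edge contributes the points 1/4 and 3/4 of the way along it.
std::vector<Point2> cutCorners(const std::vector<Point2>& poly) {
    std::vector<Point2> out;
    const size_t n = poly.size();
    for (size_t i = 0; i < n; ++i) {
        const Point2& a = poly[i];
        const Point2& b = poly[(i + 1) % n];
        out.push_back({0.75 * a.x + 0.25 * b.x, 0.75 * a.y + 0.25 * b.y});
        out.push_back({0.25 * a.x + 0.75 * b.x, 0.25 * a.y + 0.75 * b.y});
    }
    return out;
}

int main() {
    // A square rounds toward a smooth closed curve after a few cuts.
    std::vector<Point2> square = {{0, 0}, {1, 0}, {1, 1}, {0, 1}};
    for (int round = 0; round < 3; ++round) square = cutCorners(square);
    for (const Point2& p : square) std::printf("%.4f %.4f\n", p.x, p.y);
}
```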
4.2.3 The Structure of a Test-Bed-Based Program
4.6 An Application of the Test Bed
5 An Introduction to Human Visual Perception
The human visual system is the ultimate “consumer” of most imagery produced by graphics. As such, it provides design constraints and goals for graphics systems. We introduce the visual system and some of its characteristics, and relate them to engineering decisions in graphics.
The visual system is both tolerant of bad data (which is why the visual system can make sense of a child’s stick-figure drawing), and at the same time remarkably sensitive. Understanding both aspects helps us better design graphics algorithms and systems. We discuss basic visual processing, constancy, and continuation, and how different kinds of visual cues help our brains form hypotheses about the world. We discuss primarily static perception of shape, leaving discussion of the perception of motion to Chapter 35, and of the perception of color to Chapter 28.
5.3.1 Gross Physiology of the Eye
5.4 Constancy and Its Influences
5.7 Discussion and Further Reading
6 Introduction to Fixed-Function 3D Graphics and Hierarchical Modeling
The process of constructing a 3D scene to be rendered using the classic fixed-function graphics pipeline is composed of distinct steps such as specifying the geometry of components, applying surface materials to components, combining components to form complex objects, and placing lights and cameras. WPF provides an environment suitable for learning about and experimenting with this classic pipeline. We first present the essentials of 3D scene construction, and then further extend the discussion to introduce hierarchical modeling.
6.1.2 Approximating the Physics of the Interaction of Light with Objects
6.1.3 High-Level Overview of WPF 3D
6.2 Introducing Mesh and Lighting Specification
6.2.2 Producing More Realistic Lighting
6.2.3 “Lighting” versus “Shading” in Fixed-Function Rendering
6.3 Curved-Surface Representation and Rendering
6.3.1 Interpolated Shading (Gouraud)
6.3.2 Specifying Surfaces to Achieve Faceted and Smooth Effects
6.4.2 Texturing via Stretching
6.6 Hierarchical Modeling Using a Scene Graph
6.6.1 Motivation for Modular Modeling
6.6.2 Top-Down Design of Component Hierarchy
6.6.3 Bottom-Up Construction and Composition
7 Essential Mathematics and the Geometry of 2-Space and 3-Space
We review basic facts about equations of lines and planes, areas, convexity, and parameterization. We discuss inside-outside testing for points in polygons. We describe barycentric coordinates, and present the notational conventions that are used throughout the book, including the notation for functions. We present a graphics-centric view of vectors, and introduce the notion of covectors.
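For instance, inside-outside testing for a triangle can be done with barycentric coordinates, computed as ratios of signed areas. A minimal sketch (our illustration, not the book's code):

```cpp
#include <cstdio>

struct Point2 { double x, y; };

// Signed area of triangle (a, b, c): positive if counterclockwise.
double signedArea(Point2 a, Point2 b, Point2 c) {
    return 0.5 * ((b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y));
}

// Barycentric coordinates of p with respect to triangle (a, b, c):
// ratios of subtriangle areas to the whole. They sum to 1, and p lies
// inside exactly when all three are nonnegative.
void barycentric(Point2 p, Point2 a, Point2 b, Point2 c,
                 double& alpha, double& beta, double& gamma) {
    double area = signedArea(a, b, c);  // assumed nonzero (nondegenerate)
    alpha = signedArea(p, b, c) / area;
    beta  = signedArea(a, p, c) / area;
    gamma = signedArea(a, b, p) / area;
}

int main() {
    Point2 a{0, 0}, b{4, 0}, c{0, 4};
    double al, be, ga;
    barycentric({1, 1}, a, b, c, al, be, ga);
    bool inside = (al >= 0 && be >= 0 && ga >= 0);
    std::printf("alpha=%.3f beta=%.3f gamma=%.3f inside=%d\n",
                al, be, ga, inside);
}
```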
7.4.1 Inverse Tangent Functions
7.6.2 How to Think About Vectors
7.6.8 An Implicit Description of a Line in a Plane
7.7.1 Parametric-Parametric Line Intersection
7.7.2 Parametric-Implicit Line Intersection
7.8 Intersections, More Generally
7.9.3 Half-Planes and Triangles
7.10.2 Interiors of Nonsimple Polygons
7.10.3 The Signed Area of a Plane Polygon: Divide and Conquer
7.10.4 Normal to a Polygon in Space
7.10.5 Signed Areas for More General Polygons
7.10.7 Analogs of Barycentric Coordinates
8 A Simple Way to Describe Shape in 2D and 3D
The triangle mesh is a fundamental structure in graphics, widely used for representing shape. We describe 1D meshes (polylines) in 2D and generalize to 2D meshes in 3D. We discuss several representations for triangle meshes, simple operations on meshes such as computing the boundary, and determining whether a mesh is oriented.
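A minimal sketch of one common representation — the shared-vertex (indexed) triangle mesh — together with a simple operation on it: finding boundary edges, which are exactly the edges used by a single triangle. The types and names here are our own illustration.

```cpp
#include <algorithm>
#include <array>
#include <cstdio>
#include <map>
#include <utility>
#include <vector>

// A shared-vertex ("indexed") triangle mesh: a vertex table plus
// triangles that each store three indices into it.
struct Mesh {
    std::vector<std::array<float, 3>> vertices;
    std::vector<std::array<int, 3>>   triangles;
};

// Edges used by exactly one triangle form the mesh boundary.
std::vector<std::pair<int, int>> boundaryEdges(const Mesh& m) {
    std::map<std::pair<int, int>, int> count;  // undirected edge -> #uses
    for (const auto& t : m.triangles)
        for (int e = 0; e < 3; ++e) {
            int i = t[e], j = t[(e + 1) % 3];
            count[{std::min(i, j), std::max(i, j)}]++;
        }
    std::vector<std::pair<int, int>> result;
    for (const auto& [edge, n] : count)
        if (n == 1) result.push_back(edge);
    return result;
}

int main() {
    // Two triangles sharing edge (0,2); the other four edges are boundary.
    Mesh m;
    m.vertices  = {{0, 0, 0}, {1, 0, 0}, {1, 1, 0}, {0, 1, 0}};
    m.triangles = {{0, 1, 2}, {0, 2, 3}};
    for (auto [i, j] : boundaryEdges(m))
        std::printf("boundary edge %d-%d\n", i, j);
}
```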
8.2.2 A Data Structure for 1D Meshes
8.3.3 Memory Requirements for Mesh Structures
8.4 Discussion and Further Reading
9 Functions on Meshes
A real-valued function defined at the vertices of a mesh can be extended linearly across each face by barycentric interpolation to define a function on the entire mesh. Such extensions are used in texture mapping, for instance. By considering what happens when a single vertex value is 1, and all others are 0, we see that all our piecewise-linear extensions are combinations of certain basic piecewise-linear mesh functions; replacing these basis functions with other, smoother functions can lead to smoother interpolation of values.
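A sketch of the basic operation, assuming barycentric coordinates are already in hand (the names are ours):

```cpp
#include <cstdio>

// Piecewise-linear extension of per-vertex values: inside a triangle,
// the value at a point with barycentric coordinates (alpha, beta, gamma)
// is the matching combination of the three vertex values. Setting one
// vertex value to 1 and the rest to 0 recovers the "hat" basis function
// for that vertex.
double interpolate(double fa, double fb, double fc,
                   double alpha, double beta, double gamma) {
    return alpha * fa + beta * fb + gamma * fc;
}

int main() {
    // Value at the centroid (all coordinates 1/3) of a face whose
    // vertex values are 0, 3, and 6: their average, 3.
    std::printf("%.3f\n", interpolate(0, 3, 6, 1.0/3, 1.0/3, 1.0/3));
}
```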
9.2 Code for Barycentric Interpolation
9.2.1 A Different View of Linear Interpolation
9.3 Limitations of Piecewise Linear Extension
9.3.1 Dependence on Mesh Structure
9.4.2 Which Interpolation Method Should I Really Use?
9.5 Functions Multiply Defined at Vertices
9.6 Application: Texture Mapping
9.6.1 Assignment of Texture Coordinates
9.6.2 Details of Texture Mapping
9.6.3 Texture-Mapping Problems
10 Transformations in Two Dimensions
Linear and affine transformations are the building blocks of graphics. They occur in modeling, in rendering, in animation, and in just about every other context imaginable. They are the natural tools for transforming objects represented as meshes, because they preserve the mesh structure perfectly. We introduce linear and affine transformations in the plane, because most of the interesting phenomena are present there, the exception being the behavior of rotations in three dimensions, which we discuss in Chapter 11. We also discuss the relationship of transformations to matrices, the use of homogeneous coordinates, the uses of hierarchies of transformations in modeling, and the idea of coordinate “frames.”
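A minimal sketch of the homogeneous-coordinate idea (our illustration): 2D points become triples (x, y, 1), translation becomes a 3 × 3 matrix, and composing transformations becomes matrix multiplication.

```cpp
#include <array>
#include <cmath>
#include <cstdio>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Composing transformations is matrix multiplication (read right to left).
Mat3 mul(const Mat3& A, const Mat3& B) {
    Mat3 C{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k) C[i][j] += A[i][k] * B[k][j];
    return C;
}

// In homogeneous coordinates a 2D point is (x, y, 1), so translation,
// which is not linear on (x, y), still becomes a single 3x3 matrix.
Mat3 translate(double tx, double ty) {
    return {{{1, 0, tx}, {0, 1, ty}, {0, 0, 1}}};
}

Mat3 rotate(double theta) {
    double c = std::cos(theta), s = std::sin(theta);
    return {{{c, -s, 0}, {s, c, 0}, {0, 0, 1}}};
}

int main() {
    const double pi = std::acos(-1.0);
    // Rotation by 90 degrees about the point (1, 0): conjugate the
    // origin-centered rotation by translations.
    Mat3 M = mul(translate(1, 0), mul(rotate(pi / 2), translate(-1, 0)));
    double p[3] = {2, 0, 1}, q[3] = {0, 0, 0};
    for (int i = 0; i < 3; ++i)
        for (int k = 0; k < 3; ++k) q[i] += M[i][k] * p[k];
    std::printf("(%g, %g)\n", q[0], q[1]);  // expect (1, 1)
}
```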
10.3 Important Facts about Transformations
10.3.1 Multiplication by a Matrix Is a Linear Transformation
10.3.2 Multiplication by a Matrix Is the Only Linear Transformation
10.3.3 Function Composition and Matrix Multiplication Are Related
10.3.4 Matrix Inverse and Inverse Functions Are Related
10.3.5 Finding the Matrix for a Transformation
10.3.6 Transformations and Coordinate Systems
10.3.7 Matrix Properties and the Singular Value Decomposition
10.3.9 The SVD and Pseudoinverses
10.6 Why Use 3 × 3 Matrices Instead of a Matrix and a Vector?
10.7 Windowing Transformations
10.8 Building 3D Transformations
10.9 Another Example of Building a 2D Transformation
10.11 Application: Rendering from a Scene Graph
10.11.1 Coordinate Changes in Scene Graphs
10.12 Transforming Vectors and Covectors
10.12.1 Transforming Parametric Lines
10.13 More General Transformations
10.14 Transformations versus Interpolation
10.15 Discussion and Further Reading
11 Transformations in Three Dimensions
Transformations in 3-space are analogous to those in the plane, except for rotations: In the plane, we can swap the order in which we perform two rotations about the origin without altering the result; in 3-space, we generally cannot. We discuss the group of rotations in 3-space, the use of quaternions to represent rotations, interpolating between quaternions, and a more general technique for interpolating among any sequence of transformations, provided they are “close enough” to one another. Some of these techniques are applied to user-interface designs in Chapter 21.
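A minimal sketch (ours) of the quaternion machinery the chapter develops: build a unit quaternion from an axis and angle, then rotate a vector by conjugation, q v q*.

```cpp
#include <cmath>
#include <cstdio>

// A unit quaternion w + xi + yj + zk representing a 3D rotation.
struct Quat { double w, x, y, z; };

// Quaternion for a rotation by `angle` radians about a unit axis.
Quat fromAxisAngle(double ax, double ay, double az, double angle) {
    double s = std::sin(angle / 2);
    return {std::cos(angle / 2), ax * s, ay * s, az * s};
}

// Hamilton product of two quaternions.
Quat mul(Quat a, Quat b) {
    return {a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
            a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
            a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
            a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w};
}

// Rotate vector v by q: form q * (0, v) * conjugate(q).
void rotate(Quat q, double v[3]) {
    Quat p{0, v[0], v[1], v[2]};
    Quat conj{q.w, -q.x, -q.y, -q.z};
    Quat r = mul(mul(q, p), conj);
    v[0] = r.x; v[1] = r.y; v[2] = r.z;
}

int main() {
    const double pi = std::acos(-1.0);
    Quat q = fromAxisAngle(0, 0, 1, pi / 2);  // 90 degrees about z
    double v[3] = {1, 0, 0};
    rotate(q, v);
    std::printf("(%.3f, %.3f, %.3f)\n", v[0], v[1], v[2]);  // ~(0, 1, 0)
}
```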
11.1.1 Projective Transformation Theorems
11.2.1 Analogies between Two and Three Dimensions
11.2.3 Axis-Angle Description of a Rotation
11.2.4 Finding an Axis and Angle from a Rotation Matrix
11.2.5 Body-Centered Euler Angles
11.2.6 Rotations and the 3-Sphere
11.2.7 Stability of Computations
11.3 Comparing Representations
11.4 Rotations versus Rotation Specifications
11.5 Interpolating Matrix Transformations
11.6 Virtual Trackball and Arcball
11.7 Discussion and Further Reading
12 A 2D and 3D Transformation Library for Graphics
Because we represent so many things in graphics with arrays of three floating-point numbers (RGB colors, locations in 3-space, vectors in 3-space, covectors in 3-space, etc.) it’s very easy to make conceptual mistakes in code, performing operations (like adding the coordinates of two points) that don’t make sense. We present a sample mathematics library that you can use to avoid such problems. While such a library may have no place in high-performance graphics, where the overhead of type checking would be unreasonable, it can be very useful in the development of programs in their early stages.
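A minimal sketch of the idea (our illustration, not the book's library): give points and vectors distinct types and define only the operations that make geometric sense, so the compiler rejects the rest.

```cpp
#include <cstdio>

// Distinct types for points and vectors let the compiler reject
// operations with no geometric meaning, such as point + point.
struct Vector3 { double x, y, z; };
struct Point3  { double x, y, z; };

Vector3 operator-(Point3 a, Point3 b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Point3  operator+(Point3 p, Vector3 v) { return {p.x + v.x, p.y + v.y, p.z + v.z}; }
Vector3 operator+(Vector3 a, Vector3 b){ return {a.x + b.x, a.y + b.y, a.z + b.z}; }
// Deliberately NOT defined: Point3 + Point3, so `p + q` fails to compile.

int main() {
    Point3 p{1, 2, 3}, q{4, 6, 8};
    Vector3 d = q - p;        // point - point = vector: fine
    Point3 r = p + d;         // point + vector = point: fine
    // Point3 bad = p + q;    // error: no operator+(Point3, Point3)
    std::printf("(%g, %g, %g)\n", r.x, r.y, r.z);
}
```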
12.4 Specification of Transformations
12.5.1 Projective Transformations
12.7 Associated Transformations
13 Camera Specifications and Transformations
To convert a model of a 3D scene to a 2D image seen from a particular point of view, we have to specify the view precisely. The rendering process turns out to be particularly simple if the camera is at the origin, looking along a coordinate axis, and if the field of view is 90° in each direction. We therefore transform the general problem to the more specific one. We discuss how the virtual camera is specified, and how we transform any rendering problem to one in which the camera is in a standard position with standard characteristics. We also discuss the specification of parallel (as opposed to perspective) views.
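A minimal sketch (ours) of the first step: building an orthonormal camera frame from an eye point, a look-at point, and an "up" hint, and expressing a world point in that frame.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };
Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
Vec3 normalize(Vec3 v) {
    double n = std::sqrt(dot(v, v));
    return {v.x / n, v.y / n, v.z / n};
}

// Express a world-space point in a camera frame built from an eye
// point, a look-at point, and an "up" hint: the standard first step in
// reducing an arbitrary view to the canonical one.
Vec3 worldToCamera(Vec3 p, Vec3 eye, Vec3 at, Vec3 up) {
    Vec3 back   = normalize(sub(eye, at));   // camera looks along -back
    Vec3 right  = normalize(cross(up, back));
    Vec3 trueUp = cross(back, right);
    Vec3 d = sub(p, eye);
    return {dot(d, right), dot(d, trueUp), dot(d, back)};
}

int main() {
    Vec3 p = worldToCamera({0, 0, 0}, {0, 0, 5}, {0, 0, 0}, {0, 1, 0});
    std::printf("(%g, %g, %g)\n", p.x, p.y, p.z);  // look-at point: (0, 0, -5)
}
```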
13.3 Perspective Camera Specification
13.4 Building Transformations from a View Specification
13.5 Camera Transformations and the Rasterizing Renderer Pipeline
13.7 Camera Transformations and the Modeling Hierarchy
13.8.1 Aspect Ratio and Field of View
13.9 Discussion and Further Reading
14 Standard Approximations and Representations
The real world contains too much detail to simulate efficiently from first principles of physics and geometry. Models make graphics computationally tractable but introduce restrictions and errors. We explore some pervasive approximations and their limitations. In many cases, we have a choice between competing models with different properties.
14.2 Evaluating Representations
14.2.1 The Value of Measurement
14.4 Building Blocks of Ray Optics
14.5 Large-Scale Object Geometry
14.5.3 Spline Patches and Subdivision Surfaces
14.6.2 Billboards and Impostors
14.9.1 Scattering Functions (BSDFs)
14.10 Translucency and Blending
14.11.2 Direct and Indirect Light
14.11.3 Practical and Artistic Considerations
14.11.4 Rectangular Area Light
14.11.9 A Unified Point-Light Model
15 Ray Casting and Rasterization
A 3D renderer identifies the surface that covers each pixel of an image, and then executes some shading routine to compute the value of the pixel. We introduce a set of coverage algorithms and some straw-man shading routines, and revisit the graphics pipeline abstraction. These are practical design points arising from general principles of geometry and processor architectures.
For coverage, we derive the ray-casting and rasterization algorithms and then build the complete source code for a renderer on top of each. This requires graphics-specific debugging techniques such as visualizing intermediate results. Architecture-aware optimizations dramatically increase the performance of these programs, albeit by limiting abstraction. Alternatively, we can move abstractions above the pipeline to enable dedicated graphics hardware. APIs abstracting graphics processing units (GPUs) enable efficient rasterization implementations. We port our renderer to the programmable shading framework common to such APIs.
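A minimal sketch (ours, far simpler than the chapter's renderer) of the geometric core of ray casting — one standard formulation of the ray-triangle intersection test, due to Möller and Trumbore:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };
Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}

// Möller-Trumbore ray-triangle intersection. Returns true and the ray
// parameter t when orig + t*dir hits triangle (v0, v1, v2).
bool intersect(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, double& t) {
    const double eps = 1e-9;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    double det = dot(e1, p);
    if (std::abs(det) < eps) return false;  // ray parallel to triangle
    double inv = 1.0 / det;
    Vec3 s = sub(orig, v0);
    double u = dot(s, p) * inv;             // first barycentric coordinate
    if (u < 0 || u > 1) return false;
    Vec3 q = cross(s, e1);
    double v = dot(dir, q) * inv;           // second barycentric coordinate
    if (v < 0 || u + v > 1) return false;
    t = dot(e2, q) * inv;
    return t > eps;                          // hit must be in front of origin
}

int main() {
    double t;
    bool hit = intersect({0, 0, 0}, {0, 0, -1},
                         {-1, -1, -5}, {1, -1, -5}, {0, 1, -5}, t);
    std::printf("hit=%d t=%g\n", hit, t);  // expect hit=1, t=5
}
```

A ray caster loops this test over all pixels and all triangles; a rasterizer inverts the loop order, which is where the two algorithms' different performance characters come from.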
15.2 High-Level Design Overview
15.2.3 Ray Casting: Pixels First
15.2.4 Rasterization: Triangles First
15.4.2 Sampling Framework: Intersect and Shade
15.4.3 Ray-Triangle Intersection
15.6.2 Bounding-Box Optimization
15.6.3 Clipping to the Near Plane
15.6.6 Beyond the Bounding Box
15.7 Rendering with a Rasterization API
15.8 Performance and Optimization
15.8.1 Abstraction Considerations
15.8.2 Architectural Considerations
15.8.3 Early-Depth-Test Example
15.8.4 When Early Optimization Is Good
15.8.5 Improving the Asymptotic Bound
16 Survey of Real-Time 3D Graphics Platforms
There is great diversity in the feature sets and design goals among 3D graphics platforms. Some are thin layers that bring the application as close to the hardware as possible for optimum performance and control; others provide a thick layer of data structures for the storage and manipulation of complex scenes; and at the top of the power scale are the game-development environments that additionally provide advanced features like physics and joint/skin simulation. Platforms supporting games render with the highest possible speed to ensure interactivity, while those used by the special effects industry sacrifice speed for the utmost in image quality. We present a broad overview of modern 3D platforms with an emphasis on the design goals behind the variations.
16.1.1 Evolution from Fixed-Function to Programmable Rendering Pipeline
16.2 The Programmer’s Model: OpenGL Compatibility (Fixed-Function) Profile
16.2.1 OpenGL Program Structure
16.2.2 Initialization and the Main Loop
16.2.7 Putting It All Together—Part 1: Static Frame
16.2.8 Putting It All Together—Part 2: Dynamics
16.3 The Programmer’s Model: OpenGL Programmable Pipeline
16.3.1 Abstract View of a Programmable Pipeline
16.3.2 The Nature of the Core API
16.4 Architectures of Graphics Applications
16.4.2 The Application-Model-to-IM-Platform Pipeline (AMIP)
16.4.4 Graphics Application Platforms
17 Image Representation and Manipulation
Much of graphics produces images as output. We describe how images are stored, what information they can contain, and what they can represent, along with the importance of knowing the precise meaning of the pixels in an image file. We show how to composite images (i.e., blend, overlay, and otherwise merge them) using coverage maps, and how to simply represent images at multiple scales with MIP mapping.
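A minimal sketch (ours) of the central compositing operation, the Porter-Duff "over" on premultiplied-alpha pixels:

```cpp
#include <cstdio>

// A pixel with premultiplied alpha: (r, g, b) already scaled by coverage a.
struct Pixel { double r, g, b, a; };

// The Porter-Duff "over" operator for premultiplied pixels:
// result = foreground + (1 - foreground.a) * background.
Pixel over(Pixel f, Pixel b) {
    double k = 1.0 - f.a;
    return {f.r + k * b.r, f.g + k * b.g, f.b + k * b.b, f.a + k * b.a};
}

int main() {
    Pixel red50 = {0.5, 0.0, 0.0, 0.5};  // 50%-coverage red, premultiplied
    Pixel white = {1.0, 1.0, 1.0, 1.0};  // opaque white
    Pixel out = over(red50, white);
    // expect r=1.00 g=0.50 b=0.50 a=1.00 (a pink)
    std::printf("r=%.2f g=%.2f b=%.2f a=%.2f\n", out.r, out.g, out.b, out.a);
}
```

Premultiplication is what makes the operator a single linear formula; the chapter explains why the unpremultiplied form is messier.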
17.2.1 The Information Stored in an Image
17.3.1 Choosing an Image Format
17.4.1 The Meaning of a Pixel During Image Compositing
17.4.3 Simplifying Compositing
17.4.4 Other Compositing Operations
17.4.5 Physical Units and Compositing
17.7 Discussion and Further Reading
18 Images and Signal Processing
The pattern of light arriving at a camera sensor can be thought of as a function defined on a 2D rectangle, the value at each point being the light energy density arriving there. The resultant image is an array of values, each one arrived at by some sort of averaging of the input function. The relationship between these two functions—one defined on a continuous 2D rectangle, the other defined on a rectangular grid of points—is a deep one. We study the relationship with the tools of Fourier analysis, which lets us understand what parts of the incoming signal can be accurately captured by the discrete signal. This understanding helps us avoid a wide range of image problems, including “jaggies” (ragged edges). It’s also the basis for understanding other phenomena in graphics, such as moiré patterns in textures.
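A minimal sketch (ours) of the discrete side of this story: 1D convolution, the weighted-averaging operation that relates a filter to the samples it produces.

```cpp
#include <cstdio>
#include <vector>

// Discrete 1D convolution: each output sample is a weighted average of
// the input, with weights given by the filter. Samples outside the
// signal are treated as zero.
std::vector<double> convolve(const std::vector<double>& signal,
                             const std::vector<double>& filter) {
    std::vector<double> out(signal.size() + filter.size() - 1, 0.0);
    for (size_t i = 0; i < signal.size(); ++i)
        for (size_t j = 0; j < filter.size(); ++j)
            out[i + j] += signal[i] * filter[j];
    return out;
}

int main() {
    // Smoothing a step edge with a box filter of width 3: the sharp
    // transition becomes a ramp, i.e., high frequencies are attenuated.
    std::vector<double> step = {0, 0, 0, 1, 1, 1};
    std::vector<double> box = {1.0 / 3, 1.0 / 3, 1.0 / 3};
    for (double v : convolve(step, box)) std::printf("%.3f ", v);
    std::printf("\n");
}
```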
18.1.2 Important Terms, Assumptions, and Notation
18.4 Properties of Convolution
18.5 Convolution-like Computations
18.9 Mathematical Considerations
18.9.1 Frequency-Based Synthesis and Analysis
18.10 The Fourier Transform: Definitions
18.11 The Fourier Transform of a Function on an Interval
18.11.1 Sampling and Band Limiting in an Interval
18.12 Generalizations to Larger Intervals and All of R
18.13 Examples of Fourier Transforms
18.13.2 The Transform of a Box Is a Sinc
18.13.3 An Example on an Interval
18.14 An Approximation of Sampling
18.15 Examples Involving Limits
18.15.1 Narrow Boxes and the Delta Function
18.15.2 The Comb Function and Its Transform
18.16 The Inverse Fourier Transform
18.17 Properties of the Fourier Transform
18.18.2 Explaining Replication in the Spectrum
18.19 Reconstruction and Band Limiting
18.21 Discussion and Further Reading
19 Enlarging and Shrinking Images
We apply the ideas of the previous two chapters to a concrete example—enlarging and shrinking of images—to illustrate their use in practice. We see that when an image, conventionally represented, is shrunk, problems will arise unless certain high-frequency information is removed before the shrinking process.
19.4 Making the Algorithms Practical
19.5 Finite-Support Approximations
19.5.1 Practical Band Limiting
19.6 Other Image Operations and Efficiency
19.7 Discussion and Further Reading
20 Textures and Texture Mapping
Texturing, and its variants, add visual richness to models without introducing geometric complexity. We discuss basic texturing and its implementation in software, and some of its variants, like bump mapping and displacement mapping, and the use of 1D and 3D textures. We also discuss the creation of texture correspondences (assigning texture coordinates to points on a mesh) and of the texture images themselves, through techniques as varied as “painting the model” and probabilistic texture-synthesis algorithms.
20.3 Building Tangent Vectors from a Parameterization
20.4 Codomains for Texture Maps
20.5 Assigning Texture Coordinates
20.7 Sampling, Aliasing, Filtering, and Reconstruction
20.8.3 Reaction-Diffusion Textures
20.9 Data-Driven Texture Synthesis
20.10 Discussion and Further Reading
21 Interaction Techniques
Certain interaction techniques use a substantial amount of the mathematics of transformations, and therefore are more suitable for a book like ours than one that concentrates on the design of the interaction itself, and the human factors associated with that design. We illustrate these ideas with three 3D manipulators—the arcball, trackball, and Unicam—and with a multitouch interface for manipulating images.
21.2 User Interfaces and Computer Graphics
21.2.2 Interaction Event Handling
21.3 Multitouch Interaction for 2D Manipulation
21.4 Mouse-Based Object Manipulation in 3D
21.4.1 The Trackball Interface
21.5 Mouse-Based Camera Manipulation: Unicam
21.6 Choosing the Best Interface
21.7.1 First-Person-Shooter Controls
21.7.2 3ds Max Transformation Widget
21.7.3 Photoshop’s Free-Transform Mode
21.7.6 Grabcut and Selection by Strokes
21.8 Discussion and Further Reading
22 Splines and Subdivision Curves
Splines are, informally, curves that pass through or near a sequence of “control points.” They’re used to describe shapes, and to control the motion of objects in animations, among other things. Splines make sense not only in the plane, but also in 3-space and in 1-space, where they provide a means of interpolating a sequence of values with various degrees of continuity. Splines, as a modeling tool in graphics, have been in part supplanted by subdivision curves (which we saw in the form of corner-cutting curves in Chapter 4) and subdivision surfaces. The two classes—splines and subdivision—are closely related. We demonstrate this for curves in this chapter; a similar approach works for surfaces.
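A minimal sketch (ours) of one concrete spline, a uniform Catmull-Rom segment in 1-space: it interpolates its two middle control values, with tangents determined by the neighboring values.

```cpp
#include <cstdio>

// One segment of a (uniform) Catmull-Rom spline: interpolates p1 at
// t = 0 and p2 at t = 1, with tangents set by the neighbors p0 and p3.
double catmullRom(double p0, double p1, double p2, double p3, double t) {
    return 0.5 * ((2 * p1) +
                  (-p0 + p2) * t +
                  (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t +
                  (-p0 + 3 * p1 - 3 * p2 + p3) * t * t * t);
}

int main() {
    // Values between control values 1 and 4, with neighbors 0 and 9
    // shaping the tangents. At t=0 and t=1 the spline passes through
    // the control values exactly.
    for (double t = 0.0; t <= 1.0; t += 0.25)
        std::printf("t=%.2f  f=%.3f\n", t, catmullRom(0, 1, 4, 9, t));
}
```

The same formula applied coordinate-wise gives Catmull-Rom curves in 2-space or 3-space.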
22.3 Fitting a Curve Segment between Two Curves: The Hermite Curve
22.4 Gluing Together Curves and the Catmull-Rom Spline
22.4.1 Generalization of Catmull-Rom Splines
22.4.2 Applications of Catmull-Rom Splines
22.7 Discussion and Further Reading
23 Splines and Subdivision Surfaces
Spline surfaces and subdivision surfaces are natural generalizations of spline and subdivision curves. Surfaces are built from rectangular patches, and when these meet four at a vertex, the generalization is reasonably straightforward. At vertices where the degree is not four, certain challenges arise, and dealing with these “exceptional vertices” requires care. Just as in the case of curves, subdivision surfaces, away from exceptional vertices, turn out to be identical to spline surfaces. We discuss spline patches, Catmull-Clark subdivision, other subdivision approaches, and the problems of exceptional points.
23.3 Catmull-Clark Subdivision Surfaces
23.4 Modeling with Subdivision Surfaces
23.5 Discussion and Further Reading
24 Implicit Representations of Shape
Implicit curves are defined as the level set of some function on the plane; on a weather map, the isotherm lines constitute implicit curves. By choosing particular functions, we can make the shapes of these curves controllable. The same idea applies in space to define implicit surfaces. In each case, it’s not too difficult to convert an implicit representation to a mesh representation that approximates the surface. But the implicit representation itself has many advantages. Finding a ray-shape intersection with an implicit surface reduces to root finding, for instance, and it’s easy to combine implicit shapes with operators that result in new shapes without sharp corners.
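A minimal sketch (ours) of the ray-intersection claim: restrict the implicit function to the ray, bracket a sign change, and find the root — here by simple bisection against a unit sphere.

```cpp
#include <cstdio>

// Implicit unit sphere: f(x, y, z) = x^2 + y^2 + z^2 - 1, zero on the surface.
double f(double x, double y, double z) { return x*x + y*y + z*z - 1.0; }

// Restrict f to a ray; a surface hit is a root in the ray parameter t.
// Here: ray from (0, 0, 3) in direction (0, 0, -1).
double g(double t) { return f(0, 0, 3 - t); }

// Bisection root finding: given g(lo) and g(hi) of opposite signs,
// repeatedly halve the bracket around the root.
double bisect(double lo, double hi) {
    for (int i = 0; i < 60; ++i) {
        double mid = 0.5 * (lo + hi);
        if ((g(lo) < 0) == (g(mid) < 0)) lo = mid; else hi = mid;
    }
    return 0.5 * (lo + hi);
}

int main() {
    // g(0) = 8 > 0 and g(3) = -1 < 0, so a sign change brackets the
    // first hit; the ray meets the sphere at z = 1, i.e., t = 2.
    std::printf("t = %.6f (expected 2)\n", bisect(0, 3));
}
```

Production implicit-surface ray tracers use sharper tools (interval arithmetic, Lipschitz bounds, sphere tracing), but the root-finding structure is the same.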
24.4 Representing Implicit Functions
24.4.3 Mathematical Models and Sampled Implicit Representations
24.5 Other Representations of Implicit Functions
24.6 Conversion to Polyhedral Meshes
24.7 Conversion from Polyhedral Meshes to Implicits
24.8 Texturing Implicit Models
24.8.1 Modeling Transformations and Textures
24.9 Ray Tracing Implicit Surfaces
24.10 Implicit Shapes in Animation
24.11 Discussion and Further Reading
25 Meshes
Meshes are a dominant structure in today’s graphics. They serve as approximations to smooth curves and surfaces, and much mathematics from the smooth category can be transferred to work with meshes. Certain special classes of meshes—heightfield meshes, and very regular meshes—support fast algorithms particularly well. We discuss level of detail in the context of meshes, where practical algorithms abound, but also in a larger context. We conclude with some applications.
25.2.1 Triangulated Surfaces and Surfaces with Boundary
25.2.2 Computing and Storing Adjacency
25.4.2 Other Mesh Simplification Approaches
25.5 Mesh Applications 1: Marching Cubes, Mesh Repair, and Mesh Improvement
25.5.1 Marching Cubes Variants
25.5.3 Differential or Laplacian Coordinates
25.5.4 An Application of Laplacian Coordinates
25.6 Mesh Applications 2: Deformation Transfer and Triangle-Order Optimization
25.6.2 Triangle Reordering for Hardware Efficiency
25.7 Discussion and Further Reading
26 Light
We discuss the basic physics of light, starting from blackbody radiation, and the relevance of this physics to computer graphics. In particular, we discuss both the wave and particle descriptions of light, polarization effects, and diffraction. We then discuss the measurement of light, including the various units of measure, and the continuum assumption implicit in these measurements. We focus on radiance, from which all other radiometric terms can be derived through integration, and which is constant along rays in empty space. Because of the dependence on integration, we discuss solid angles and integration over these. Because the radiance field in most scenes is too complex to express in simple algebraic terms, integrals of radiance are almost always computed stochastically, and so we introduce stochastic integration. Finally, we discuss reflectance and transmission, their measurement, and the challenges of computing integrals in which the integrands have substantial variation (like the specular and nonspecular parts of the reflection from a glossy surface).
26.4.3 Bending of Light at an Interface
26.5 Fresnel’s Law and Polarization
26.5.1 Radiance Computations and an “Unpolarized” Form of Fresnel’s Equations
26.6 Modeling Light as a Continuous Flow
26.6.1 A Brief Introduction to Probability Densities
26.6.3 Angles and Solid Angles
26.6.4 Computations with Solid Angles
26.6.5 An Important Change of Variables
26.7.3 Two Radiance Computations
26.7.6 Radiant Power or Radiant Flux
26.10.2 Mirrors, Glass, Reciprocity, and the BRDF
26.10.3 Writing L in Different Ways
26.11 Discussion and Further Reading
27 Materials and Scattering
The appearance of an object made of some material is determined by the interaction of that material with the light in the scene. The interaction (for fairly homogeneous materials) is described by the reflection and transmission distribution functions, at least for at-the-surface scattering. We present several different models for these, ranging from the purely empirical to those incorporating various degrees of physical realism, and observe their limitations as well. We briefly discuss scattering from volumetric media like smoke and fog, and the kind of subsurface scattering that takes place in media like skin and milk. Anticipating our use of these material models in rendering, we also discuss the software interface a material model must support to be used effectively.
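A minimal sketch (ours) of one purely empirical model the chapter covers, Blinn-Phong: a diffuse term proportional to n · l plus a specular lobe that peaks when the half-vector between the light and view directions aligns with the normal.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };
double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 normalize(Vec3 v) {
    double n = std::sqrt(dot(v, v));
    return {v.x / n, v.y / n, v.z / n};
}

// The empirical Blinn-Phong model: diffuse term kd * (n . l) plus a
// specular term ks * (n . h)^shininess, where h is the half-vector.
double blinnPhong(Vec3 n, Vec3 l, Vec3 v,
                  double kd, double ks, double shininess) {
    double diffuse = std::max(0.0, dot(n, l));
    Vec3 h = normalize({l.x + v.x, l.y + v.y, l.z + v.z});
    double specular = std::pow(std::max(0.0, dot(n, h)), shininess);
    return kd * diffuse + ks * specular;
}

int main() {
    Vec3 n{0, 0, 1};                 // surface normal
    Vec3 l = normalize({0, 1, 1});   // direction to the light
    Vec3 v = normalize({0, -1, 1});  // direction to the viewer (mirror side)
    std::printf("%.4f\n", blinnPhong(n, l, v, 0.6, 0.4, 50));
}
```

Empirical models like this are cheap and controllable but obey no physical constraint (e.g., energy conservation) unless carefully normalized — one of the limitations the chapter examines.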
27.3.2 Types of Scattering Models
27.3.3 Physical Constraints on Scattering
27.5 Empirical and Phenomenological Models for Scattering
27.5.3 The Phong and Blinn-Phong Models
27.7 Physical Models for Specular and Diffuse Reflection
27.8 Physically Based Scattering Models
27.8.1 The Fresnel Equations, Revisited
27.8.2 The Torrance-Sparrow Model
27.8.3 The Cook-Torrance Model
27.11 Variations across Surfaces
27.12 Suitability for Human Use
27.14 Software Interface to Material Models
27.15 Discussion and Further Reading
28 Color
While color appears to be a physical property—that book is blue, that sun is yellow—it is, in fact, a perceptual phenomenon, one that’s closely related to the spectral distribution of light, but by no means completely determined by it. We describe the perception of color and its relationship to the physiology of the eye. We introduce various systems for naming, representing, and selecting colors. We also discuss the perception of brightness, which is nonlinear as a function of light energy, and the consequences of this for the efficient representation of varying brightness levels, leading to the notion of gamma, an exponent used in compressing brightness data. We also discuss the gamuts (range of colors) of various devices, and the problems of color interpolation.
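A minimal sketch (ours) of gamma encoding and decoding; the exponent 2.2 below is a typical display value, not a universal constant.

```cpp
#include <cmath>
#include <cstdio>

// Storing intensity raised to 1/gamma spends the limited precision of a
// pixel more evenly across perceived brightness levels; decoding raises
// the stored value back to the gamma power.
double encode(double linear, double gamma) { return std::pow(linear, 1.0 / gamma); }
double decode(double stored, double gamma) { return std::pow(stored, gamma); }

int main() {
    const double gamma = 2.2;  // a typical display exponent
    const double samples[] = {0.05, 0.25, 0.5, 1.0};
    for (double v : samples)
        std::printf("linear %.2f -> encoded %.3f -> decoded %.3f\n",
                    v, encode(v, gamma), decode(encode(v, gamma), gamma));
}
```

Note how dark values (e.g., 0.05) are pushed far up the encoded range, reserving more codes for the shadows where the eye is most sensitive to relative differences.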
28.2 Spectral Distribution of Light
28.3 The Phenomenon of Color Perception and the Physiology of the Eye
28.4.1 The Perception of Brightness
28.6 Conventional Color Wisdom
28.6.2 Purple Isn’t a Real Color
28.6.3 Objects Have Colors; You Can Tell by Looking at Them in White Light
28.6.4 Blue and Green Make Cyan
28.7 Color Perception Strengths and Weaknesses
28.8 Standard Description of Colors
28.8.1 The CIE Description of Color
28.8.2 Applications of the Chromaticity Diagram
28.9.1 Variations and Miscellany
28.12 Encoding of Intensity, Exponents, and Gamma Correction
28.19 Using Color in Computer Graphics
28.20 Discussion and Further Reading
29 Light Transport
Using the formal descriptions of radiance and scattering, we derive the rendering equation, an integral equation characterizing the radiance field, given a description of the illumination, geometry, and materials in the scene.
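In standard notation (the chapter's own symbols may differ in detail), the surface form of the rendering equation says that outgoing radiance is emitted radiance plus scattered incoming radiance:

```latex
L(P, \omega_o) \;=\; L^{e}(P, \omega_o)
  \;+\; \int_{\mathbf{S}^2_{+}(P)} f_s(P, \omega_i, \omega_o)\,
        L^{\mathrm{in}}(P, \omega_i)\,(\omega_i \cdot \mathbf{n}_P)\; d\omega_i
```

Here $\mathbf{S}^2_{+}(P)$ is the hemisphere above $P$, $f_s$ is the scattering (BSDF) term, and the cosine factor $\omega_i \cdot \mathbf{n}_P$ accounts for the spreading of incoming light over the surface. The equation is recursive: the incoming radiance $L^{\mathrm{in}}$ is itself outgoing radiance from other surfaces.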
29.2.1 The Rendering Equation, First Version
29.4 The Rendering Equation for General Scattering
29.4.1 The Measurement Equation
29.7 Solving the Rendering Equation
29.8 The Classification of Light-Transport Paths
29.8.1 Perceptually Significant Phenomena and Light Transport
30 Probability and Monte Carlo Integration
Probabilistic methods are at the heart of modern rendering techniques, especially methods for estimating integrals, because solving the rendering equation involves computing an integral that’s impossible to evaluate exactly in any but the simplest scenes. We review basic discrete probability, generalize to continuum probability, and use this to derive the single-sample estimate for an integral and the importance-weighted single-sample estimate, which we’ll use in the next two chapters.
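A minimal sketch (ours) of the importance-weighted estimate: draw X from a density p and average f(X)/p(X). Here we integrate f(x) = x² over [0, 1] (exact value 1/3) using the uniform density p(x) = 1.

```cpp
#include <cstdio>
#include <random>

// Importance-weighted Monte Carlo estimate of an integral: each draw
// X ~ p contributes the single-sample estimate f(X) / p(X), and the
// average of many such estimates converges to the integral.
int main() {
    std::mt19937 rng(12345);
    std::uniform_real_distribution<double> uniform(0.0, 1.0);

    const int n = 100000;
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double x = uniform(rng);  // sample X ~ p
        double fx = x * x;        // f(X)
        double px = 1.0;          // p(X) for the uniform density on [0,1]
        sum += fx / px;           // the single-sample estimate
    }
    std::printf("estimate = %.5f (exact 1/3)\n", sum / n);
}
```

Choosing p to resemble f (importance sampling) leaves the expected value unchanged but shrinks the variance — the key to making rendering integrals tractable.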
30.3 Random Variables and Randomized Algorithms
30.3.1 Discrete Probability and Its Relationship to Programs
30.3.3 Properties of Expected Value, and Related Terms
30.3.5 Probability Density Functions
30.3.6 Application to the Sphere
30.3.8 Application to Scattering
30.4 Continuum Probability, Continued
30.5 Importance Sampling and Integration
30.7 Discussion and Further Reading
31 Computing Solutions to the Rendering Equation: Theoretical Approaches
The rendering equation can be approximately solved by many methods, including ray tracing (an approximation to the series solution), radiosity (an approximation arising from a finite-element approach), Metropolis light transport, and photon mapping, not to mention basic polygonal renderers using direct-lighting-plus-ambient approximations. Each method has strengths and weaknesses that can be analyzed by considering the nature of the materials in the scene, by examining different classes of light paths from luminaires to detectors, and by uncovering various kinds of approximation errors implicit in the methods.
31.2 Approximate Solutions of Equations
31.3 Method 1: Approximating the Equation
31.4 Method 2: Restricting the Domain
31.5 Method 3: Using Statistical Estimators
31.5.1 Summing a Series by Sampling and Estimation
31.8 The Rendering Equation, Revisited
31.9 What Do We Need to Compute?
31.10 The Discretization Approach: Radiosity
31.11 Separation of Transport Paths
31.12 Series Solution of the Rendering Equation
31.13 Alternative Formulations of Light Transport
31.14 Approximations of the Series Solution
31.15 Approximating Scattering: Spherical Harmonics
31.16 Introduction to Monte Carlo Approaches
31.18 Path Tracing and Markov Chains
31.18.1 The Markov Chain Approach
31.18.2 The Recursive Approach
31.18.3 Building a Path Tracer
31.18.4 Multiple Importance Sampling
31.18.5 Bidirectional Path Tracing
31.18.6 Metropolis Light Transport
31.19.1 Image-Space Photon Mapping
31.20 Discussion and Further Reading
32 Rendering in Practice
We describe the implementation of a path tracer, which exhibits many of the complexities associated with ray-tracing-like renderers that attempt to estimate radiance by estimating integrals associated with the rendering equation, and a photon mapper, which quickly converges to a biased but consistent and plausible result.
32.3 Surface Representations and Representing BSDFs Locally
32.3.1 Mirrors and Point Lights
32.4.1 Representation of Luminaires
32.9 Discussion and Further Reading
33 Shaders
On modern graphics cards, we can execute small (and not-so-small) programs that operate on model data to produce pictures. In the simplest form, these are vertex shaders and fragment shaders, the first of which can do processing based on the geometry of the scene (typically the vertex coordinates), and the second of which can process fragments, which correspond to pieces of polygons that will appear in a single pixel. To illustrate the basic use of shaders, we describe how to implement Phong shading, environment mapping, and a simple nonphotorealistic renderer.
33.2 The Graphics Pipeline in Several Forms
33.4 A Simple Graphics Program with Shaders
33.7 Two Versions of Toon Shading
33.9 Discussion and Further Reading
34 Expressive Rendering
Expressive rendering is the name we give to renderings that do not aim for photorealism, but rather aim to produce imagery that communicates with the viewer, conveying what the creator finds important, and suppressing what’s unimportant. We summarize the theoretical foundations of expressive rendering, particularly various kinds of abstraction, and discuss the relationship of the “message” of a rendering and its style. We illustrate with a few expressive rendering techniques.
34.1.1 Examples of Expressive Rendering
34.1.2 Organization of This Chapter
34.2 The Challenges of Expressive Rendering
34.4 Perception and Salient Features
34.5 Geometric Curve Extraction
34.7 Discussion and Further Reading
35 Motion
An animation is a sequence of rendered frames that gives the perception of smooth motion when displayed quickly. The algorithms to control the underlying 3D object motion generally interpolate between key poses using splines, or simulate the laws of physics by numerically integrating velocity and acceleration. Whereas rendering is primarily concerned with surfaces, animation algorithms require a model with additional properties like articulation and mass. Yet these models still simplify the real world, accepting limitations to achieve computational efficiency. The hardest problems in animation involve artificial intelligence for planning realistic character motion, which is beyond the scope of this chapter.
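A minimal sketch (ours) of the simplest numerical integrator, explicit Euler, applied to a ball thrown upward: over each small timestep, treat velocity and acceleration as constant.

```cpp
#include <cstdio>

// Piecewise-constant (explicit Euler) integration of Newtonian motion:
// advance position with the current velocity, then advance velocity
// with the current acceleration.
int main() {
    double y = 0.0, v = 20.0;  // height (m) and upward velocity (m/s)
    const double g = -9.8;     // gravitational acceleration (m/s^2)
    const double dt = 0.01;    // timestep (s)

    double t = 0.0;
    while (y >= 0.0) {
        y += v * dt;           // update position from velocity
        v += g * dt;           // update velocity from acceleration
        t += dt;
    }
    // Exact flight time is 2 * 20 / 9.8 ~= 4.08 s; Euler lands close,
    // with an error that shrinks as dt does.
    std::printf("landed at t = %.2f s\n", t);
}
```

Explicit Euler is the entry point; the chapter's discussion of stability explains why stiffer systems demand better ODE methods.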
35.2.1 A Walking Character (Key Poses)
35.2.2 Firing a Cannon (Simulation)
35.2.3 Navigating Corridors (Motion Planning)
35.3 Considerations for Rendering
35.3.4 Temporal Aliasing and Motion Blur
35.3.5 Exploiting Temporal Coherence
35.3.6 The Problem of the First Frame
35.3.7 The Burden of Temporal Coherence
35.4.2 Limiting Degrees of Freedom
35.6.2 Differential Equation Formulation
35.6.3 Piecewise-Constant Approximation
35.6.4 Models of Common Forces
35.6.6 Dynamics as a Differential Equation
35.6.7 Numerical Methods for ODEs
35.7 Remarks on Stability in Dynamics
36 Visibility Determination
Efficient determination of the subset of a scene that affects the final image is critical to the performance of a renderer. The first approximation of this process is conservative determination of surfaces visible to the eye. This problem has been addressed by algorithms with radically different space, quality, and time bounds. The preferred algorithms vary over time with the cost and performance of hardware architectures. Because analogous problems arise in collision detection, selection, global illumination, and document layout, even visibility algorithms that are currently out of favor for primary rays may be preferred in other applications.
36.1.1 The Visibility Function
36.1.4 Current Practice and Motivation
36.2.1 BSP Ray-Primitive Intersection
36.2.2 Parallel Evaluation of Ray Tests
36.3.1 Common Depth Buffer Encodings
36.4.1 The Painter’s Algorithm
36.4.2 The Depth-Sort Algorithm
36.5 Frustum Culling and Clipping
36.5.3 Clipping to the Whole Frustum
36.7 Hierarchical Occlusion Culling
36.8 Sector-based Conservative Visibility
36.9.1 Spatial Antialiasing (xy)
36.9.4 Coverage as a Material Property (α)
36.10 Discussion and Further Reading
37 Spatial Data Structures
Spatial data structures like bounding volume hierarchies provide intersection queries and set operations on geometry embedded in a metric space. Intersection queries are necessary for light transport, interaction, and dynamics simulation. These structures are classic data structures like hash tables, trees, and graphs extended with the constraints of 3D geometry.
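A minimal sketch (ours) of the primitive at the heart of bounding volume hierarchies: the "slab" test for ray vs. axis-aligned bounding box. A BVH traversal descends into a node's children only when this test passes.

```cpp
#include <algorithm>
#include <cstdio>

// An axis-aligned bounding box, the workhorse of bounding volume
// hierarchies: just the min and max corners.
struct AABB { double lo[3], hi[3]; };

// The "slab" test: for each axis, find the parameter interval the ray
// spends between the two parallel box planes; the box is hit iff the
// three intervals overlap.
bool hitsBox(const double orig[3], const double dir[3], const AABB& box) {
    double tmin = 0.0, tmax = 1e30;
    for (int axis = 0; axis < 3; ++axis) {
        double inv = 1.0 / dir[axis];  // IEEE infinities cover dir == 0
                                        // (degenerate edge cases aside)
        double t0 = (box.lo[axis] - orig[axis]) * inv;
        double t1 = (box.hi[axis] - orig[axis]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmin > tmax) return false;
    }
    return true;
}

int main() {
    AABB unit = {{-1, -1, -1}, {1, 1, 1}};
    double orig[3] = {0, 0, 5}, toward[3] = {0, 0, -1}, away[3] = {0, 0, 1};
    std::printf("toward: %d, away: %d\n",   // expect 1, 0
                hitsBox(orig, toward, unit), hitsBox(orig, away, unit));
}
```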
37.2.2 Extracting Keys and Bounds
37.3 Characterizing Data Structures
37.4 Overview of kd Structures
37.6.1 Binary Space Partition (BSP) Trees
37.6.2 Building BSP Trees: Octree, Quadtree, BSP Tree, kd Tree
37.6.3 Bounding Volume Hierarchy
37.7.3 Selecting Grid Resolution
37.8 Discussion and Further Reading
38 Modern Graphics Hardware
We describe the structure of modern graphics cards, their design, and some of the engineering tradeoffs that influence this design.
38.3 Architecture and Implementation
38.6 Texture, Memory, and Latency
38.8 Organizational Alternatives
38.8.3 Larrabee: A CPU/GPU Hybrid