Appendix A

A Class Library for Global Illumination

Global illumination is all about generating paths connecting a virtual camera with a light source. In this appendix, we propose a library of software classes that will facilitate generating such paths in a computer program, by hiding the details of geometry and materials representation and ray casting from a higher-level algorithm implementation.

The library offers the following building blocks:

  • Classes for representing path nodes, such as a point on a light source, a surface scattering point, the viewing position, etc. (Section A.1).
  • Classes for light source sampling. These classes generate path nodes that serve as the head of light paths (Section A.2).
  • Support classes, such as a class representing a virtual screen buffer, classes for tone mapping, etc. (Section A.3).

The relationship between the classes is depicted in Figure A.1. Some example code fragments, illustrating the use of these classes, are presented in Section A.4.

Figure A.1

Graphical overview of the classes contained in the library described here.

The interface we describe here does not include a representation of geometry or materials itself. Such a representation is, of course, required in an actual implementation. Our implementation, on top of a VRML-based scene graph management library, is available from this book’s website (http://www.advancedglobalillumination.com). In our experience, the class library is easy to port to other global illumination platforms, and algorithms implemented on top of the interface can then be ported almost without modification. Our experiments have indicated that the additional computation cost caused by the interface is relatively small: on the order of 10% to 20% of the rendering time at most, even if the underlying scene graph management, shader implementation, and ray-tracing kernel are highly optimized. The programming language we used in our implementation is C++. The same interface can, of course, also be realized in a different object-oriented programming language.

A.1 Path Node Classes

A.1.1 Overview

All the algorithms described in this book require light paths or eye paths to be generated stochastically. These paths have an associated value and an associated probability density (PDF). In order to form an image, averages of the ratios of path values over path PDFs are computed.
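
In other words, a pixel value I is estimated with the standard Monte Carlo estimator

  ⟨I⟩ = (1/N) Σ_{i=1..N} f(x̄_i) / p(x̄_i),

where f(x̄_i) is the value of the i-th sampled path x̄_i and p(x̄_i) is its PDF.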

The value associated with a path is always the product of values associated with the nodes in the path and transition factors such as vis(x, y) cos θ_y / r²_xy between subsequent nodes x and y. The value associated with a path node depends on the type of node. For instance, for a surface scattering event, it is the BSDF times the outgoing cosine; for a light source node, it is the self-emitted radiance, etc.

The PDF indicates the probability density with which a particular path is generated. It, too, is the product of PDFs associated with each node in the path and transition factors. The PDF associated with a surface scattering node, for instance, is the probability density with which a scattered direction is sampled; for a light source node, it is the probability of sampling a light emission direction on a light source.

We call every event at which a path is generated, or its trajectory changed, a path node. The library contains a representation for a variety of path nodes corresponding to:

  • Emission of an eye ray through a virtual screen pixel, that is, emission of potential: see EyeNode class (Section A.1.3).
  • Emission of light, at a surface or from the background (for instance, a model for sky illumination or a high dynamic range environment map): see EmissionNode class (Section A.1.4).
  • Scattering of light or potential at a surface, or light/potential disappearing into the background: see ScatteringNode class (Section A.1.5).

A full path corresponds to a list of such path nodes.

A.1.2 Common Interface: The PathNode Base Class

All path node classes inherit from a single PathNode base class. The PathNode class encapsulates the common properties of all path nodes and provides a uniform interface, so that complete algorithms can be implemented without having to know what types of path nodes may be generated. The main members of the PathNode class are:

  • The cumulative probability density by which a path up to a given node has been generated.
  • The cumulative value associated with the path up to a given node.
  • An eval() member function for querying the value (BSDF, EDF, etc.), path survival PDF, the PDF of sampling a given outgoing direction, and the outgoing cosine factor (if applicable) associated with the path node.
  • A sample() function that calculates from two random numbers whether or not a path at a node shall be expanded and, if so, in what direction.
  • A trace() function that returns a new path node resulting from tracing a ray into a given direction. The resulting node is always a scattering node (see Section A.1.5). Its precise type depends on the event that occurs next: If the ray hits a surface, a SurfaceNode is returned. If the ray disappears to the background, a BackgroundNode is returned. The trace() function also computes geometric factors associated with the transition to the new path node and properly initializes the cumulative PDF and value of the resulting path node.

eval(), sample(), and trace() are virtual member functions, implemented in the child classes of PathNode. We chose to provide a single eval() function for evaluating everything related to a path node, in order to minimize the number of virtual function calls and to make it easier to share the calculation of certain partial results between the value and the PDF. The latter can result in significant time savings; PDFs are quite often very similar to values, for instance. Results are filled in through pointers passed as parameters to the eval() function. If a null pointer is passed, the corresponding quantity (value, survival or direction sampling PDF, outgoing cosine) is not computed, unless it is needed for other results. In the same spirit, the sample() and trace() functions can also return values and PDFs that are computed on the fly if nonnull pointer arguments are passed for filling in such side results. The trace() function optionally accepts a set of pointers to path node objects of each type that can be returned, in order to avoid dynamic storage allocation and to allow easy type checking afterwards. This will be illustrated in Section A.4.
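
To make this concrete, here is a minimal sketch of what the PathNode base class declaration could look like. The member names follow the description above, but the exact signatures (the random number arguments, the optional side-result pointers of sample() and trace()) are assumptions for illustration only:

// Sketch of the PathNode base class; signatures are assumptions.
class PathNode {
public:
  float pdf;             // cumulative PDF of the path up to this node
  class Spectrum value;  // cumulative value of the path up to this node

  // Fills in only those quantities for which nonnull pointers are
  // passed: value (BSDF, EDF, ...), survival PDF, PDF of sampling
  // the given outgoing direction, and outgoing cosine.
  virtual void eval(const class Vec3& outdir,
                    class Spectrum* val, float* survpdf,
                    float* dirpdf, float* outcos) const = 0;

  // Decides from two random numbers whether the path is expanded
  // at this node and, if so, into what direction.
  virtual bool sample(float xi1, float xi2, class Vec3* dir) = 0;

  // Traces a ray into the given direction and returns the resulting
  // scattering node (a SurfaceNode or a BackgroundNode).
  virtual class ScatteringNode* trace(const class Vec3& dir) = 0;
};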

Besides the above members, the PathNode base class also maintains and offers:

  • The depth of the path node in its path: 0 for the head of a path, depth of the parent node plus 1 for nonhead path nodes.
  • The light emission and scattering modes to take into account for evaluation and sampling (diffuse/glossy/specular emission/reflection/refraction).
  • A pointer to the parent node in the path.
  • Various flags: whether the path node belongs to a light path or eye path (required for making certain corrections due to nonsymmetric light scattering [203]), whether the path node is at the end of a sub-path, whether it has a finite position in space, or whether it is located “at infinity” (for instance: background emission nodes).
  • Member functions for accessing the position of a path node in space, or the geometry at that location, or for obtaining the head of the path, or direction and distance (taken to be 1 for background nodes) to another path node, or for computing visibility with regard to another node.
  • Static member variables indicating the minimum and maximum path depth for generating paths. These values affect survival probabilities computed in the sample() and eval() functions.
  • Some more member functions for convenience: scatter() computes the radiance or potential accumulated along a path and scattered into a given direction. The expand() member function combines sample() and trace() in a single function, as sketched below.
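
As an illustration, expand() could be implemented in terms of sample() and trace() as follows; rand01(), a uniform random number generator, is an assumption:

class PathNode* PathNode::expand(void)
{
  class Vec3 dir;
  if (!sample(rand01(), rand01(), &dir))
    return 0;        // path absorbed or terminated: no new node
  return trace(dir); // a SurfaceNode or BackgroundNode, or null
}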

A.1.3 Pixel Filtering and Sampling: The EyeNode Class

The EyeNode class represents the head of eye paths. The position of an EyeNode object is the position of the pinhole camera used to view a scene. EyeNode objects are associated with a virtual screen pixel. They encapsulate pixel filtering and sampling. The value returned by EyeNode::eval() is the pixel measurement function of a given direction (see Section 5.7.1). EyeNode::sample() will select a direction through the associated virtual screen pixel for shooting an eye ray. Currently, a simple box pixel filter is implemented.
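
As an illustration, with a box filter, EyeNode::sample() could be as simple as jittering a point uniformly over the associated pixel and mapping it to a primary ray direction. The pixel coordinate members pix_i and pix_j are assumptions; getDirection() is the ScreenBuffer member function described in Section A.3.1:

bool EyeNode::sample(float xi1, float xi2, class Vec3* dir)
{
  // box filter: pick a position uniformly inside pixel (pix_i, pix_j)
  float x = pix_i + xi1;
  float y = pix_j + xi2;
  *dir = scrn->getDirection(x, y);
  return true;  // eye paths always survive at the eye node
}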

A.1.4 Light Emission: The EmissionNode Classes

An EmissionNode object represents the head of a light path. It corresponds to a point on a surface light source (SurfaceEmissionNode subclass) or a direction towards the background for background illumination such as sky illumination or a high dynamic range environment map (BackgroundEmissionNode subclass). The value associated with an emission node is the self-emitted radiance into a given direction. The sample() member function samples a direction according to the directional emission distribution at a surface emission location. For background emission nodes, where the emission direction is encoded in the node, sample() selects a point on the projection of the scene bounding box perpendicular to the emission direction. In both cases, sample() results in a point and a direction, enough for constructing a ray along which self-emitted radiance is shot.

Emission nodes can be generated by means of the EmissionSampler classes described in Section A.2.

A.1.5 Light and Potential Scattering: The ScatteringNode Classes

The trace() function of any path node usually results in a new ScatteringNode object representing surface scattering (SurfaceNode) or light or potential that disappears into the background (BackgroundNode).

Surface Scattering: SurfaceNode Class

The position of a SurfaceNode object is the position on the surface of an object in the scene at which a light path or eye path can be reflected, refracted, or absorbed. The value associated with such a node is the BSDF for a given direction. By default, the survival probability is computed based on the fraction of incident illumination or potential that will be scattered rather than absorbed. It depends on the direction of incidence and is, of course, affected by the currently required minimum and maximum path length. The “outgoing cosine” computed by SurfaceNode::eval() is the absolute value of the cosine between a given outgoing direction and the shading normal at the scattering location. The sample() member function samples an outgoing direction ideally according to the BSDF times the outgoing cosine. SurfaceNode objects know whether they belong to a light path or eye path, and appropriate correction factors for nonsymmetric scattering due to bump mapping or normal interpolation are applied on the BSDF [203]. There is also a version of SurfaceNode::eval() that allows us to specify incident directions other than the one for which the path node was constructed.
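
The following sketch shows how the default survival probability could be computed, assuming the underlying BSDF offers an albedo() query returning the fraction of light incident from a given direction that is scattered rather than absorbed. The depth constraints mirror the minimum/maximum path length handling of Section A.1.2 (min_path_depth and max_path_depth are placeholders for the actual static members):

float SurfaceNode::survival_probability(const class Vec3& indir) const
{
  if (depth() < PathNode::min_path_depth)  return 1.; // force survival
  if (depth() >= PathNode::max_path_depth) return 0.; // force termination
  return unconstrained_survival_probability(indir);
}

float SurfaceNode::unconstrained_survival_probability(const class Vec3& indir) const
{
  float a = bsdf->albedo(indir); // scattered fraction (assumed query)
  return a < 1. ? a : 1.;
}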

Occasionally, a path will hit a surface light source. In order to evaluate self-emitted radiance at a scattering location, and to compute the probability of obtaining the surface location by means of surface emission sampling (with a SurfaceEmissionSampler object, see Section A.2), appropriate source_radiance() and source_pdf() member functions are provided. Some algorithms, like bidirectional path tracing, require more complex operations if a path hits a light source. A conversion from the SurfaceNode class to the SurfaceEmissionNode class is provided in order to meet such requirements. An on_light_source() member function returns whether or not a SurfaceNode lies on a light source.

Paths Disappearing into the Background: BackgroundNode Class

If a path doesn’t hit a surface, it’s said to disappear into the background. A special BackgroundNode marks the end of such paths. The BackgroundNode class inherits from the ScatteringNode base class, but of course, no scattering happens: A path that disappears into the background is always terminated. The value and PDFs returned by BackgroundNode::eval() are always zero, and the BackgroundNode::sample() member function will always result in an error. The trace() function returns a null result.

If background illumination has been modeled in a scene to be rendered, however, the BackgroundNode::source_radiance() and BackgroundNode::source_pdf() member functions will compute the self-emitted radiance received from the background along the path direction, as well as the probability of sampling that direction using a BackgroundEmissionSampler object. A conversion from the BackgroundNode class to the BackgroundEmissionNode class is likewise provided, so that all queries for self-emitted illumination can also be performed at a background “scattering” node.

A.2 Light Source Sampling Classes

A scene can contain a number of surfaces that emit light spontaneously, as well as a model for background illumination such as sky light or a high dynamic range environment map. A second set of classes provided by the library will select either a position on a light source surface (SurfaceEmissionSampler class) or a direction for background illumination (BackgroundEmissionSampler class). Unlike path node objects, which are created and destroyed very frequently during the global illumination computations, there is normally only a single surface and background emission sampler active while rendering a frame.

A.2.1 Surface Emission Sampling: The SurfaceEmissionSampler and WeightedSurfaceEmissionSampler Classes

A SurfaceEmissionSampler class object maintains a list (or better, an array) of light source surfaces in the scene. Our current implementation assumes scenes modeled out of triangles, so our SurfaceEmissionSamplers will contain a list of pointers to light-emitting triangles. It is straightforward to extend the interface to handle curved light sources, too. Besides member functions for building up such a list, the main member functions are:

  • A sample() function that will select a triangle from the list and return a point on the selected triangle as a SurfaceEmissionNode. Triangles are selected with a probability proportional to their self-emitted power. Points are selected uniformly on a triangle.
  • A pdf() member function that returns the probability density of sampling a given point on a given triangle using sample().

The pdf() member function assumes an index mechanism for quickly locating a given triangle in the list of light source triangles. Our SurfaceEmissionNodes and SurfaceNodes contain a pointer to the surface triangle on which they are located. This allows us to find out easily whether a SurfaceNode is located on a light source, or to calculate all relevant light source quantities.
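
The following self-contained sketch illustrates the two sampling steps: selecting a triangle with probability proportional to its emitted power by walking a cumulative distribution, and picking a point uniformly on the selected triangle. The Triangle structure and its power field are stand-ins for the library’s actual light source representation:

#include <cmath>
#include <vector>

struct Vec3d { double x, y, z; };
struct Triangle { Vec3d a, b, c; double power; };

// Select index i with probability power[i]/total by walking the CDF.
int select_triangle(const std::vector<Triangle>& tris, double xi)
{
  double total = 0.;
  for (const Triangle& t : tris) total += t.power;
  double target = xi * total, cum = 0.;
  for (size_t i = 0; i < tris.size(); i++) {
    cum += tris[i].power;
    if (target < cum) return (int)i;
  }
  return (int)tris.size() - 1; // guard against rounding errors
}

// Map two uniform random numbers to barycentric coordinates that are
// uniform over the triangle (the classic square root warping).
Vec3d sample_point(const Triangle& t, double xi1, double xi2)
{
  double s = std::sqrt(xi1);
  double u = 1. - s, v = xi2 * s, w = 1. - u - v;
  return Vec3d{ u*t.a.x + v*t.b.x + w*t.c.x,
                u*t.a.y + v*t.b.y + w*t.c.y,
                u*t.a.z + v*t.b.z + w*t.c.z };
}

The pdf() value for the resulting point on triangle i is then the product of the discrete selection probability and the uniform area density: (power_i / total power) · (1 / area_i).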

Weighted Surface Emission Sampling

Sometimes, surface emission sampling according to emitted power is not optimal, and other probabilities for selecting light source triangles are required. One example of such a case is view-importance–driven light source sampling (Section 5.4.5), where a light source needs to be selected according to its estimated impact on a particular view. A powerful but distant or occluded light source, for instance, receives a lower probability of being selected than a less powerful but nearby light source. The WeightedSurfaceEmissionSampler subclass of SurfaceEmissionSampler allows us to enable or disable light source triangles from a list and to attach weights to light source triangles in a very general way. For convenience, a member function is provided that will assign weights according to light source distance and orientation with regard to a specified point and normal. Our implementation also contains an adapted version of a light-path tracer that estimates the light flux each light source contributes to the current view and eventually assigns light source weights proportional to these fluxes.
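
As an illustration, weights taking distance and orientation with regard to a specified point x and normal n into account could be assigned as follows. This is a crude importance estimate that ignores occlusion; centroid(), power, and the weight array are assumptions:

void WeightedSurfaceEmissionSampler::set_weights(
    const class Vec3& x, const class Vec3& n)
{
  for (int i=0; i<nrtriangles; i++) {
    class Vec3 d = triangle[i]->centroid() - x; // towards the source
    float r2 = d.norm2();                       // squared distance
    float cosx = dot(n, d) / sqrt(r2);          // cosine at x
    // emitted power times cosine over squared distance; zero for
    // sources below the horizon at x
    weight[i] = cosx > 0. ? triangle[i]->power * cosx / r2 : 0.;
  }
}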

A.2.2 Background Emission Sampling: The BackgroundEmissionSampler Class

The BackgroundEmissionSampler class works in a very similar way to the SurfaceEmissionSampler class, except that usually the number of background light sources is small, and it returns a sampled direction to the background in the form of a BackgroundEmissionNode. Background directions are selected with a probability that reflects the intensity of the self-emitted radiance received from that direction. It is much harder to take surface orientation into account here, so there is no class for weighted background emission sampling.

A.2.3 The EmissionSampler Wrapper Class

The library provides an EmissionSampler wrapper class that contains a pointer to a WeightedSurfaceEmissionSampler and to a BackgroundEmissionSampler for the scene. By default, surface emission sampling and background emission sampling receive a weight proportional to the total emitted power from surfaces and the background, respectively. In order to calculate these weights, it is necessary to know in what length units a scene has been modeled. The default weights can, of course, be modified in a program. Our adapted light tracer, described above, does so after measuring the light flux contributed to the current view by surfaces and background.

The public implementation provides only triangle light sources and background emission. Other light sources, such as spherical or disc light sources, can easily be added in the form of additional emission sampler classes. The EmissionSampler wrapper class would then contain a reference to all light source samplers, with proper weights, so that it can hide the variety of light sources in a scene from the implementation of global illumination algorithms by providing a single sample() function for any kind of light emission.
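
A sketch of what the wrapper’s sample() function could look like, choosing between the two samplers with probabilities proportional to their weights; the member names and rand01() are assumptions:

class EmissionNode* EmissionSampler::sample(
    class SurfaceEmissionNode* sl, class BackgroundEmissionNode* bl)
{
  float p = surface_weight / (surface_weight + background_weight);
  if (rand01() < p)
    return surfacesampler->sample(sl); // node's pdf must include factor p
  else
    return bgsampler->sample(bl);      // node's pdf must include (1-p)
}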

A.3 Support Classes

The path node and sampler class interfaces are pretty much self-contained, but they need to be embedded in a suitable working environment, of course. For convenience, the library also contains a number of additional classes providing such an environment. Unlike the path node class interface, it is likely that some tuning will be needed in order to integrate these support classes into your global illumination application.

A.3.1 A Pinhole Camera Virtual Screen Abstraction: The ScreenBuffer Class

EyeNode class objects correspond to pixels on a virtual screen. Their implementation requires an abstraction of a virtual screen buffer. The library provides a ScreenBuffer class for this purpose. The ScreenBuffer class represents the virtual screen of a pinhole camera. It offers member functions getDirection() and getPixelCoord() for mapping pixel coordinates to the corresponding primary ray direction and vice versa. A member function setView() initializes the current view point, focus point, upward direction, and field-of-view angle, in the same way as the gluLookAt() function in OpenGL. The getPixelCoord() function returns whether or not a primary ray direction points towards the screen. It is used in light tracing and bidirectional path tracing in order to splat path contributions to the screen, as shown in the examples (Section A.4.1).

The ScreenBuffer class also maintains two arrays of pixel color values: one usual set of low dynamic range RGB triplets plus transparency that can be displayed efficiently using, for instance, the glDrawPixels() OpenGL function; and one set that contains high dynamic range color values in 32-bit packed RGBE format [220]. The ScreenBuffer class offers member functions clear(), clearRGBA(), clearHDR(), setPixel(), getRGBAPixel(), getHDRPixel(), addPixel(), etc., for clearing, querying, and modifying low and high dynamic range pixel color values.

A.3.2 Converting High to Low Dynamic Range Color Values: The ToneMapper Classes

A global illumination algorithm computes and stores high dynamic range pixel color values in the ScreenBuffer. A ToneMapper object maps the high dynamic range pixels to RGB color triplets for display, as explained in Section 8.2. Different tone mapping algorithms are implemented in subclasses of a base ToneMapper class. Such classes maintain their own set of required parameters, such as the world adaptation luminance in the current view. The ScreenBuffer class provides a member function adaptation_luminance() for computing the world adaptation luminance as the exponentiated mean logarithmic luminance of the virtual screen high dynamic range pixel color values. The main member function provided by the ToneMapper classes is a map() function that converts the high dynamic range color values in a given ScreenBuffer object into low dynamic range color values for display.
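
A sketch of adaptation_luminance(), computing the exponentiated mean logarithmic luminance of the high dynamic range pixels; the small offset guards against pixels with zero luminance, and luminance() and the single-index pixel accessor are assumptions:

float ScreenBuffer::adaptation_luminance(void) const
{
  double sumlog = 0.;
  int n = width * height;
  for (int i=0; i<n; i++)
    sumlog += log(1e-4 + luminance(getHDRPixel(i)));
  return (float)exp(sumlog / (double)n);
}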

A.3.3 Integration into an Application: The Browser and Tracer Classes

The library described here comes with an application in which several global illumination algorithms have been implemented. We describe here two additional classes that integrate the path node and sampler classes into this application.

The Browser Classes

We implemented a Browser base class to group and maintain the whole software environment in which the PathNode and EmissionSampler classes operate:

  • The scene graph. In our implementation, the scene graph is a VRML97 scene graph with numerous extension nodes for representing physically based appearance and high dynamic range backgrounds as well as color calibration parameters of the computer monitor on which a model has been designed.
  • The interface to a ray-tracing engine needed for finding ray-object intersections and for performing visibility queries.
  • One instance of an EmissionSampler, containing a WeightedSurfaceEmissionSampler and a BackgroundEmissionSampler, as well as a reference unweighted SurfaceEmissionSampler.
  • A ScreenBuffer and a ToneMapper object.

The Browser base class does not support a graphical user interface, and neither does it perform any global illumination computations itself. It needs to be augmented with such capabilities by means of inheritance. The Browser base class provides a virtual trace() member function, which needs to be implemented in a child class in order to:

  • Initialize the ScreenBuffer for the current view.
  • Perform the real global illumination computations for the view.
  • Call the ToneMapper in order to map computed high dynamic range pixel colors into low dynamic range RGB color triplets for display.
  • Display the results on a computer screen, or save them into a file.

The Tracer Classes

Rather than implementing each global illumination algorithm as a separate Browser subclass, we introduced yet another class, called Tracer, providing a common software interface for global illumination algorithms. Algorithms such as path tracing and light tracing (Chapter 5), bidirectional path tracing (Section 7.3), a ray-traced version of the instant radiosity algorithm (Section 7.7), and photon mapping (Section 7.6) are implemented in the PathTracer, LightTracer, BiDirTracer, InstantRadiosity, and PhotonMapper child classes of the Tracer base class. The main functions implemented by these classes are:

  • An init() function performs initializations such as storage allocation of large arrays for each frame to be rendered.
  • A trace() function computes an image for the current view.
  • A tonemap() function properly rescales ScreenBuffer high dynamic range pixels and uses the current Browser’s ToneMapper object in order to convert to displayable RGB color triplets.

Our Browser subclass object creates an appropriate Tracer object according to the desires of a user and calls the above listed Tracer functions in its Browser::trace() handler.
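
Put together, the trace() handler of a Browser subclass could look as follows; the member names and display() are assumptions:

void MyBrowser::trace(void)
{
  screen->setView(eye, focus, up, fov); // init ScreenBuffer for the view
  tracer->init();     // per-frame initializations
  tracer->trace();    // the real global illumination computations
  tracer->tonemap();  // high to low dynamic range conversion
  display();          // show on screen, or save to a file
}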

In addition to the above functions, our Tracer classes also provide member functions for distributed computations, for instance, for splitting an image into several subimages to be computed on different network clients and for merging the resulting pixel values computed by each client afterwards.

A.4 Example Code Fragments

In this section, we provide some example code fragments, illustrating how global illumination algorithms can be implemented on top of the path node and sampler classes described previously.

A.4.1 A Light Tracer

We first present the core part of our LightTracer class, implementing light particle tracing (see Section 5.7):

// scrn is pointer to the current ScreenBuffer object
// class Vec3 and class Spectrum represent 3D vectors and spectra
// lightsampler is pointer to current EmissionSampler object
int nrparticles;  // nr of particles to trace

// splats particle on the screen
inline void LightTracer::splat(class PathNode *n)
{
  float dist;     // distance between eye and n
  const Vec3 eyedir = scrn->eye.dirto(n->pos(), &dist); // direction to n
  if (n->at_infinity()) dist = 1.;  // don’t divide by square distance

  float i, j;     // compute pixel coordinates (i,j)
  if (scrn->getPixelCoord(eyedir, &i, &j)) {
    class EyeNode e(i, j);  // eye node corresponding to pixel
    if (visible(&e, n)) {   // n is not occluded from the eye
      float ncos, ecos;     // cosine factors at the eye and at n
      scrn->addPixel(i, j, e.scatter(eyedir, &ecos)
                           * n->scatter(-eyedir, &ncos)
                           * (ncos * ecos / (dist*dist * (float)nrparticles)));
    }
  }
}

inline void LightTracer::traceparticle(class PathNode *l)
{
  splat(l);  // splat particle on screen
  class PathNode *n = l->expand();  // expand path
  if (n) traceparticle(n);   // recurse
  delete n;
}

void LightTracer::trace(void)
{
  for (int i=0; i<nrparticles; i++) {
    class EmissionNode *l = lightsampler->sample(); // sample lights
    if (l) traceparticle(l);  // trace light path
    delete l;
  }
}

In order to implement photon mapping, the splat() function would be modified to store the SurfaceNode hit point n->pos(), incident direction n->indir, and flux n->value/n->pdf in a photon map data structure. A ready-to-use implementation of a photon map data structure can be found in Jensen’s book [83].
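
A sketch of such a modified splat() function, assuming a photonmap object with a store() member function in the spirit of Jensen’s implementation [83]:

inline void PhotonMapper::splat(class PathNode *n)
{
  if (!n->at_infinity())  // only surface hits carry photons
    photonmap->store(n->pos(), n->indir, n->value / n->pdf);
}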

A.4.2 A Path Tracer

The implementation of a path tracer below is only slightly more complicated, in order to avoid dynamic storage allocation and to allow easy checking of the path node types returned by the PathNode::expand() and EmissionSampler::sample() functions.

// Again, scrn and lightsampler are the current ScreenBuffer
// and EmissionSampler.

// Array of SurfaceNodes in order to avoid the need for
// dynamic storage allocation in PathNode::expand().
// Storage is allocated in setup(), and freed in cleanup().
class SurfaceNode* PathTracer::sbuf =0;

// nr of light samples (shadow rays) at each path surface hit
int PathTracer::nrlightsamples = 1;

// Compute score associated with path landing on a light source.
inline const Spectrum PathTracer::source(class ScatteringNode* s)
{
  class Spectrum score(0.);
  if (s->depth() <= 1 || nrlightsamples == 0) {
    // Source contribution computed exclusively by means of
    // scattering.
    score = s->source_radiance() * s->value / s->pdf;
  } else {
    // Source contribution computed exclusively by means of
    // light source sampling.
  }
  return score;
}

// Light source sampling for computing direct illumination at
// the SurfaceNode s.
inline const Spectrum PathTracer::tracelight(class SurfaceNode* s)
{
  // Avoid dynamic storage allocation
  static class SurfaceEmissionNode sl;
  static class BackgroundEmissionNode bl;
  class EmissionNode *l = lightsampler->sample(&sl, &bl);
  if (l) {
    // cosine/distance at the light and at the surface
    float lcos, scos, dist;
    // dir/dist surface to light
    const Vec3 dir = s->dirto(l, &dist);
    // compute cosine at the light
    l->eval(-dir, 0, 0, 0, &lcos);
    // surface behind light or occluded
    if (lcos <= 0 || !visible(s, l))
      return Spectrum(0.);
    else
      return s->scatter(dir, &scos) * l->scatter(-dir)
             * (scos * lcos / (dist * dist));
  }
  return Spectrum(0.);
}

// Light source sampling at surface scattering node s.
inline const Spectrum PathTracer::tracelights(class SurfaceNode* s)
{
  class Spectrum score(0.);
  if (nrlightsamples > 0) {
    for (int i=0; i<nrlightsamples; i++) { // shoot shadow rays
      score += tracelight(s);
    }
    score /= (float)nrlightsamples;
  }
  return score;
}

// Traces a path through the pixel represented by the EyeNode e
inline const Spectrum PathTracer::tracepixel(class EyeNode* e)
{
  static class BackgroundNode b;  // avoid dynamic storage allocation
  class SurfaceNode *s = sbuf;
  // sample + shoot eye ray
  class ScatteringNode *n = e->expand(s, &b);
  class Spectrum score(0.);
  while (n) {
    score += source(n);        // self-emitted illumination
    if (n == s)                // direct illumination: only surface nodes
      score += tracelights(s);
    n = n->expand(++s, &b);    // indirect illumination: expand path
  }
  return score;
}
void PathTracer::setup(void)
{
  sbuf = new SurfaceNode [PathNode::max_eye_path_depth];
}
void PathTracer::cleanup(void)
{
  delete [] sbuf;
}

// computes image for current view
void PathTracer::trace(void)
{
  setup();
  for (int j=0; j<scrn->height; j++) {
    for (int i=0; i<scrn->width; i++) {
      class EyeNode e(i, j);
      scrn->addPixel(i, j, tracepixel(&e));
    }
  }
  cleanup();
}

A.4.3 Multiple Importance Light Source Sampling

When light reflection at a surface hit by a path is highly specular, it is usually much better to compute direct illumination by means of a scattered ray rather than by light source sampling. We show here the modifications to the PathTracer implementation above in order to calculate direct illumination at path nodes by means of multiple importance sampling [201]. These modifications illustrate the use of the PathNode::eval() functions in cases where the higher-level PathNode::scatter() functions fall short. Some example results are shown in Figure A.2 on page 332.
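
For reference: with nrlightsamples shadow rays per scattering node, the weight given in source() below to a source contribution picked up by a scattered ray hitting a point x on a light source is the balance heuristic weight

  w_scatt(x) = p_scatt(x) / (p_scatt(x) + N_L · p_light(x)),

where p_scatt(x) is the probability density of hitting x with a scattered ray (s->pdf / s->parent()->pdf in the code), p_light(x) is the density of selecting x by light source sampling (s->source_pdf()), and N_L = nrlightsamples. The tracelight() function computes the complementary weight for contributions obtained by light source sampling.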

Figure A.2

Multiple importance light source sampling results obtained with the implementation shown in this section: The spheres on top are diffuse. They become more and more mirror-like towards the bottom. The left column of pictures was generated using BSDF sampling only. BSDF sampling works well for specular-like surfaces. The middle column shows results obtained with light sampling only. Light sampling is at its best for diffuse surfaces. The right column shows that combining BSDF sampling and light sampling using multiple importance sampling [201] yields better results overall. (See Plate XVII.)

// flag indicating whether or not to use bidirectional weighting
// for source contributions.
bool PathTracer::bidir_weighting = true;

// Compute score associated with path landing on a light source.
inline const Spectrum PathTracer::source(class ScatteringNode* s)
{
  class Spectrum score(0.);
  if (s->depth() <= 1 || nrlightsamples == 0) {
    // Source contributions computed exclusively by means of
    // scattering.
    score = s->source_radiance() * s->value / s->pdf;
  } else if (bidir_weighting) {
    // Source contributions computed by means of both scattering
    // and light source sampling.
    // Attenuate source radiance taking into account the probability
    // that s would have been obtained by light source sampling
    // rather than scattering.
    float w_scattering = s->pdf / s->parent()->pdf;
    float w_lsampling = s->source_pdf() * (float)nrlightsamples;
    float w = w_scattering / (w_scattering + w_lsampling);
    score = s->source_radiance() * s->value * (w / s->pdf);
  } else {
    // Source contributions computed exclusively by means of
    // light source sampling.
  }
  return score;
}

// Light source sampling for computing direct illumination at
// the SurfaceNode s.
inline const Spectrum PathTracer::tracelight(class SurfaceNode* s)
{
  // Avoid dynamic storage allocation
  static class SurfaceEmissionNode sl;
  static class BackgroundEmissionNode bl;
  class EmissionNode *l = lightsampler->sample(&sl, &bl);
  if (l) {
    // cosine/distance at the light and at the surface
    float lcos, scos, dist;
    const Vec3 dir = s->dirto(l, &dist);
    // compute cosine at the light
    l->eval(-dir, 0, 0, 0, &lcos);
    // surface behind light or occluded
    if (lcos <= 0 || !visible(s, l))
      return Spectrum(0.);

    if (!bidir_weighting) {
      // source() doesn’t pick up source radiance at hit surfaces
      return s->scatter(dir, &scos) * l->scatter(-dir)
             * (scos * lcos / (dist * dist));
    } else {
      // Attenuate direct illumination taking into account the
      // probability that the light source could have been hit
      // by a scattered ray.
      float survpdf, scatpdf;   // survival and scattering pdf
      class Spectrum fr, Le;    // BRDF at s and EDF at l
      s->eval(dir, &fr, &survpdf, &scatpdf, &scos);
      l->eval(-dir, &Le, 0, 0, 0);
      float g = lcos / (dist*dist);   // transition factor
      float w_scattering = survpdf * scatpdf * g; // scatt. weight
      float w_lsampling = l->pdf * (float)nrlightsamples;
      float w = w_lsampling / (w_lsampling + w_scattering);
      float G = scos * g;
      return (s->value * fr * Le) * (G * w / (s->pdf * l->pdf));
    }
  }
  return Spectrum(0.);
}
// The tracelights(), tracepixel() and trace() functions
// are the same as in the previous section.

A.4.4 A Bidirectional Path Tracer

Here is our code for a bidirectional path tracer:

// The purpose of the following arrays is to prevent dynamic
// storage allocation in PathNode::expand() and to allow
// efficient PathNode child class checking by comparing pointers.
class EyeNode eyenode;   // head of eye path
class SurfaceEmissionNode senode;  // surface emission node
class BackgroundEmissionNode benode;// background emission node
// head of light path: pointer to senode or benode:
class EmissionNode *lightnode;
// surface scattering nodes
class SurfaceNode *eyesurfnodes, *lightsurfnodes;
class BackgroundNode eyebkgnode, lightbkgnode; // background nodes
// pointers to surface or background scattering nodes:
class ScatteringNode **eyescatnodes, **lightscatnodes;

int eyepathlen, lightpathlen;   // eye/light path length
class PathNode **eyepath, **lightpath; // pointers to path nodes
float *erdpdf, *lrdpdf; // reverse dir. selection probabilities
float *erspdf, *lrspdf; // survival prob. in reverse path direction
float *erhpdf, *lrhpdf; // hit densities in reverse path direction
float nrparticles;  // nr of light particles traced

// for avoiding dynamic storage allocation when converting
// scattering nodes to emission nodes.
class BackgroundEmissionNode eeb;
class SurfaceEmissionNode ees;

// minimum and maximum light/eye/combined path length
static int min_light_path_length=2,
 max_light_path_length=7,
 min_eye_path_length=2,
 max_eye_path_length=7,
 max_combined_path_length=7;

// trace an eye path by expanding the eye node e.
// . a pointer to the eye node goes into eyepath[0]
// . the surface scattering nodes come into eyesurfnodes[1] etc... and
// a pointer to them in eyepath[1] and eyescatnode[1], etc...
// . the final background node goes into eyebkgnode and a pointer to
// it in eyepath[.] and eyescatnode[.] as well.
// Returns length of the eye path (nr of segments = nr of nodes - 1)
int BiDirTracer::trace_eye_path(class EyeNode* e)
{
  eyepath[0] = e;    // store pointer to head of path

  int i=1;
  class ScatteringNode *n = e->expand(&eyesurfnodes[i], &eyebkgnode);
  while (n) {
    eyescatnodes[i] = n;    // store ScatteringNode pointer
    eyepath[i] = n;         // store PathNode pointer
    i++;                    // expand the path
    n = n->expand(&eyesurfnodes[i], &eyebkgnode);
  }

  return i-1;      // path length (nr of segments)
}

// Same as trace_eye_path, but for light path starting at the
// emission node l. Results go into lightpath[.], lightscatnodes[.],
// lightsurfnodes[.], and lightbkgnode.
// Returns length of light path.
int BiDirTracer::trace_light_path(class EmissionNode* l)
{
  lightpath[0] = l;

  int i=1;
  class ScatteringNode *n = l->expand(&lightsurfnodes[i], &lightbkgnode);
  while (n) {
    lightscatnodes[i] = n;
    lightpath[i] = n;
    i++;
    n = n->expand(&lightsurfnodes[i], &lightbkgnode);
  }

  return i-1;
}

// Computes the probabilities of sampling the eye path in reverse direction,
// that is: with incident and outgoing direction at the nodes exchanged.
// Result goes into:
// . erdpdf[i]: _D_irection sampling pdf for reverse directions at node i
// . erspdf[i]: unconstrained _S_urvival probability at node i (that is:
// not taking into account minimum and maximum required path length)
// . erhpdf[i]: cos / distance squared from node i to node i-1 (_H_it pdf)
// The leading ‘e’ in the names of the arrays stands for _E_ye path. The
// ‘r’ for _R_everse.
void BiDirTracer::compute_reverse_eyepath_probs(void)
{
 erdpdf[0] = erspdf[0] = erhpdf[0] = 0.; // no reverse tracing at the eye
 if (eyepathlen == 0)
 return;
 class ScatteringNode* next = eyescatnodes[1];
 erhpdf[1] = 0.; // chance of hitting eye point is 0 for pinhole camera
 for (int i=1; i<eyepathlen; i++) {
 class ScatteringNode* cur = next;
 next = eyescatnodes[i+1];
 class Vec3 toprevdir(cur->indir);
 class Vec3 tonextdir(-next->indir);
 erspdf[i] = cur->unconstrained_survival_probability(tonextdir);
 cur->eval(tonextdir, toprevdir, 0, 0, &erdpdf[i], 0);

 cur->eval(tonextdir, 0, 0, 0, &erhpdf[i+1]);
 if (!next->at_infinity())
  erhpdf[i+1] /= cur->position.sqdistance(next->position);
 }
  erspdf[eyepathlen] = erdpdf[eyepathlen] = 1.; // not needed
}
// Same for the light path.
void BiDirTracer::compute_reverse_lightpath_probs(void)
{
 // no reverse tracing at the light source
  lrdpdf[0] = lrspdf[0] = lrhpdf[0] = 0.;
 if (lightpathlen == 0)
  return;
  class ScatteringNode* next = lightscatnodes[1];
  lightnode->eval(-next->indir, 0, 0, 0, &lrhpdf[1]);
  if (!lightnode->at_infinity() && !next->at_infinity())
  lrhpdf[1] /= lightnode->pos().sqdistance(next->position);
  for (int i=1; i<lightpathlen; i++) {
  class ScatteringNode* cur = next;
  next = lightscatnodes[i+1];
  class Vec3 toprevdir(cur->indir);
  class Vec3 tonextdir(-next->indir);
  lrspdf[i] = cur->unconstrained_survival_probability(tonextdir);

  cur->eval(tonextdir, toprevdir, 0, 0, &lrdpdf[i], 0);
  cur->eval(tonextdir, 0, 0, 0, &lrhpdf[i+1]);
  if (!next->at_infinity())
  lrhpdf[i+1] /= cur->position.sqdistance(next->position);
 }
  lrspdf[lightpathlen] = lrdpdf[lightpathlen] = 1.; // not needed
}

// #define WEIGHT(w) (w)  // balance heuristic
#define WEIGHT(w) (w*w)  // power 2 heuristic
// Computes weight associated with the combined eye sub path up to
// eyepath[e] and light sub path up to lightpath[l].
// Requires that e>=0 and l>=0. Weighting for e==-1 or l==-1
// (empty sub-path) is special because there is no connecting path
// segment (nor visibility test), see eyepath_on_light() and
// lightpath_on_camera().
float BiDirTracer::weight(int e, int l,
  const Vec3& ltoedir, float ltoepdf, float etolpdf)
{
  class PathNode* en = eyepath[e];
  class PathNode* ln = lightpath[l];

  // weight of “this” strategy is proportional to product of
  // the pdfs of sampling the connected eye and light sub paths.
  // If e<=0, we are dealing with pure light path tracing (see
  // join_lightpath_with_eye()), and an additional multiplication by
  // the total nr of light paths being traced is needed (= nr of pixels,
  // since we trace one light path per pixel).
  double lpdf = ln->pdf * (e<=0 ? nrparticles : 1.);
  double epdf = en->pdf;
  double thisw = WEIGHT(lpdf * epdf);

  // compute sum of weights associated with all possible combinations
  // of shorter/longer eye/light sub-paths leading to the same path between
  // lightpath[0] and eyepath[0].
  double sumw = thisw;  // sum of weights
  int i, j;

  // shorter eye sub-paths / longer light sub-paths
  i = e; j = l;
  lpdf = ln->pdf; // prolonged light sub-path pdf
  while (i>=0 && j<PathNode::max_light_path_depth) {
 double lxpdf = 0.;  // light path transition pdf
 if (j == l) {
  // transition probability for light path at lightpath[l]
  // going towards eyepath[e]. Probability is given as an
  // argument to this function.
  // i == e
  lxpdf = ltoepdf;
 }  else if (j == l+1) {
  // evaluate transition probability for light path arriving at
  // eyepath[e] from lightpath[l] and going towards eyepath[e-1].
  // i == e-1
  class ScatteringNode* escat = eyescatnodes[e];
  float spdf = j < PathNode::min_light_path_depth // survival pdf
  ? 1.
  : escat->unconstrained_survival_probability(-ltoedir);
  float dpdf; //direction selection pdf
   escat->eval(-ltoedir, escat->indir, 0, 0, &dpdf, 0);
   lxpdf = spdf * dpdf * erhpdf[e]; // third factor is cosine/dist^2
} else {
   // transition probability for light path at eyepath[i+1]
   // from eyepath[i+2] and going towards eyepath[i]. Use
   // precomputed probabilities for reverse eye path).
   // i<e-1
  float spdf = j < PathNode::min_light_path_depth
   ? 1.
   : erspdf[i+1];
  lxpdf = spdf * erdpdf[i+1] * erhpdf[i+1];
 }
  lpdf *= lxpdf;
  // The light sub-path now ends at eyepath[i]. Consider connection
  // with eye sub-path ending at eyepath[i-1].
  i--; j++;
  double w = (i>=0) ? eyepath[i]->pdf * lpdf : lpdf;
  if (i<=0) w *= nrparticles; // pure light path tracing case
  sumw += WEIGHT(w);
}

 // shorter light sub-paths / longer eye sub-paths
 i = e; j = l;
 epdf = en->pdf;  // prolonged eye sub-path pdf
  while (j>=0 && i<PathNode::max_eye_path_depth) {
    double expdf = 0.;  // eye path transition pdf
    if (i == e) {
      // transition probability for eye path at eyepath[e]
      // going towards lightpath[l]
      // j == l
      expdf = etolpdf;
    } else if (i == e+1) {
 // evaluate transition probability for eye path arriving at
 // lightpath[l] from eyepath[e] and going towards lightpath[l-1]
 // j == l-1
 class ScatteringNode* lscat = lightscatnodes[l];
 float spdf = i < PathNode::min_eye_path_depth
  ? 1.
  : lscat->unconstrained_survival_probability(ltoedir);
 float dpdf;
 lscat->eval(ltoedir, lscat->indir, 0, 0, &dpdf, 0);
 expdf = spdf * dpdf * lrhpdf[l];
} else {
  // transition probability for eye path at lightpath[j+1]
  // from lightpath[j+2] and going towards lightpath[j]. Use
  // precomputed probabilities for reverse light path.
  // j < l-1
  float spdf = i < PathNode::min_eye_path_depth
   ? 1.
   : lrspdf[j+1];
  expdf = spdf * lrdpdf[j+1] * lrhpdf[j+1];
}
 epdf *= expdf;
 // The eye sub-path now ends at lightpath[j]. Consider connection
 // with light sub-path ending at lightpath[j-1].
 j--; i++;
  double w = (j>=0) ? lightpath[j]->pdf * epdf : epdf;
  sumw += WEIGHT(w);
}
 return thisw / sumw;
}

// e==0 and l==0: join eye node with light source node
// (adds self-emitted radiance from a light source node to the
// image). This is handled by eyepath_on_light() for eye nodes
// of depth 1.
const Spectrum BiDirTracer::join_light_eye(void)
{
  return Spectrum(0.);
}

const EmissionNode* BiDirTracer::convert_to_lightnode(int e) {
  if (eyescatnodes[e] == &eyebkgnode) {
 // scattering node is background node
 // convert to background emission node
  eeb = BackgroundEmissionNode(eyebkgnode);
  return &eeb;
 } else {
  // scattering node is surface node
  // convert to surface emission node
  ees = SurfaceEmissionNode(eyesurfnodes[e]);
  return &ees;
 }
}

// e>0 && l==-1: eye path arriving on a light source (that is:
// we check for every surface hit, whether it is a light source
// or not and take its self-emitted radiance into the incident
// direction into account if it is a light source.)
const Spectrum BiDirTracer::eyepath_on_light(const int e)
{
  class ScatteringNode* es = eyescatnodes[e];
  if (!es->on_light_source()) {
  return Spectrum(0.);
 }

  if (e==1) {
 // this is the complementary strategy of join_light_eye(), but
 // join_light_eye() does nothing, so this strategy gets full weight.
  return es->source_radiance() * es->value / es->pdf;
}

// Convert the scattering node into a corresponding emission node
const EmissionNode* ee = convert_to_lightnode(e);
class Spectrum Le; // self-emitted radiance
float spdf, dpdf;  // light path survival and direction s. pdf
ee->eval(es->indir, &Le, &spdf, &dpdf, 0);

// Compute weight of this strategy
double thisw = WEIGHT(es->pdf);  // pdf of the eye path
// Compute sum of weights of all equivalent strategies. This is
// different from the other cases, because there is no connecting
// path segment here (for the same reason, there’s no
// additional visibility test for this strategy.)
double sumw = thisw;
int i=e, j=0;
double lpdf = ee->pdf; // pdf of the same position using emission sampling
while (i>=0 && j<PathNode::max_light_path_depth) {
 double lxpdf = 0.;
 if (j==0) {
 lxpdf = spdf * dpdf * erhpdf[e];
} else {
  double spdf = j<PathNode::min_light_path_depth
 ? 1.
 : erspdf[i];
 lxpdf = spdf * erdpdf[i] * erhpdf[i];
}
 i--; j++;
 double w = (i>=0) ? eyepath[i]->pdf * lpdf : lpdf;
 if (i<=0) w *= nrparticles;
 sumw += WEIGHT(w);
 lpdf *= lxpdf;
}

  return Le * es->value * (thisw / (es->pdf * sumw));
}

// e>0, l==0: join eye path vertex e>0 with light source node
// = standard path tracing
const Spectrum BiDirTracer::join_eyepath_with_light(const int e)
{
  if (eyescatnodes[e] == &eyebkgnode ||
  !visible(eyescatnodes[e], lightnode))
  return Spectrum(0.);

  class SurfaceNode *en = &eyesurfnodes[e];
  class EmissionNode *ln = lightnode;
  float ecos, lcos, espdf, lspdf, edpdf, ldpdf, dist;
  class Spectrum efr, Le;
  const Vec3 ltoedir = ln->dirto(en, &dist);
  en->eval(-ltoedir, &efr, &espdf, &edpdf, &ecos);
  ln->eval(ltoedir, &Le, &lspdf, &ldpdf, &lcos);
  double invdist2 = 1. / (dist * dist);
  float etolpdf = espdf * edpdf * lcos * invdist2;
  float ltoepdf = lspdf * ldpdf * ecos * invdist2;
  double G = ecos * lcos * invdist2;
  float w = weight(e, 0, ltoedir, ltoepdf, etolpdf);
  return en->value * efr * Le * (G / (en->pdf * ln->pdf) * w);
}

// e==-1, l>0: corresponds with a light path node arriving on the
// surface of the camera. Since we are using a pinhole camera,
// this can not happen.
const Spectrum BiDirTracer::lightpath_on_camera(const int l)
{
  return Spectrum(0.);
}

// e==0, l>0: Join light path vertex with eye node
// = standard light particle tracing
// Score contributes to different pixel than the one through
// which the eye path was traced. Therefore we add the score
// directly to the screen buffer and we return a null spectrum here.
const Spectrum BiDirTracer::join_lightpath_with_eye(const int l)
{
  if (lightscatnodes[l] == &lightbkgnode)
  return Spectrum(0.);
  // find pixel through which the light path node is visible.
  class SurfaceNode* ln = &lightsurfnodes[l];
  double dist;
  class Vec3 ltoedir = ln->position.dirto(scrn->eye, &dist);

  float i, j;
  if (!scrn->getPixelCoord(-ltoedir, &i, &j) ||
  !visible(&eyenode, ln))
 return Spectrum(0.);

  class EyeNode e(i, j);  // EyeNode for pixel
  class Spectrum We;   // pixel measurement value
  float espdf, edpdf, ecos, lcos; // path survival/dir.sel. pdf and cos.
  e.eval(-ltoedir, &We, &espdf, &edpdf, &ecos);
  class Spectrum score = ln->scatter(ltoedir, &lcos);
  float invdist2 = 1./(dist*dist); // inverse square distance
  score *= We * (ecos * lcos * invdist2);
  float etolpdf = espdf * edpdf * lcos * invdist2;
  float ltoepdf = 0.; // no chance of hitting eye point (pinhole cam)
  float w = weight(0, l, ltoedir, ltoepdf, etolpdf);

  scrn->addPixel(i, j, score * (w / nrparticles));
  return Spectrum(0.);
}

// e>0, l>0: join eye and light sub-path at intermediate nodes
const Spectrum BiDirTracer::join_intermediate(const int e, const int l)
{
  if (eyescatnodes[e] == &eyebkgnode ||
  lightscatnodes[l] == &lightbkgnode ||
  !visible(eyescatnodes[e], lightscatnodes[l]))
 return Spectrum(0.);

  class SurfaceNode *en = &eyesurfnodes[e];  // eye sub-path end
  class SurfaceNode *ln = &lightsurfnodes[l];  // light sub-path end
  double dist;  //dist. and dir. between en/ln
  class Vec3 ltoedir = (en->position - ln->position).normalized(&dist);
  float ecos, lcos, espdf, lspdf, edpdf, ldpdf;// cos., surv,pdf, dir.sel.pdf
  class Spectrum efr, lfr; // BSDF at eye/light node
  en->eval(en->indir, -ltoedir, &efr, &espdf, &edpdf, &ecos);
  ln->eval(ln->indir, ltoedir, &lfr, &lspdf, &ldpdf, &lcos);
  float invdist2 = 1. / (dist * dist); // inverse square distance
  float G = ecos * lcos * invdist2; // geometric factor
  float etolpdf = espdf * edpdf * lcos * invdist2; // transition pdf en->ln
  float ltoepdf = lspdf * ldpdf * ecos * invdist2; // transition pdf ln->en

  float w = weight(e, l, ltoedir, ltoepdf, etolpdf);
  return en->value * efr * lfr * ln->value * (G / (en->pdf * ln->pdf) * w);
}

// joins the eye sub-path vertex of depth e with light sub-path
// vertex of depth l. e or l equal to -1 means empty sub-path.
const Spectrum BiDirTracer::joinat(const int e, const int l)
{
  class Spectrum score(0.);
  if (e==0 && l==0)
  score = join_light_eye();
  else if (e<=0 && l<=0)  // eye point on light or light node on camera
  score = Spectrum(0.);  // or both sub-paths empty: can’t happen
  else if (e==-1)
  score = lightpath_on_camera(l);
  else if (l==-1)
  score = eyepath_on_light(e);
  else if (e==0)
  score = join_lightpath_with_eye(l);
  else if (l==0)
  score = join_eyepath_with_light(e);
  else
  score = join_intermediate(e, l);
  return score;
}

const Spectrum BiDirTracer::join(void)
{
  // pre-calculate probabilities of sampling the eye and light path
  // in reverse direction.
  compute_reverse_eyepath_probs();
  compute_reverse_lightpath_probs();

  class Spectrum score(0.);

  // t is total combined path length: length of the eye sub-path +
// length of the light sub-path + 1 for the connecting segment.
// A length of ‘-1’ indicates an empty path (no nodes)
  for (int t=1; t<=eyepathlen+lightpathlen+1
         && t<=max_combined_path_length; t++) {
    for (int e=-1; e<=eyepathlen; e++) { // e is eye sub-path length
      int l=t-e-1;  // l is light sub-path length
      if (l>=-1 && l<=lightpathlen)
        score += joinat(e, l);
    }
  }

  return score;
}

const Spectrum BiDirTracer::tracepixel(const int i, const int j)
{
  // trace eye path
  eyenode = EyeNode(i,j);
  eyepathlen = trace_eye_path(&eyenode);

  // sample light sources and trace light path
  lightnode = 0;
  while (!lightnode) lightnode = lightsampler->sample(&senode, &benode);
  lightpathlen = trace_light_path(lightnode);

  // join eye and light paths
  return join();
}
 
// pre-calculates constants and allocates memory for arrays needed
// for rendering a frame.
void BiDirTracer::setup(void)
{
  PathNode::min_light_path_depth = min_light_path_length;
  PathNode::max_light_path_depth = max_light_path_length;
  PathNode::min_eye_path_depth = min_eye_path_length;
  PathNode::max_eye_path_depth = max_eye_path_length;

  eyesurfnodes = new class SurfaceNode [PathNode::max_eye_path_depth+1];
  lightsurfnodes = new class SurfaceNode [PathNode::max_light_path_depth+1];
  eyescatnodes = new class ScatteringNode* [PathNode::max_eye_path_depth+1];
  lightscatnodes = new class ScatteringNode* [PathNode::max_light_path_depth+1];
  lightpath = new class PathNode* [PathNode::max_light_path_depth+1];
  eyepath = new class PathNode* [PathNode::max_eye_path_depth+1];
  erdpdf = new float [PathNode::max_eye_path_depth+1];
  lrdpdf = new float [PathNode::max_light_path_depth+1];
  erspdf = new float [PathNode::max_eye_path_depth+1];
  lrspdf = new float [PathNode::max_light_path_depth+1];
  erhpdf = new float [PathNode::max_eye_path_depth+1];
  lrhpdf = new float [PathNode::max_light_path_depth+1];

  int npixx = scrn->width;
  int npixy = scrn->height;
  nrparticles = npixx * npixy;
}

// undoes the effects of setup().
void BiDirTracer::cleanup(void)
{
  delete [] eyesurfnodes;
  delete [] lightsurfnodes;
  delete [] eyescatnodes;
  delete [] lightscatnodes;
  delete [] lightpath;
  delete [] eyepath;
  delete [] erdpdf;
  delete [] lrdpdf;
  delete [] erspdf;
  delete [] lrspdf;
  delete [] erhpdf;
  delete [] lrhpdf;
}

void BiDirTracer::trace(void)
{
  setup();
  for (int j=0; j<scrn->height; j++) {
    for (int i=0; i<scrn->width; i++) {
      scrn->addPixel(i, j, tracepixel(i, j));
    }
  }
  cleanup();
}