2. Modeling, Lighting, and Rendering Techniques for Volumetric Clouds
Unfortunately, adding high-frequency noise to the cloud impacts performance. We restrict ourselves to two octaves of noise because filtered 3D texture lookups are expensive, and high-frequency noise requires us to sample at a rate of at least double the noise frequency to avoid aliasing artifacts. This cost may be mitigated by fading out the higher octaves of noise with distance from the camera, and by adaptively sampling the volume so that samples near the camera are taken more frequently than distant ones. The expense of adding procedural detail to ray-casted clouds is the primary drawback of this technique compared to splatting, where the noise is simply baked into the 2D billboards used to represent each voxel.
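These two mitigations can be sketched as plain functions. The names, fade distances, and growth factor below are illustrative assumptions, not values from the chapter's listings; in practice this logic would live in the ray-marching shader.

```c
#include <assert.h>

/* Hypothetical fade factor for a noise octave based on distance from the
   camera. Octave 0 is the base octave; each higher octave begins fading
   at half the distance of the previous one, reaching zero at `fadeEnd`
   (similarly halved). */
float octave_fade(int octave, float distance, float fadeStart, float fadeEnd)
{
    float start = fadeStart / (float)(1 << octave);
    float end   = fadeEnd   / (float)(1 << octave);
    if (distance <= start) return 1.0f;
    if (distance >= end)   return 0.0f;
    return 1.0f - (distance - start) / (end - start);
}

/* Adaptive step size: march with small steps near the camera and
   progressively larger ones far away, clamped to a maximum step so
   distant samples are not skipped entirely. */
float step_size(float distance, float baseStep, float growthPerUnit, float maxStep)
{
    float step = baseStep * (1.0f + growthPerUnit * distance);
    return step < maxStep ? step : maxStep;
}
```

Multiplying each octave's contribution by its fade factor lets the sampling rate drop with distance without visible popping, since the high-frequency detail is already gone by the time the step size grows.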
Intersections between the scene's geometry and the cloud volume need to be handled explicitly with GPU ray casting; you cannot rely on the depth buffer, because the only geometry rendered for the clouds is the volume's bounding box. While this problem is best avoided whenever possible, it may be handled by rendering the depth buffer to a texture and reading from that texture as each sample along the viewing ray is computed. If the projected sample lies behind the depth buffer's value, it is discarded.
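A C sketch of that per-sample test follows. The structure and names are hypothetical (the chapter's actual listings are shader code), and it assumes OpenGL-style clip-space conventions with a row-major view-projection matrix:

```c
#include <assert.h>
#include <stdbool.h>

typedef struct { float m[16]; } Mat4;  /* row-major 4x4 matrix */

/* Project a world-space ray-march sample with the camera's
   view-projection matrix, compute its normalized device depth, and keep
   it only if it lies in front of the scene depth already stored at that
   pixel. `sceneDepth` is the value read from the depth texture at the
   sample's projected screen position, in the 0..1 range. */
bool sample_visible(const Mat4 *viewProj,
                    float x, float y, float z,   /* world-space sample */
                    float sceneDepth)
{
    const float *m = viewProj->m;
    float clipZ = m[8]  * x + m[9]  * y + m[10] * z + m[11];
    float clipW = m[12] * x + m[13] * y + m[14] * z + m[15];
    if (clipW <= 0.0f)
        return false;                  /* behind the camera */
    float ndcZ  = clipZ / clipW;       /* -1..1 in OpenGL conventions */
    float depth = 0.5f * ndcZ + 0.5f;  /* remap to the 0..1 depth range */
    return depth < sceneDepth;         /* discard if behind scene geometry */
}
```

In a shader, the equivalent test would simply `discard` or skip accumulation for occluded samples rather than return a flag.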
Viewpoints inside the volume also need special attention. The intersection
code in Listing 2.3 handles this case properly, but you will need to detect when
the camera is inside the volume, and flip the faces of the bounding box being
rendered to represent it.
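Detecting this case is a simple containment test against the volume's axis-aligned bounding box; the names below are illustrative:

```c
#include <assert.h>
#include <stdbool.h>

typedef struct { float min[3], max[3]; } Aabb;

/* Returns true when the camera position lies inside the cloud volume's
   bounding box. In that case the box's front faces are behind the
   viewer, so the back faces must be rendered instead to generate
   fragments for the ray caster. */
bool camera_inside(const Aabb *box, const float cam[3])
{
    for (int i = 0; i < 3; ++i)
        if (cam[i] < box->min[i] || cam[i] > box->max[i])
            return false;
    return true;
}
```

When the test succeeds, flipping the faces amounts to reversing the culling direction for the bounding-box draw call, e.g. culling front faces instead of back faces.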
Three-dimensional textures consume a lot of memory on the graphics card. A 256 × 256 × 32 array of voxels, each represented by a single byte, consumes two megabytes of memory. While VRAM consumption was a show-stopper on early
3D cards, it’s easily handled on modern hardware. However, addressing that
much memory at once can still be slow. Swizzling the volume texture by
breaking it up into adjacent, smaller bricks can help with cache locality as the
volume is sampled, making texture lookups faster.
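As an illustration of the bricking idea (the layout below is an assumption, not the chapter's actual scheme), a voxel's linear address can be computed by first locating its brick and then its offset within that brick, so that voxels in the same brick are contiguous in memory:

```c
#include <assert.h>
#include <stddef.h>

enum { B = 8 };  /* brick edge length, in voxels (illustrative choice) */

/* Linear index for a volume stored as adjacent B x B x B bricks rather
   than one flat slab. Nearby samples tend to fall in the same brick and
   therefore share cache lines. Assumes the volume dimensions are
   multiples of B. */
size_t brick_index(size_t x, size_t y, size_t z,
                   size_t dimX, size_t dimY)
{
    size_t bricksX = dimX / B, bricksY = dimY / B;
    size_t bx = x / B, by = y / B, bz = z / B;   /* which brick */
    size_t ox = x % B, oy = y % B, oz = z % B;   /* offset inside it */
    size_t brick  = (bz * bricksY + by) * bricksX + bx;
    size_t offset = (oz * B + oy) * B + ox;
    return brick * (size_t)(B * B * B) + offset;
}
```

With a flat layout, stepping one voxel in z jumps 256 × 256 bytes; with bricks, the same step usually stays within the same 512-byte brick.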
Fog and blending of the cloud volume are omitted in Listing 2.4 but should be handled for more realistic results.
Hybrid Approaches
Although GPU ray casting of clouds can be performant on modern graphics
hardware, the per-fragment lighting calculations are still expensive. A large
number of computations and texture lookups may be avoided by actually
performing the lighting on the CPU and storing the results in the colors of the
voxels themselves, to be used at rendering time. Recomputing the lighting in this manner causes a hitch in frame rate whenever lighting conditions change, but it makes rendering the cloud extremely fast under static lighting conditions. By