Building a deferred rendering pipeline

Although modern graphics cards can push millions of polygons per frame, their lighting capabilities are quite limited under the traditional forward rendering approach, where every combination of light and scene object has to be evaluated to produce the final scene lighting. Some engines work around this issue by capping the number of lights allowed to affect the scene, typically choosing the nearest ones.

But what if we wanted hundreds of lights in a scene? How could we achieve that? In this recipe, we will solve this problem by building a deferred rendering pipeline whose cost is bounded by the number of pixels our hardware is able to push, not by the number of lights in our scene.

Getting ready

Create your project folders according to Setting up the game structure, add a directory called shaders, and make sure it is in the engine's search path. Once that is done, you're ready to go on.
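If the search path is not set up yet, one simple way to extend it (assuming the shaders directory sits next to your main script; adjust the path if yours differs) is a loadPrcFileData() call like this:

    from panda3d.core import loadPrcFileData
    # $MAIN_DIR expands to the directory that contains the main script.
    loadPrcFileData('', 'model-path $MAIN_DIR/shaders')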

How to do it...

Complete the following tasks to get your deferred rendering pipeline going:

  1. Open Application.py and add the following code:
    from direct.showbase.ShowBase import ShowBase
    from direct.actor.Actor import Actor
    from panda3d.core import *
    from direct.filter.FilterManager import *
    import random

    loadPrcFileData('', 'show-buffers 1')

    class Application(ShowBase):
        def __init__(self):
            ShowBase.__init__(self)
            self.setupScene()
            self.setupLight()
            self.setupCams()
            self.setupPostFx()

        def setupScene(self):
            self.scene = render.attachNewNode("scene")
            self.panda = Actor("panda", {"walk": "panda-walk"})
            self.panda.reparentTo(self.scene)
            self.panda.loop("walk")
            self.world = loader.loadModel("environment")
            self.world.reparentTo(self.scene)
            self.world.setScale(0.5)
            self.world.setPos(-8, 80, 0)
            self.scene.setShaderAuto()
    
  2. Add the following method to the Application class:
    def setupCams(self):
        self.lightCam = self.makeCamera(self.win)
        self.lightCam.reparentTo(self.cam)
        sceneMask = BitMask32(1)
        lightMask = BitMask32(2)
        self.cam.node().setCameraMask(sceneMask)
        self.lightCam.node().setCameraMask(lightMask)
        self.lights.hide(sceneMask)
        self.ambient.hide(sceneMask)
        self.scene.hide(lightMask)
        self.cam.node().getDisplayRegion(0).setSort(1)
        self.lightCam.node().getDisplayRegion(0).setSort(2)
        self.win.setSort(3)
        self.lightCam.node().getDisplayRegion(0).setClearColor(Vec4(0, 0, 0, 1))
        self.lightCam.node().getDisplayRegion(0).setClearColorActive(1)
        self.cam.setPos(0, -40, 6)
    
  3. Add another method to the class Application:
    def setupLight(self):
        self.lights = render.attachNewNode("lights")
        self.sphere = loader.loadModel("misc/sphere")
        for i in range(400):
            light = self.lights.attachNewNode("light")
            light.setPos(random.uniform(-15, 15), random.uniform(-5, 50),
                         random.uniform(0, 15))
            light.setColor(random.random(), random.random(), random.random())
            light.setScale(5)
            self.sphere.instanceTo(light)
            vlight = self.scene.attachNewNode("vlight")
            vlight.setPos(light.getPos())
            vlight.setColor(light.getColor())
            vlight.setScale(0.1)
            self.sphere.instanceTo(vlight)
        cm = CardMaker("ambient")
        cm.setFrame(-100, 100, -100, 100)
        self.ambient = render.attachNewNode("ambient")
        self.ambient.attachNewNode(cm.generate())
        self.ambient.setColor(0.1, 0.1, 0.1, 1)
        self.ambient.reparentTo(self.cam)
        self.ambient.setPos(0, 5, 0)
    
  4. The following setupPostFx() method is the last one you have to add to the Application class:
    def setupPostFx(self):
        self.gbufMan = FilterManager(self.win, self.cam)
        self.lightMan = FilterManager(self.win, self.lightCam)
        albedo = Texture()
        depth = Texture()
        normal = Texture()
        final = Texture()
        self.gbufMan.renderSceneInto(colortex=albedo, depthtex=depth,
                                     auxtex=normal,
                                     auxbits=AuxBitplaneAttrib.ABOAuxNormal)
        lightQuad = self.lightMan.renderSceneInto(colortex=final)
        lightQuad.setShader(loader.loadShader("pass.cg"))
        lightQuad.setShaderInput("color", final)
        self.ambient.setShader(loader.loadShader("ambient.cg"))
        self.ambient.setShaderInput("albedo", albedo)
        self.ambient.setAttrib(ColorBlendAttrib.make(ColorBlendAttrib.MAdd,
            ColorBlendAttrib.OOne, ColorBlendAttrib.OOne))
        self.ambient.setAttrib(DepthWriteAttrib.make(DepthWriteAttrib.MOff))
        self.lights.setShader(loader.loadShader("light.cg"))
        self.lights.setShaderInput("albedo", albedo)
        self.lights.setShaderInput("depth", depth)
        self.lights.setShaderInput("normal", normal)
        self.lights.setAttrib(ColorBlendAttrib.make(ColorBlendAttrib.MAdd,
            ColorBlendAttrib.OOne, ColorBlendAttrib.OOne))
        self.lights.setAttrib(CullFaceAttrib.make(
            CullFaceAttrib.MCullCounterClockwise))
        self.lights.setAttrib(DepthWriteAttrib.make(DepthWriteAttrib.MOff))
    
  5. Go to the shaders subdirectory of the project and add three new files called ambient.cg, light.cg, and pass.cg.
  6. Open ambient.cg in an editor and add the following code:
    //Cg
    void vshader(float4 vtx_position : POSITION,
                 out float4 l_position : POSITION,
                 out float4 l_screenpos : TEXCOORD0,
                 uniform float4x4 mat_modelproj)
    {
        l_position = mul(mat_modelproj, vtx_position);
        l_screenpos = l_position;
    }

    void fshader(float4 l_screenpos : TEXCOORD0,
                 uniform sampler2D k_albedo : TEXUNIT0,
                 uniform float4 texpad_albedo,
                 uniform float4 attr_color,
                 out float4 o_color : COLOR)
    {
        l_screenpos.xy /= l_screenpos.w;
        float2 texcoords = float2(l_screenpos.xy) * texpad_albedo.xy + texpad_albedo.xy;
        float4 albedo = tex2D(k_albedo, texcoords);
        o_color = albedo * attr_color;
    }
    
  7. Add the following shader code to light.cg:
    //Cg
    void vshader(float4 vtx_position : POSITION,
                 out float4 l_position : POSITION,
                 out float4 l_screenpos : TEXCOORD0,
                 uniform float4x4 mat_modelproj)
    {
        l_position = mul(mat_modelproj, vtx_position);
        l_screenpos = l_position;
    }

    void fshader(float4 l_screenpos : TEXCOORD0,
                 uniform sampler2D k_albedo : TEXUNIT0,
                 uniform sampler2D k_depth : TEXUNIT1,
                 uniform sampler2D k_normal : TEXUNIT2,
                 uniform float4 texpad_albedo,
                 uniform float4 attr_color,
                 uniform float4 vspos_model,
                 uniform float4x4 vstrans_clip,
                 uniform float4 row0_model_to_view,
                 out float4 o_color : COLOR)
    {
        // Map the clip space position to G-buffer texture coordinates.
        l_screenpos.xy /= l_screenpos.w;
        float2 texcoords = float2(l_screenpos.xy) * texpad_albedo.xy + texpad_albedo.xy;
        float4 albedo = tex2D(k_albedo, texcoords);
        float4 normal = tex2D(k_normal, texcoords);
        float depth = tex2D(k_depth, texcoords).x;
        // Reconstruct the pixel's view space position from its
        // screen position and the stored depth value.
        float4 vspos_scene;
        vspos_scene.xy = l_screenpos.xy;
        vspos_scene.z = depth;
        vspos_scene.w = 1;
        vspos_scene = mul(vstrans_clip, vspos_scene);
        vspos_scene /= vspos_scene.w * 2;
        // Distance-based attenuation and Lambertian shading.
        float3 vec = vspos_model.xyz - vspos_scene.xyz;
        float len = length(vec);
        float3 dir = vec / len;
        float atten = saturate(1.0 - (len / row0_model_to_view.x));
        float intensity = pow(atten, 2) * dot(dir, normal.xyz);
        o_color = float4(albedo.xyz * attr_color.xyz * intensity, 1);
    }
    
  8. Open and edit pass.cg so it contains this piece of code:
    //Cg
    void vshader(float4 vtx_position : POSITION,
                 out float4 l_position : POSITION,
                 out float2 l_texcoord : TEXCOORD0,
                 uniform float4 texpad_color,
                 uniform float4x4 mat_modelproj)
    {
        l_position = mul(mat_modelproj, vtx_position);
        l_texcoord = (vtx_position.xz * texpad_color.xy) + texpad_color.xy;
    }

    void fshader(float2 l_texcoord : TEXCOORD0,
                 uniform sampler2D k_color : TEXUNIT0,
                 out float4 o_color : COLOR0)
    {
        o_color = tex2D(k_color, l_texcoord);
    }
    
  9. Press the F6 key to launch the program you just created.

How it works...

The basic idea behind deferred rendering is very simple. In a first step, the unlit scene colors, the surface normals, and the depth buffer are stored into textures. Then, the bounding volume of each light is rendered using a special shader that samples the color, depth, and normal data at the current pixel and projects the screen position back into the scene to obtain the pixel's position relative to the light. Depending on this distance and the normal at that position, the pixel is lit or left dark.

This technique has the advantage that its performance depends only on how many pixels in the scene are actually lit. The downsides are that it consumes a lot of video memory and that it shifts the performance burden almost entirely onto the graphics processor.

After this high-level view of the technique, let's take a closer look at the parts this code sample is made of!

After filling our scene with the panda and the jungle background and instructing the engine to show the content of our buffers with the line loadPrcFileData('', 'show-buffers 1'), we go on to set up the lights and cameras.
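If you prefer to toggle the buffer overlay at runtime rather than through the PRC variable, Panda3D's built-in buffer viewer can also be bound to a key. A minimal sketch (the key choice is arbitrary) inside __init__ might look like this:

    # Alternative to 'show-buffers 1': toggle the buffer viewer at runtime.
    self.accept("v", self.bufferViewer.toggleEnable)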

In the setupLight() method, we create a new node that will act as the parent for all the point lights in our scene, before adding the four hundred light volumes and the tiny dots that visualize the lights' center points. We also create an ambient light, which does not have a real light volume. In fact, its volume would be infinitely big, and as that wouldn't be practical to implement, it is represented by a huge quad placed in front of the camera.

Our camera setup is quite elaborate for a sample of this size, but unfortunately it is necessary. We add a new camera and reparent it to the default one, so both always see the same scene. Then we create a bit mask for each camera, which we use to hide the point and ambient lights from the default camera. The lightCam, in turn, will only record the objects that act as light volumes.

In the following lines, we define the order in which the cameras record the scene. This is very important, because the unlit scene has to be rendered before the lights are composited into the image. We also set the clear color of the lightCam to black, so unlit parts of the scene stay dark.

This leaves us with the buffer and shader setup in the setupPostFx() method.

We are using two instances of FilterManager, each attached to one of our cameras. The gbufMan instance is attached to the main camera to record the scene color, normals, and depth, the so-called geometry buffer, or G-buffer for short. With lightMan, we record the final image composition, which we then render onto lightQuad using a pass-through shader to present the scene on the screen.

The lights blend additively, so a spot appears brighter the more lights affect it, and they do not write to the depth buffer. With these render states in place, we can take a look at what's going on inside the light shaders.

The ambient light shader is really simple. It reconstructs the proper texture coordinates from the current screen coordinates, samples the albedo texture, and multiplies the result with the ambient quad's color.

Looking deeper into the inner workings of light.cg, we find that the situation isn't so trivial anymore. First, we must find the current pixel's position on the screen to determine the proper texture coordinates for the color, normal, and depth textures. Then, the view space position of the pixel is restored from clip space using the matrix vstrans_clip, which is provided by Panda3D.

After the view space position of the current pixel is restored in vspos_scene, we can calculate the distance to the light currently being rendered by subtracting it from vspos_model, which holds the view space position of the light volume model that is being drawn.
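The underlying math is the standard clip-to-view reconstruction: multiply the clip space position by the inverse projection matrix and divide by the resulting w component. A rough numpy equivalent (illustrative names only, and without the extra scale factor the recipe's shader folds in) might look like this:

    import numpy as np

    def clip_to_view(x, y, depth, proj):
        # x, y: the pixel's position in normalized screen coordinates;
        # depth: the value sampled from the depth texture;
        # proj: the camera's 4x4 view-to-clip projection matrix.
        clip = np.array([x, y, depth, 1.0])
        view = np.linalg.inv(proj) @ clip  # plays the role of vstrans_clip
        return view / view[3]              # undo the perspective divide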

Using the distance from the current light's center and the direction to the pixel in question, we can determine whether that point actually lies within the boundaries of the light volume. This is done by dividing len by row0_model_to_view.x and subtracting the result from one to get the amount of distance-based attenuation; the latter variable stores the light volume's scale factor, which is also its radius. For example, a pixel 2.5 units away from a light with a radius of 5 gets an attenuation of 1 - 2.5 / 5 = 0.5, which the shader then squares to 0.25.

Finally, we determine the pixel's intensity using the famous Lambertian term (the dot product of the surface normal and the light direction) and the amount of attenuation. This is multiplied by the vertex color attribute holding the light's color and by the albedo color sampled from the unlit scene buffer texture.
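To make the math concrete, here is the same per-pixel computation written out in plain Python. This is purely illustrative; none of these names appear in the recipe's code:

    import math

    def light_contribution(albedo, light_color, light_pos, pixel_pos,
                           normal, radius):
        # Vector from the shaded pixel towards the light's center.
        vec = [light_pos[i] - pixel_pos[i] for i in range(3)]
        dist = math.sqrt(sum(c * c for c in vec))
        direction = [c / dist for c in vec]
        # Distance-based attenuation, clamped to [0, 1] like saturate() in Cg.
        atten = min(max(1.0 - dist / radius, 0.0), 1.0)
        # Lambertian term: cosine of the angle between the surface normal
        # and the direction towards the light.
        lambert = sum(direction[i] * normal[i] for i in range(3))
        intensity = atten ** 2 * lambert
        return [albedo[i] * light_color[i] * intensity for i in range(3)]

For example, light_contribution((1, 1, 1), (1, 0, 0), (0, 0, 5), (0, 0, 0), (0, 0, 1), 10) returns [0.25, 0.0, 0.0]: a white surface facing a red light at half the light's radius receives a quarter of its intensity.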

There's more...

This is a very basic deferred rendering setup that only supports ambient and point lights. Building on this sample, try adding directional lights, specular highlights, and shadows. A setup like this one opens up many possibilities for creating interesting effects!
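For instance, a directional light could be handled much like the ambient light: a full-screen quad whose shader reads the G-buffer normals and applies the Lambertian term for a fixed direction. The following sketch only illustrates the idea; the shader file directional.cg and everything related to it are hypothetical and would still have to be written:

    # Hypothetical: add a directional light as another full-screen quad.
    cm = CardMaker("directional")
    cm.setFrame(-100, 100, -100, 100)
    self.directional = render.attachNewNode("directional")
    self.directional.attachNewNode(cm.generate())
    self.directional.reparentTo(self.cam)
    self.directional.setPos(0, 5, 0)
    self.directional.hide(sceneMask)  # visible to the light camera only
    # directional.cg would sample the normal texture and output
    # albedo * light_color * dot(normal, -light_direction).
    self.directional.setShader(loader.loadShader("directional.cg"))
    self.directional.setShaderInput("albedo", albedo)
    self.directional.setShaderInput("normal", normal)
    self.directional.setShaderInput("direction", Vec4(0, -1, -1, 0))
    self.directional.setAttrib(ColorBlendAttrib.make(ColorBlendAttrib.MAdd,
        ColorBlendAttrib.OOne, ColorBlendAttrib.OOne))
    self.directional.setAttrib(DepthWriteAttrib.make(DepthWriteAttrib.MOff))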
