Chapter 10. Fragment Shaders

In Chapter 9, “Texturing,” we introduced you to the basics of creating and applying textures in the fragment shader. In this chapter, we provide more details on the fragment shader and describe some of its uses. In particular, we focus on how to implement fixed function techniques using the fragment shader. The topics we cover in this chapter include:

  • Fixed function fragment shaders.

  • Fragment shader overview.

  • Multitexturing.

  • Fog.

  • Alpha test.

  • User clip planes.

Returning to our pipeline diagram in Figure 10-1, we have covered the vertex shader, primitive assembly, and rasterization stages of the pipeline. We have talked about using textures in the fragment shader. Now, we focus on the fragment shader portion of the pipeline and fill in the remaining details on writing fragment shaders.

Figure 10-1. OpenGL ES 2.0 Programmable Pipeline

Fixed Function Fragment Shaders

If you are new to the programmable fragment pipeline but have worked with a previous version of OpenGL ES (or desktop OpenGL), you are probably familiar with the fixed function fragment pipeline. Before diving into details of the fragment shader, we think it is worthwhile to briefly review the old fixed function fragment pipeline. This will give you an understanding of how the old fixed function pipeline maps into fragment shaders. It’s a good way to start before moving into more advanced fragment programming techniques.

In OpenGL ES 1.1 (and fixed function desktop OpenGL), you had a very limited set of equations that could be used to determine how to combine the various inputs to the fragment stage. In the fixed function pipeline, you essentially had three inputs you could use: the interpolated vertex color, the texture color, and the constant color. The vertex color would typically hold either a precomputed color or the result of the vertex lighting computation. The texture color was fetched from whichever texture was bound to the unit, using the primitive’s texture coordinates, and the constant color could be set for each texture unit.

The set of equations you could use to combine these inputs together was quite limited. For example, in OpenGL ES 1.1 the equations listed in Table 10-1 were available.

Table 10-1. OpenGL ES 1.1 RGB Combine Functions

RGB Combine Function        Equation
REPLACE                     A
MODULATE                    A × B
ADD                         A + B
ADD_SIGNED                  A + B – 0.5
INTERPOLATE                 A × C + B × (1 – C)
SUBTRACT                    A – B
DOT3_RGB (and DOT3_RGBA)    4 × ((A.r – 0.5) × (B.r – 0.5) +
                                 (A.g – 0.5) × (B.g – 0.5) +
                                 (A.b – 0.5) × (B.b – 0.5))

The inputs A, B, and C to these equations could come from the vertex color, texture color, or constant color. A great number of interesting effects could actually be achieved, even with this limited set of equations. However, this was far from programmable, as the fragment pipeline could only be configured in a very fixed set of ways.

So why are we reviewing this here? It helps give an understanding of how traditional fixed function techniques can be achieved with shaders. For example, let’s say we had configured the fixed function pipeline with a single base texture map that we wanted to modulate by the vertex color. In fixed function OpenGL ES (or OpenGL), we would enable a single texture unit, choose a combine equation of MODULATE, and set up the inputs to the equation to come from the vertex color and texture color. The code to do this in OpenGL ES 1.1 is provided here for reference.

glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_PRIMARY_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_ALPHA, GL_PRIMARY_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_ALPHA, GL_TEXTURE);

This code configures the fixed function pipeline to perform a modulate (A × B) between the primary color (the vertex color) and the texture color. If this code doesn’t make sense to you, don’t worry, as none of this exists in OpenGL ES 2.0. The point we are trying to make here is to show how this would map to a fragment shader. In a fragment shader, this same computation could be accomplished as follows.

precision mediump float;
uniform sampler2D s_tex0;
varying vec2 v_texCoord;
varying vec4 v_primaryColor;
void main()
{
   gl_FragColor = texture2D(s_tex0, v_texCoord) * v_primaryColor;
}

The fragment shader performs the exact same operations that would be performed by the fixed function setup. The texture value is fetched from a sampler (that is bound to texture unit 0) and a 2D texture coordinate is used to look up that value. Then, the result of that texture fetch is multiplied by v_primaryColor, a varying value that is passed in from the vertex shader. In this case, the vertex shader would have passed the color through to the fragment shader.
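For completeness, a minimal pass-through vertex shader that could feed this fragment shader might look like the following sketch. The attribute and uniform names (a_position, a_color, a_texCoord, u_mvpMatrix) are our own assumptions, as the original example does not show a vertex shader.

uniform mat4 u_mvpMatrix;    // hypothetical model-view-projection matrix

attribute vec4 a_position;   // hypothetical attribute names
attribute vec4 a_color;
attribute vec2 a_texCoord;

varying vec2 v_texCoord;
varying vec4 v_primaryColor;

void main()
{
   gl_Position    = u_mvpMatrix * a_position;
   // Pass the texture coordinate and color through to the fragment shader
   v_texCoord     = a_texCoord;
   v_primaryColor = a_color;
}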

It is possible to write a fragment shader that performs the equivalent computation of any possible fixed function texture combine setup. It is also, of course, possible to write shaders with much more complex and varied computations than fixed function alone would allow. However, the goal of this section was simply to drive home how we have transitioned from fixed function to programmable shaders. Now, we begin to look at some specifics of fragment shaders.

Fragment Shader Overview

The fragment shader provides a general-purpose programmable method for operating on fragments. The inputs to the fragment shader consist of the following:

  • Varyings—Interpolated data produced by the vertex shader.

  • Uniforms—State used by the fragment shader.

  • Textures—Texture images accessed through samplers.

  • Code—Fragment shader source or binary that describes the operations that will be performed on the fragment.

The output of the fragment shader is the fragment color that gets passed on to the per-fragment operations portion of the pipeline. The inputs and outputs to the fragment shader are illustrated in Figure 10-2.

Figure 10-2. OpenGL ES 2.0 Fragment Shader

Built-In Special Variables

OpenGL ES 2.0 has built-in special variables that are output by the fragment shader or are input to the fragment shader. The built-in special variables available to the fragment shader are the following:

  • gl_FragColor—gl_FragColor is used to output the fragment color from the shader. This color is then passed in to the per-fragment operations in the pipeline. If a fragment shader does not write to gl_FragColor, its value is undefined. Note that it is potentially valid to not write to gl_FragColor in the shader. If, for example, you wish to render only to depth, you can turn off writes to the color buffer using glColorMask. Then it is perfectly valid to skip writing the fragment color in the shader.

  • gl_FragCoord—gl_FragCoord is a read-only variable that is available in the fragment shader. This variable holds the window-relative coordinates (x, y, z, 1/w) of the fragment. There are a number of algorithms where it is useful to know the window coordinates of the current fragment. For example, you can use the window coordinates as the offset for a fetch into a random noise map whose value is used to rotate a filter kernel on a shadow map, a technique used to reduce shadow map aliasing.

  • gl_FrontFacing—gl_FrontFacing is a read-only variable that is available in the fragment shader. This variable is a boolean with a value of true if the fragment is part of a front-facing primitive and false otherwise (a small usage sketch follows this list).

  • gl_PointCoord—gl_PointCoord is a read-only variable that can be used when rendering point sprites. It holds the texture coordinate for the point sprite, which is automatically generated in the [0, 1] range during point rasterization. In Chapter 13, “Advanced Programming with OpenGL ES 2.0,” there is an example of rendering point sprites that uses this variable.
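As a small usage sketch of these built-ins, the following fragment shader uses gl_FrontFacing to select between two uniform colors, which is one way to shade two-sided geometry. The uniform names are our own.

precision mediump float;

uniform vec4 u_frontColor;   // hypothetical front-face color
uniform vec4 u_backColor;    // hypothetical back-face color

void main()
{
   // Select a color based on the facing of the primitive
   gl_FragColor = gl_FrontFacing ? u_frontColor : u_backColor;
}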

Built-In Constants

The following built-in constants are also relevant to the fragment shader.

const mediump int gl_MaxTextureImageUnits = 8;
const mediump int gl_MaxFragmentUniformVectors = 16;
const mediump int gl_MaxDrawBuffers = 1;

The built-in constants describe the following maximum terms:

  • gl_MaxTextureImageUnits—This is the maximum number of texture image units that are available. The minimum value supported by all ES 2.0 implementations is eight.

  • gl_MaxFragmentUniformVectors—This is the maximum number of vec4 uniform entries that can be used inside a fragment shader. The minimum value supported by all ES 2.0 implementations is 16 vec4 entries. The number of vec4 uniform entries that can actually be used by a developer varies from implementation to implementation and from one fragment shader to another. This issue is described in Chapter 8, “Vertex Shaders,” and the same issue applies to fragment shaders.

  • gl_MaxDrawBuffers—This is the maximum number of draw buffers available. The minimum value supported by all ES 2.0 implementations is 1. A value greater than 1 would mean that the implementation supports multiple render targets (MRTs). As it turns out, support for MRTs is not provided in ES 2.0, and this variable was left in for future expansion.

The values specified for each built-in constant are the minimum values that must be supported by all OpenGL ES 2.0 implementations. It is possible that implementations may support values greater than the minimum values described. The actual supported values can be queried using the following code.

GLint   maxTextureImageUnits, maxFragmentUniformVectors;

glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &maxTextureImageUnits);
glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_VECTORS,
              &maxFragmentUniformVectors);

Precision Qualifiers

Precision qualifiers were covered in detail in Chapter 8 on vertex shaders and also introduced in Chapter 5, “OpenGL ES Shading Language.” Please review those sections for full details on precision qualifiers. We remind you here that there is no default precision for fragment shaders. This means that every fragment shader must declare a default precision (or provide precision qualifiers for all variable declarations).

OpenGL ES 2.0 mandates that implementations support at least medium precision in the fragment shader, but does not require support for high precision. The way to determine whether high precision is supported in the fragment shader is to check whether the GL_FRAGMENT_PRECISION_HIGH preprocessor macro is defined (in addition, the implementation will export the GL_OES_fragment_precision_high extension string).
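For example, one common pattern is to select the default float precision at the top of the fragment shader based on this macro:

#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;
#else
precision mediump float;
#endif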

ES 2.0 Fragment Shader Limitations

In Chapter 8, we provided a detailed review of the limitations on vertex shaders and how to write portable shaders. If you need a refresher, review that material, as almost all of the same limitations apply to fragment shaders. The only difference in limitations for fragment shaders is that uniform arrays can only be indexed with constant integral expressions. In the vertex shader, it is required that all implementations support indexing of uniform arrays using computed expressions. However, this is not the case for the fragment shader. Indexing of uniforms using anything other than constant integral expressions is not guaranteed to be supported by an ES 2.0 implementation.
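For example, given a hypothetical uniform array u_colors, the first indexing expression in the sketch below is always portable, whereas the second may fail to compile on some ES 2.0 implementations:

precision mediump float;

uniform vec4 u_colors[4];    // hypothetical uniform array
varying float v_index;       // hypothetical source of a computed index

void main()
{
   // Constant integral expression: supported in all fragment shaders
   vec4 a = u_colors[2];

   // Computed expression: required to work in vertex shaders, but
   // not guaranteed to be supported in fragment shaders
   vec4 b = u_colors[int(v_index)];

   gl_FragColor = a + b;
}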

Implementing Fixed Function Techniques Using Shaders

Now that we have given an overview of fragment shaders, we are going to show you how to implement several fixed function techniques using shaders. The fixed function pipeline in OpenGL ES 1.x and desktop OpenGL provided APIs to perform multitexturing, fog, alpha test, and user clip planes. Although none of these techniques is provided explicitly in OpenGL ES 2.0, all of them are still possible to implement using shaders. This section reviews each of these fixed function processes and provides example fragment shaders that demonstrate each technique.

Multitexturing

We start with multitexturing, which is a very common operation in fragment shaders used for combining multiple texture maps. For example, a technique that has been used in many games such as Quake III is to store precomputed lighting from radiosity calculations in a texture map. That map is then combined with the base texture map in the fragment shader to represent static lighting. There are many other examples of using multiple textures, some of which we cover in Chapter 13. For example, often a texture map is used to store a specular exponent and mask to attenuate and mask specular lighting contributions. Many games also use normal maps, which are textures that store normal information at a higher level of detail than per-vertex normals so that lighting can be computed in the fragment shader.

The point of mentioning all this here is that you now have learned about all of the parts of the API that are needed to accomplish multitexturing techniques. In Chapter 9, you learned how to load textures on various texture units and fetch from them in the fragment shader. Combining the textures in various ways in the fragment shader is simply a matter of employing the many operators and built-in functions that exist in the shading language. Using these techniques, you can easily achieve all of the effects that were made possible with the fixed function fragment pipeline in previous versions of OpenGL ES.

An example of using multiple textures is provided in the Chapter_10/MultiTexture example, which renders the image in Figure 10-3.

Figure 10-3. Multitextured Quad

This example loads a base texture map and light map texture and combines them in the fragment shader on a single quad. The fragment shader for the sample program is provided in Example 10-1.

Example 10-1. Multitexture Fragment Shader

precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D s_baseMap;
uniform sampler2D s_lightMap;
void main()
{
   vec4 baseColor;
   vec4 lightColor;

   baseColor = texture2D(s_baseMap, v_texCoord);
   lightColor = texture2D(s_lightMap, v_texCoord);
   gl_FragColor = baseColor * (lightColor + 0.25);
}

The fragment shader has two samplers, one for each of the textures. The relevant code for setting up the texture units and samplers is next.

// Bind the base map
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, userData->baseMapTexId);

// Set the base map sampler to texture unit 0
glUniform1i(userData->baseMapLoc, 0);
// Bind the light map
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, userData->lightMapTexId);

// Set the light map sampler to texture unit 1
glUniform1i(userData->lightMapLoc, 1);

As you can see, this code binds each of the individual texture objects to texture units 0 and 1. The sampler uniforms are then set to the index of the texture unit each sampler should fetch from. In this example, a single texture coordinate is used to fetch from both of the maps. In typical light mapping, there would be a separate set of texture coordinates for the base map and light map. The light maps are typically paged into a single large texture and the texture coordinates can be generated using offline tools.
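For reference, a light-mapped variant of Example 10-1 with a separate set of light map coordinates might look like the following sketch. The v_lightCoord varying is our own name and would need to be output by the vertex shader.

precision mediump float;
varying vec2 v_texCoord;     // base map coordinates
varying vec2 v_lightCoord;   // hypothetical separate light map coordinates
uniform sampler2D s_baseMap;
uniform sampler2D s_lightMap;
void main()
{
   vec4 baseColor  = texture2D(s_baseMap, v_texCoord);
   vec4 lightColor = texture2D(s_lightMap, v_lightCoord);

   // Modulate the base color by the light map (plus a small ambient term)
   gl_FragColor = baseColor * (lightColor + 0.25);
}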

Fog

A common technique that is used in rendering 3D scenes is the application of fog. In OpenGL ES 1.1 (and desktop OpenGL), fog was provided as a fixed function operation. One of the reasons fog is such a prevalent technique is that it can be used to reduce draw distances and remove the “popping” of geometry as it comes closer to the viewer.

There are a number of possible ways to compute fog, and with programmable fragment shaders you are not limited to any particular equation. Here we show you how you would go about computing linear fog with a fragment shader. To compute any type of fog, there are two inputs we will need: the distance of the pixel to the eye and the color of the fog. To compute linear fog, we also need the minimum and maximum distance range that the fog should cover.

The equation for the linear fog factor

FogFactor = (MaxDist – EyeDist) / (MaxDist – MinDist)

computes a fog factor to multiply the fog color by. The factor is clamped to the [0.0, 1.0] range and then used to linearly interpolate between the fog color and the overall color of a fragment to compute the final color. The distance to the eye is best computed in the vertex shader and interpolated across the primitive using a varying.

A RenderMonkey workspace is provided as an example in the Chapter_10/RM_LinearFog folder that demonstrates the fog computation. A screenshot of the workspace is provided in Figure 10-4.

Figure 10-4. Linear Fog on Terrain in RenderMonkey

The code for the vertex shader that computes the distance to the eye is provided in Example 10-2.

Example 10-2. Vertex Shader for Computing Distance to Eye

uniform mat4 matViewProjection;
uniform mat4 matView;
uniform vec4 u_eyePos;

attribute vec4 rm_Vertex;
attribute vec2 rm_TexCoord0;

varying vec2 v_texCoord;
varying float v_eyeDist;

void main(void)
{
    // Transform vertex to view-space
    vec4 vViewPos = matView * rm_Vertex;

    // Compute the distance to eye
    v_eyeDist = sqrt((vViewPos.x - u_eyePos.x) *
                     (vViewPos.x - u_eyePos.x) +
                     (vViewPos.y - u_eyePos.y) *
                     (vViewPos.y - u_eyePos.y) +
                     (vViewPos.z - u_eyePos.z) *
                     (vViewPos.z - u_eyePos.z) );

    gl_Position = matViewProjection * rm_Vertex;
    v_texCoord  = rm_TexCoord0.xy;
}

The important part of this vertex shader is the computation of the v_eyeDist varying variable. First, the input vertex is transformed into view space using the view matrix and stored in vViewPos. Then, the distance from this point to the u_eyePos uniform variable is computed. This computation gives us the distance in eye space from the viewer to the transformed vertex. We can use this value in the fragment shader to compute the fog factor as provided in Example 10-3.
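As an aside, the hand-written distance computation in Example 10-2 could be expressed more compactly using the built-in distance() function; the following line is equivalent:

    // Equivalent distance computation using the GLSL built-in
    v_eyeDist = distance(vViewPos.xyz, u_eyePos.xyz);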

Example 10-3. Fragment Shader for Rendering Linear Fog

precision mediump float;

uniform vec4 u_fogColor;
uniform float u_fogMaxDist;
uniform float u_fogMinDist;
uniform sampler2D baseMap;

varying vec2 v_texCoord;
varying float v_eyeDist;

float computeLinearFogFactor()
{
   float factor;

   // Compute linear fog equation
   factor = (u_fogMaxDist - v_eyeDist) /
            (u_fogMaxDist - u_fogMinDist);

   // Clamp in the [0,1] range
   factor = clamp(factor, 0.0, 1.0);

   return factor;
}

void main(void)
{
    float fogFactor = computeLinearFogFactor();
    vec4  fogColor  = fogFactor * u_fogColor;
    vec4 baseColor = texture2D( baseMap, v_texCoord );
    // Compute final color as a lerp with fog factor
    gl_FragColor = baseColor * fogFactor +
                   fogColor  * (1.0 - fogFactor);
}

In the fragment shader, the computeLinearFogFactor() function performs the computation for the linear fog equation. The minimum and maximum fog distances are stored in uniform variables and the interpolated eye distance that was computed in the vertex shader is used to compute the fog factor. The fog factor is then used to perform a linear interpolation between the base texture color and the fog color. The result is that we now have linear fog and can easily adjust the distances and colors by changing the uniform values.

Note that with the flexibility of programmable fragment shaders, it is very easy to implement other methods to compute fog. For example, one could easily compute exponential fog by simply changing the fog equation. Further, rather than computing fog based on distance to the eye, you could compute fog based on distance to the ground to have ground-based fog. There are a number of possible fog effects that can be easily achieved with small modifications to the fog computations provided here.
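As a sketch of that idea, an EXP2-style fog factor could replace computeLinearFogFactor() in Example 10-3. The u_fogDensity uniform is an assumption of ours and would take the place of the min/max distance uniforms.

uniform float u_fogDensity;   // hypothetical fog density uniform

float computeExp2FogFactor()
{
   float d = u_fogDensity * v_eyeDist;

   // f = e^(-(density * dist)^2), clamped to the [0,1] range
   return clamp(exp(-d * d), 0.0, 1.0);
}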

Alpha Test (Using Discard)

A very common effect used in 3D applications is to draw primitives that are fully transparent in certain fragments. This is very useful for rendering something like a chain-link fence. Representing a fence using geometry would require a significant number of primitives. However, an alternative to using geometry is to store a mask value in a texture that specifies which texels should be transparent. For example, one could store the chain-link fence in a single RGBA texture, where the RGB values represent the color of the fence and the A value is a mask indicating whether the texel is transparent. Then one can easily render a fence using just one or two triangles, masking off pixels in the fragment shader.

In traditional fixed function rendering, this effect was achieved using the alpha test. The alpha test allowed you to specify a comparison whereby the alpha value of a fragment was compared against a reference value; if the comparison failed, the fragment would be killed. That is, if a fragment failed the alpha test, the fragment would not be rendered. In OpenGL ES 2.0, there is no fixed function alpha test, but the same effect can be achieved in the fragment shader using the discard keyword.

The RenderMonkey example in Chapter_10/RM_AlphaTest demonstrates a very simple alpha test in the fragment shader, as shown in Figure 10-5.

Figure 10-5. Alpha Test Using Discard

The fragment shader code for this example is provided in Example 10-4.

Example 10-4. Fragment Shader for Alpha Test Using Discard

precision mediump float;

uniform sampler2D baseMap;

varying vec2 v_texCoord;

void main(void)
{
   vec4 baseColor = texture2D(baseMap, v_texCoord);

   if(baseColor.a < 0.25)
   {
      discard;
   }
   else
   {
      gl_FragColor = baseColor;
   }
}

In this fragment shader, the texture is a four-channel RGBA texture. The alpha channel is used for the alpha test. The alpha value is compared against 0.25 and, if it is less than that, the fragment is killed using discard. Otherwise, the fragment is drawn using the texture color. Any alpha-test behavior can be implemented this way by simply changing the comparison or the alpha reference value, as in the sketch below.
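For instance, here is a variant of Example 10-4 with a configurable reference value; the u_alphaRef uniform is our own addition and would be set by the application.

precision mediump float;

uniform sampler2D baseMap;
uniform float u_alphaRef;   // hypothetical reference-value uniform

varying vec2 v_texCoord;

void main(void)
{
   vec4 baseColor = texture2D(baseMap, v_texCoord);

   // Comparable to fixed function glAlphaFunc(GL_GEQUAL, ref):
   // keep the fragment only when alpha >= the reference value
   if(baseColor.a < u_alphaRef)
      discard;

   gl_FragColor = baseColor;
}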

User Clip Planes

As described in Chapter 7, “Primitive Assembly and Rasterization,” all primitives are clipped against the six planes that make up the view frustum. However, sometimes a user might want to clip against one or more additional user clip planes. There are a number of possible reasons you might want to clip against user clip planes. For example, when rendering reflections, you need to flip the geometry about the reflection plane and then render it into an off-screen texture. When rendering into the texture, you need to clip the geometry against the reflection plane, which requires a user clip plane.

In OpenGL ES 1.1, user clip planes could be provided to the API via a plane equation and the clipping would be handled automatically. In OpenGL ES 2.0, you can still accomplish this same effect, but now you need to do it yourself in the shader. The key to implementing user clip planes is the discard keyword that was introduced in the previous section.

Before showing you how to implement user clip planes, let’s review the basics of the mathematics. A plane is specified by the equation

Ax + By + Cz + D = 0

The vector (A, B, C) represents the normal of the plane and the value D is the distance of the plane along that vector from the origin. To figure out whether a point should or should not be clipped against a plane, we need to evaluate the distance from a point P to a plane with the equation:

Dist = (A × P.x) + (B × P.y) + (C × P.z) + D

If the distance is less than 0, we know the point is behind the plane and should be clipped; if the distance is greater than or equal to 0, it should not be clipped. Note that the plane equation and P must be in the same coordinate space. A RenderMonkey example is provided in the Chapter_10/RM_ClipPlane workspace, as shown in Figure 10-6.

Figure 10-6. User Clip Plane Example

The first thing the shader needs to do is compute the distance to the plane as mentioned earlier. This could be done in either the vertex shader (and passed into a varying) or the fragment shader. It is cheaper to do this computation in the vertex shader rather than having to compute the distance in every fragment. The vertex shader listing in Example 10-5 shows the distance to plane computation.

Example 10-5. User Clip Plane Vertex Shader

uniform vec4 u_clipPlane;
uniform mat4 matViewProjection;
attribute vec4 rm_Vertex;

varying float v_clipDist;

void main(void)
{
   // Compute the distance between the vertex and the clip plane
   v_clipDist = dot(rm_Vertex.xyz, u_clipPlane.xyz) +
                u_clipPlane.w;
   gl_Position = matViewProjection * rm_Vertex;
}

The u_clipPlane uniform variable holds the plane equation for the clip plane. The v_clipDist varying variable then stores the computed clip distance. This value is passed into the fragment shader, which uses the interpolated distance to determine whether the fragment should be clipped, as shown in Example 10-6.

Example 10-6. User Clip Plane Fragment Shader

precision mediump float;
varying float v_clipDist;
void main(void)
{
    // Reject fragments behind the clip plane
    if(v_clipDist < 0.0)
       discard;

    gl_FragColor = vec4(0.5, 0.5, 1.0, 0.0);
}

As you can see, if the v_clipDist varying variable is negative, the fragment is behind the clip plane and must be discarded. Otherwise, the fragment is processed as usual. This simple example just demonstrates the computations needed to implement user clip planes. You can easily implement multiple user clip planes by computing multiple clip distances and performing a discard test on each, as in the sketch below.
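As a sketch of that extension, two clip distances can be packed into a single varying. The names u_clipPlane0, u_clipPlane1, and v_clipDists below are our own.

// Vertex shader: compute one distance per clip plane
uniform vec4 u_clipPlane0;
uniform vec4 u_clipPlane1;
uniform mat4 matViewProjection;
attribute vec4 rm_Vertex;

varying vec2 v_clipDists;

void main(void)
{
   v_clipDists.x = dot(rm_Vertex.xyz, u_clipPlane0.xyz) + u_clipPlane0.w;
   v_clipDists.y = dot(rm_Vertex.xyz, u_clipPlane1.xyz) + u_clipPlane1.w;
   gl_Position = matViewProjection * rm_Vertex;
}

// Fragment shader: reject fragments behind either plane
precision mediump float;
varying vec2 v_clipDists;
void main(void)
{
   if(v_clipDists.x < 0.0 || v_clipDists.y < 0.0)
      discard;

   gl_FragColor = vec4(0.5, 0.5, 1.0, 0.0);
}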
