9

UNDERSTANDING OPENGL


In this project, you'll create a simple program that displays a texture-mapped square using OpenGL and GLFW. OpenGL provides a software interface to your graphics processing unit (GPU), and GLFW is a windowing toolkit. You'll also learn how to use the C-like OpenGL Shading Language (GLSL) to write shaders, code that executes in the GPU. Shaders bring immense flexibility to computations in OpenGL. I'll show you how to use GLSL shaders to transform and color geometry as you create a rotating, textured polygon (as shown in Figure 9-1).

GPUs are optimized to perform the same operations on huge amounts of data repeatedly, in parallel, which makes them much faster than central processing units (CPUs) for this purpose. In addition to rendering computer graphics, they’re also being used for general-purpose computing, and specialized languages now let you use your GPU hardware for this purpose. You’ll leverage the GPU, OpenGL, and shaders in this project.


Figure 9-1: The final image for the project in this chapter: a rotating polygon with a star image. This square polygon boundary is clipped to a black circle using a shader.

Python is an excellent "glue" language. Python bindings are available for a vast number of libraries written in other languages, such as C, and these bindings let you use those libraries from Python. In this chapter and in Chapters 10 and 11, you'll use PyOpenGL, the Python binding to OpenGL, to create computer graphics.

OpenGL is a state machine, kind of like an electrical switch with two states: ON and OFF. When you flip the switch from one state to the other, it stays in the new state. OpenGL is more complex than a simple switch, of course; think of it as a switchboard with numerous switches and dials, where each setting you change remains in effect until you change it again. When you bind to a particular OpenGL object, subsequent related OpenGL calls are directed at that bound object until it is unbound.
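For example, once you bind a texture object, later texture calls implicitly operate on it until you unbind it. Here is a minimal sketch of that pattern using PyOpenGL calls you'll meet later in this chapter (it assumes a current OpenGL context, such as the GLFW window created in this project):

from OpenGL.GL import *

texId = glGenTextures(1)               # create a texture object
glBindTexture(GL_TEXTURE_2D, texId)    # subsequent texture calls target texId
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
glBindTexture(GL_TEXTURE_2D, 0)        # unbind; texId keeps its settings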

Here are some of the concepts introduced in this project:

• Using the GLFW windowing library for OpenGL

• Using GLSL to write vertex and fragment shaders

• Performing texture mapping

• Using 3D transformations

First, let’s take a look at how OpenGL works.

Old-School OpenGL

In most computer graphics systems, drawing is done by sending vertices through a series of interconnected functional blocks that form a pipeline. Recently, the OpenGL application programming interface (API) transitioned from a fixed-function graphics pipeline to a programmable graphics pipeline. You’ll focus on modern OpenGL, but because you’ll find numerous “old-school” OpenGL examples on the Web, I’ll give you a taste of what the API used to look like so you’ll have a better sense of what has changed.

For example, the following simple old-school OpenGL program draws a yellow rectangle on the screen:

import sys
from OpenGL.GLUT import *
from OpenGL.GL import *

def display():
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glColor3f(1.0, 1.0, 0.0)
    glBegin(GL_QUADS)
    glVertex3f(-0.5, -0.5, 0.0)
    glVertex3f(0.5, -0.5, 0.0)
    glVertex3f(0.5, 0.5, 0.0)
    glVertex3f(-0.5, 0.5, 0.0)
    glEnd()
    glFlush()

glutInit(sys.argv)
glutInitDisplayMode(GLUT_SINGLE|GLUT_RGB)
glutInitWindowSize(400, 400)
glutCreateWindow("oldgl")
glutDisplayFunc(display)
glutMainLoop()

Figure 9-2 shows the result.


Figure 9-2: Output from a simple old-school OpenGL program

Using old-school OpenGL, you specify the individual vertices of a 3D primitive (a GL_QUAD, or rectangle, in this case), and each vertex is sent to the GPU separately, which is inefficient. This model of programming doesn't scale well and becomes really slow as your geometry grows complex. It also offers only limited control over how the vertices and the pixels on the screen are transformed. (As you will see in this project, the newer programmable pipeline paradigm overcomes these limitations.)

Modern OpenGL: The 3D Graphics Pipeline

To give you a sense of how modern OpenGL works at a high level, let’s make a triangle appear on the screen through a sequence of operations commonly known as the 3D graphics pipeline. Figure 9-3 shows a simplified representation of the OpenGL 3D graphics pipeline.


Figure 9-3: The (simplified) OpenGL graphics pipeline

In the first step, you define the 3D geometry by defining the vertices of the triangle in 3D space and specifying the colors associated with each vertex. Next, you transform these vertices: the first transformation places the vertices in 3D space, and the second projects the 3D coordinates onto 2D space. The color values for the corresponding vertices are also calculated in this step based on factors such as lighting, typically in code called the vertex shader.

Next, the geometry is rasterized (converted from geometric objects to pixels), and for each pixel, another block of code called the fragment shader is executed. Just as the vertex shader operates on 3D vertices, the fragment shader operates on the 2D pixels after rasterization.

Finally, the pixel passes through a series of frame buffer operations, where it undergoes depth buffer testing (checking whether one fragment obscures another), blending (mixing two fragments with transparency), and other operations that combine its current color with what is already on the frame buffer at that location. These changes end up on the final frame buffer, which is typically displayed on the screen.

Geometric Primitives

Because OpenGL is a low-level graphics library, you can’t ask it directly to draw a cube or a sphere, though libraries built on top of it can do such tasks for you. OpenGL understands only low-level geometric primitives such as points, lines, and triangles.

Modern OpenGL supports only the primitive types GL_POINTS, GL_LINES, GL_LINE_STRIP, GL_LINE_LOOP, GL_TRIANGLES, GL_TRIANGLE_STRIP, and GL_TRIANGLE_FAN. Figure 9-4 shows how the vertices for the primitives are organized. Each vertex shown is a 3D coordinate such as (x, y, z).


Figure 9-4: OpenGL primitives

To draw a sphere in OpenGL, first define the geometry of the sphere mathematically and compute its 3D vertices. Then, assemble the vertices into basic geometric primitives; for example, you could group each set of three vertices into a triangle. You can then render these vertices using OpenGL.
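For example, here is a hedged sketch of how you might compute sphere vertices in Python (the function name and the latitude/longitude tessellation are illustrative, not part of this project's code):

import math
import numpy

def sphereVertices(radius=1.0, slices=16, stacks=16):
    """Compute (x, y, z) vertices on a sphere, latitude/longitude style."""
    verts = []
    for i in range(stacks + 1):
        phi = math.pi * i / stacks              # angle from the pole: 0 to pi
        for j in range(slices + 1):
            theta = 2.0 * math.pi * j / slices  # angle around the axis
            verts += [radius * math.sin(phi) * math.cos(theta),
                      radius * math.sin(phi) * math.sin(theta),
                      radius * math.cos(phi)]
    return numpy.array(verts, numpy.float32)

Adjacent vertices in this grid can then be grouped into GL_TRIANGLES or GL_TRIANGLE_STRIP primitives for rendering.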

3D Transformations

You can’t learn computer graphics without learning about 3D transformations. Conceptually, these are quite simple to understand. You have an object—what can you do with it? You can move it, stretch (or squash) it, or rotate it. You can do other things to it too, but these three tasks are the operations or transformations most commonly performed on an object: translation, scale, and rotation. In addition to these commonly used transformations, you’ll use a perspective projection to map the 3D objects onto the 2D plane of the screen. These transformations are all applied on the coordinates of the object you are trying to transform.

While you’re probably familiar with 3D coordinates in the form (x, y, z), in 3D computer graphics you use coordinates in the form (x, y, z, w), called homogeneous coordinates. (These coordinates come from a branch of mathematics called projective geometry, which is beyond the scope of this book.)

Homogeneous coordinates allow you to express these common 3D transformations as 4×4 matrices. For the purposes of these OpenGL projects, all you need to know is that the homogeneous coordinate (x, y, z, w) is equivalent to the 3D coordinate (x/w, y/w, z/w, 1.0). For example, the 3D point (1.0, 2.0, 3.0) can be expressed in homogeneous coordinates as (1.0, 2.0, 3.0, 1.0), and (2.0, 4.0, 6.0, 2.0) represents that same point.

Here is an example of a 3D transformation using a translation matrix. See how the matrix multiplication translates a point (x, y, z, 1.0) to (x + tx, y + ty, z + tz, 1.0).

$$
\begin{bmatrix}
1 & 0 & 0 & t_x \\
0 & 1 & 0 & t_y \\
0 & 0 & 1 & t_z \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
=
\begin{bmatrix} x + t_x \\ y + t_y \\ z + t_z \\ 1 \end{bmatrix}
$$
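You can check this multiplication with a few lines of numpy (an illustrative aside, not part of the project code):

import numpy

# translation by (tx, ty, tz) = (2, 3, 4) as a 4x4 matrix
T = numpy.array([[1.0, 0.0, 0.0, 2.0],
                 [0.0, 1.0, 0.0, 3.0],
                 [0.0, 0.0, 1.0, 4.0],
                 [0.0, 0.0, 0.0, 1.0]])
p = numpy.array([1.0, 2.0, 3.0, 1.0])  # the point (1, 2, 3)
print(T.dot(p))                        # [3. 5. 7. 1.]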

Two terms you'll encounter often in OpenGL are modelview and projection transformations. With the advent of customizable shaders in modern OpenGL, these are just generic transformations, but historically, in old-school OpenGL, the modelview transformation was applied to your 3D model to position it in space, and the projection transformation mapped the 3D coordinates onto a 2D surface for display, as you'll see in a moment. In short, modelview transformations are user-defined transformations that position your 3D objects, and projection transformations are projective transformations that map 3D onto 2D.

The two most commonly used projective transformations in 3D graphics are orthographic and perspective, but here you'll use only perspective projection, which is defined by a field of view (the extent to which the eye can see), a near plane (the plane closest to the eye), a far plane (the plane farthest from the eye), and an aspect ratio (the ratio of the width to the height of the near plane). Together, these parameters constitute a camera model that determines how the 3D figure is mapped onto the 2D screen, as shown in Figure 9-5. The truncated pyramid in the figure is the view frustum, and the eye is the 3D location where you place the camera. (For orthographic projection, the eye is at infinity, and the pyramid becomes a rectangular cuboid.)

Once the perspective projection is complete and before rasterization, the graphics primitives are clipped (or cut out) against the near and far planes, as shown in Figure 9-5. The near and far planes are chosen such that the 3D objects you want to appear onscreen lie inside the view frustum; otherwise, they will be clipped away.


Figure 9-5: Perspective projection camera model
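To make the camera model concrete, here is a numpy sketch of a standard OpenGL-style perspective matrix; it's roughly what the perspective() helper in glutils.py (used later in this chapter) computes, but treat it as an illustration rather than the project's exact code:

import math
import numpy

def perspectiveMatrix(fov, aspect, zNear, zFar):
    """Perspective projection as a flat 4x4 array in OpenGL's
    column-major order; fov is the vertical field of view in degrees."""
    f = 1.0 / math.tan(math.radians(fov) / 2.0)
    return numpy.array([f/aspect, 0.0, 0.0, 0.0,
                        0.0, f, 0.0, 0.0,
                        0.0, 0.0, (zFar + zNear)/(zNear - zFar), -1.0,
                        0.0, 0.0, 2.0*zFar*zNear/(zNear - zFar), 0.0],
                       numpy.float32)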

Shaders

You’ve seen how shaders fit into the modern OpenGL programmable graphics pipeline. Now let’s look at a simple pair of vertex and fragment shaders to get a sense of how GLSL works.

A Vertex Shader

Here is a simple vertex shader:

#version 330 core

in vec3 aVert;

uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;

out vec4 vCol;

void main() {
    // apply transformations
    gl_Position = uPMatrix * uMVMatrix * vec4(aVert, 1.0);
    // set color
    vCol = vec4(1.0, 0.0, 0.0, 1.0);
}

The first line sets the version of GLSL used in the shader to 3.3. Next, you define an input named aVert of type vec3 (a 3D vector) for the vertex shader using the keyword in. The two mat4 (4×4 matrix) variables correspond to the modelview and projection matrices, respectively; their uniform prefix indicates that they do not change during execution of the vertex shader for a given rendering call on a set of vertices. Finally, you use the out prefix to define the output of the vertex shader, a color variable of type vec4 (a 4D vector that stores the red, green, blue, and alpha channels).

Now you come to the main() function, where the vertex shader program starts. The value of gl_Position is computed by transforming the input aVert with the uniform matrices passed in; the built-in GLSL variable gl_Position stores the transformed vertex. You then set the output color from the vertex shader to red with no transparency, using the value (1, 0, 0, 1). This will serve as input to the next shader in the pipeline.

A Fragment Shader

Now let’s look at a simple fragment shader:

#version 330 core

in vec4 vCol;

out vec4 fragColor;

void main() {
    // use vertex color
    fragColor = vCol;
}

After the first line sets the version of GLSL used in the shader, vCol is declared as the input to the fragment shader. This variable was set as output from the vertex shader. (Remember, the vertex shader executes for every vertex in the 3D scene, whereas the fragment shader executes for every pixel on the screen.)

During rasterization (which occurs between the vertex and fragment shaders), OpenGL converts the transformed vertices to pixels, and the color of the pixels lying between the vertices is calculated by interpolating the color values at the vertices.

You declare an output color variable, fragColor, and in main(), the interpolated color is set as the output. By default, and in most cases, the output of the fragment shader goes to the screen, and the color you set ends up there (unless it's affected by operations such as depth testing that occur in the final stage of the graphics pipeline).

For the GPU to execute the shader code, that code needs to be compiled and linked into instructions the hardware understands. OpenGL provides ways to do this and reports detailed compiler and linker errors that will help you develop the shader code.

The compilation process also generates a table of locations or indices for the variables declared in your shaders that you’ll use to connect variables in your Python code with those in the shader.
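PyOpenGL ships convenience wrappers for these steps; here is a minimal sketch (the loadShaders() helper in glutils.py wraps a similar sequence of calls, and strVS and strFS are the shader source strings):

from OpenGL.GL import *
from OpenGL.GL.shaders import compileProgram, compileShader

# compile and link the two shader sources into a program object;
# this requires a current OpenGL context, and compileShader()
# raises an error containing the compiler log on failure
program = compileProgram(compileShader(strVS, GL_VERTEX_SHADER),
                         compileShader(strFS, GL_FRAGMENT_SHADER))

# look up locations in the compiled program's variable table
vertLoc = glGetAttribLocation(program, b'aVert')
mvLoc = glGetUniformLocation(program, b'uMVMatrix')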

Vertex Buffers

Vertex buffers are an important mechanism used by OpenGL shaders. Modern graphics hardware and OpenGL are designed to work with large amounts of 3D geometry. Consequently, several mechanisms are built into OpenGL to help transfer data from the program to the GPU. A typical setup to draw 3D geometry in a program will do the following:

1. Define arrays of coordinates, colors, and other attributes for each vertex of the 3D geometry.

2. Create a Vertex Array Object (VAO) and bind to it.

3. Create Vertex Buffer Objects (VBOs) for each attribute, defined on a per-vertex basis.

4. Bind to the VBO and set the buffer data using the predefined arrays.

5. Specify the data and location of vertex attributes to be used in the shader.

6. Enable the vertex attributes.

7. Render the data.

After you define the 3D geometry in terms of vertices, you create and bind to a vertex array object. VAOs are a convenient way to group geometry as multiple arrays of coordinates, colors, and so on. Then, for each attribute of each vertex, you create a vertex buffer object and set your 3D data into it. The VBO stores the vertex data in the GPU memory. Now, all that’s left is to connect the buffer data so you can access it from your shaders. You do this through calls that use the location of the variables employed in the shader.

Texture Mapping

Next let’s look at texture mapping, an important computer graphics technique that you’ll use in this chapter. Texture mapping is a way to give a scene a realistic feel with the help of a 2D picture of a 3D object (like the backdrop in a play). A texture is usually read from an image file and is stretched to drape over a geometric region by mapping the 2D coordinates (in the range [0, 1]) onto the 3D coordinates of the polygons. For example, Figure 9-6 shows an image draped onto one face of a cube. (I used GL_TRIANGLE_STRIP primitives to draw the cube faces, and the ordering of the vertices is indicated by the lines on the face.)

In Figure 9-6, the (0, 0) corner of the texture is mapped to the bottom-left vertex of the cube face. Similarly, you can see how the other corners of the texture are mapped, with the net effect that the texture is “pasted” onto this cube face. The geometry of the cube face itself is defined as a triangle strip, and the vertices zigzag from the bottom to the top left and from the bottom to the top right. Textures are extremely powerful and versatile computer graphics tools, as you’ll see in Chapter 11.


Figure 9-6: Texture mapping
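In code, the mapping amounts to supplying one 2D texture coordinate per 3D vertex. Here is an illustrative sketch for a square face like the one in Figure 9-6 (the array names are hypothetical; you'll see how vertex data reaches the shaders later in this chapter):

import numpy

# one (s, t) texture coordinate per triangle-strip vertex;
# (0, 0) is the bottom-left corner of the image, (1, 1) the top right
faceVertices = numpy.array([-0.5, -0.5, 0.0,
                             0.5, -0.5, 0.0,
                            -0.5, 0.5, 0.0,
                             0.5, 0.5, 0.0], numpy.float32)
texCoords = numpy.array([0.0, 0.0,
                         1.0, 0.0,
                         0.0, 1.0,
                         1.0, 1.0], numpy.float32)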

Displaying OpenGL

Now let’s talk about how to get OpenGL to draw stuff on the screen. The entity that stores all the OpenGL state information is called the OpenGL context. Contexts have a viewable, window-like area where the OpenGL drawings go, and you can have multiple contexts per process or run of an application, but only one context per thread can be current at a time. (Fortunately, the window toolkit will take care of most of the context handling.)

For your OpenGL output to appear in a window onscreen, you need the help of the operating system. For these projects, you’ll use GLFW, a lightweight cross-platform C library that lets you create and manage OpenGL contexts, display the 3D graphics in a window, and handle user input like mouse clicks and keyboard presses. (Appendix A covers the installation details for this library.)

Because you’re writing code in Python and not C, you’ll also use a Python binding to GLFW (glfw.py, available in the common directory in the book’s code repository), which lets you access all the GLFW features using Python.

Requirements

We’ll use PyOpenGL, a popular Python binding for OpenGL, for rendering and numpy arrays to represent 3D coordinates and transformation matrices.

The Code

Let’s build a simple Python application using OpenGL. To see the complete project code, skip ahead to “The Complete Code” on page 151.

Creating an OpenGL Window

The first order of business is to set up GLFW so you have an OpenGL window to render into. I’ve created a class called RenderWindow for this purpose.

Here is the initialization code for this class:

class RenderWindow:
    """GLFW Rendering window class"""
    def __init__(self):

        # save current working directory
        cwd = os.getcwd()

        # initialize glfw
        glfw.glfwInit()

        # restore cwd
        os.chdir(cwd)

        # version hints
        glfw.glfwWindowHint(glfw.GLFW_CONTEXT_VERSION_MAJOR, 3)
        glfw.glfwWindowHint(glfw.GLFW_CONTEXT_VERSION_MINOR, 3)
        glfw.glfwWindowHint(glfw.GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE)
        glfw.glfwWindowHint(glfw.GLFW_OPENGL_PROFILE,
                            glfw.GLFW_OPENGL_CORE_PROFILE)

        # make a window
        self.width, self.height = 640, 480
        self.aspect = self.width/float(self.height)
        self.win = glfw.glfwCreateWindow(self.width, self.height,
                                         b'simpleglfw')
        # make the context current
        glfw.glfwMakeContextCurrent(self.win)
You initialize the GLFW library with glfwInit(). Then the window hints request an OpenGL 3.3 core-profile context. Next, you create an OpenGL-capable window with the dimensions 640×480. Finally, you make the context current, and you're ready to make OpenGL calls.

Next, you make some initialization calls.

        # initialize GL
        glViewport(0, 0, self.width, self.height)
        glEnable(GL_DEPTH_TEST)
        glClearColor(0.5, 0.5, 0.5, 1.0)
First, you set the viewport, the screen dimensions (width and height) where OpenGL will render your 3D scene. Then you turn on depth testing with GL_DEPTH_TEST, and you set the color the background should become when glClear() is issued during rendering: 50 percent gray, with an alpha of 1.0. (Alpha is a measure of the transparency of a pixel.)

Setting Callbacks

Next you register some event callbacks for user interface events within the GLFW window so you can respond to mouse clicks and keypresses.

       # set window callbacks
       glfw.glfwSetMouseButtonCallback(self.win, self.onMouseButton)
       glfw.glfwSetKeyCallback(self.win, self.onKeyboard)
       glfw.glfwSetWindowSizeCallback(self.win, self.onSize)

This code sets callbacks for mouse button presses, keyboard presses, and window resizing, respectively. Every time one of these events happens, the function registered as a callback is executed.

The Keyboard Callback

Let’s look at the keyboard callback:

    def onKeyboard(self, win, key, scancode, action, mods):
        #print('keyboard: ', win, key, scancode, action, mods)
        if action == glfw.GLFW_PRESS:
            # ESC to quit
            if key == glfw.GLFW_KEY_ESCAPE:
                self.exitNow = True
            else:
                # toggle cut
                self.scene.showCircle = not self.scene.showCircle

The onKeyboard() callback is called every time a keyboard event happens. The arguments to the function arrive filled with useful information, such as the type of event (key-up versus key-down, for example) and which key was pressed. The test against glfw.GLFW_PRESS looks only for key-down, or press, events. You set an exit flag if the ESC key is pressed; if any other key is pressed, you toggle the showCircle Boolean that is passed into the fragment shader.

The Window-Resizing Event

Here is the handler for the window-resizing event:

    def onSize(self, win, width, height):
        #print('onsize: ', win, width, height)
        self.width = width
        self.height = height
        self.aspect = width/float(height)
        glViewport(0, 0, self.width, self.height)

Every time the window size changes, you call glViewport() to reset the viewport dimensions and ensure the 3D scene is drawn correctly. You also store the new dimensions in width and height and the changed window's aspect ratio in aspect.

The Main Loop

Now you come to the main loop of the program. (GLFW does not provide a default program loop.)

    def run(self):
        # initialize timer
        glfw.glfwSetTime(0)
        t = 0.0
        while not glfw.glfwWindowShouldClose(self.win) and not self.exitNow:
            # update every x seconds
            currT = glfw.glfwGetTime()
            if currT - t > 0.1:
                # update time
                t = currT
                # clear
                glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)

                # build projection matrix
                pMatrix = glutils.perspective(45.0, self.aspect, 0.1, 100.0)

                mvMatrix = glutils.lookAt([0.0, 0.0, -2.0], [0.0, 0.0, 0.0],
                                          [0.0, 1.0, 0.0])
                # render
                self.scene.render(pMatrix, mvMatrix)
                # step
                self.scene.step()

                glfw.glfwSwapBuffers(self.win)
                # poll for and process events
                glfw.glfwPollEvents()
        # end
        glfw.glfwTerminate()

In the main loop, glfw.glfwSetTime(0) resets the GLFW timer to 0. You'll use this timer to redraw the graphics at regular intervals. The while loop exits only if the window is closed or exitNow is set to True. When the loop exits, glfw.glfwTerminate() is called to shut down GLFW cleanly.

Inside the loop, glfw.glfwGetTime() gets the current timer value, which you use to calculate the elapsed time since the last drawing. By setting a desired interval here (in this case, 0.1 seconds, or 100 milliseconds), you can adjust the rendering frame rate.

Next, glClear() clears the depth and color buffers and replaces them with the set background color to get ready for the next frame. You then compute the projection matrix using the perspective() method defined in glutils.py (you'll take a closer look at this in the next section), asking for a 45-degree field of view and near/far plane distances of 0.1 and 100.0. Then you set the modelview matrix using the lookAt() method defined in glutils.py, placing the eye at (0, 0, -2), looking at the origin (0, 0, 0), with an "up" vector of (0, 1, 0). Next, you call the render() method on the scene object, passing in these matrices, and call scene.step() so it can update the variables needed for the time step. Finally, glfwSwapBuffers() swaps the back and front buffers, displaying your updated 3D graphic, and glfwPollEvents() checks for any UI events and returns control to the while loop.

The Scene Class

Now let’s look at the Scene class, which is responsible for initializing and drawing the 3D geometry.

class Scene:
    """ OpenGL 3D scene class"""
    # initialization
    def __init__(self):
        # create shader
        self.program = glutils.loadShaders(strVS, strFS)

        glUseProgram(self.program)

In the Scene class constructor, you first compile and load the shaders with the utility method loadShaders() defined in glutils.py, which provides a convenient wrapper around the series of OpenGL calls required to load the shaders from strings, compile them, and link them into an OpenGL program object. Because OpenGL is a state machine, you then need to select this particular program object with glUseProgram() (a project could have multiple programs).

Now connect the variables in the Python code with those in the shaders.

          self.pMatrixUniform = glGetUniformLocation(self.program, b'uPMatrix')
          self.mvMatrixUniform = glGetUniformLocation(self.program, b'uMVMatrix')
          # texture
          self.tex2D = glGetUniformLocation(self.program, b'tex2D')

This code uses the glGetUniformLocation() method to retrieve the locations of the variables uPMatrix, uMVMatrix, and tex2D defined inside the vertex and fragment shaders. These locations can then be used to set the values for the shader variables.

Defining the 3D Geometry

Let’s first define the 3D geometry for the square.

        # define triangle strip vertices
        vertexData = numpy.array(
            [-0.5, -0.5, 0.0,
             0.5, -0.5, 0.0,
             -0.5, 0.5, 0.0,
             0.5, 0.5, 0.0], numpy.float32)

        # set up vertex array object (VAO)
        self.vao = glGenVertexArrays(1)
        glBindVertexArray(self.vao)
        # vertices
        self.vertexBuffer = glGenBuffers(1)
        glBindBuffer(GL_ARRAY_BUFFER, self.vertexBuffer)
        # set buffer data
        glBufferData(GL_ARRAY_BUFFER, 4*len(vertexData), vertexData,
                     GL_STATIC_DRAW)
        # enable vertex array
        glEnableVertexAttribArray(0)
        # set buffer data pointer
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, None)
        # unbind VAO
        glBindVertexArray(0)

First, you define the array of vertices for the triangle strip used to draw the square. Think of a square of side 1.0 centered at the origin: the bottom-left vertex of this square has the coordinates (-0.5, -0.5, 0.0), the next vertex (the bottom-right one) has the coordinates (0.5, -0.5, 0.0), and so on, in GL_TRIANGLE_STRIP order. Next, you create a VAO; once you bind to it, all upcoming related calls are recorded in it. Then you create a VBO to manage the rendering of the vertex data and, with the buffer bound, set the buffer data from the vertices you have defined.

Now you need to give the shaders access to this data. glEnableVertexAttribArray() is called with an index of 0 because that is the location you set in the vertex shader for the vertex data variable. Then glVertexAttribPointer() sets the location and data format of the vertex attribute array: the index of the attribute is 0, the number of components is 3 (you use 3D vertices), and the data type of the vertex is GL_FLOAT. Finally, you unbind the VAO so other related calls don't interfere with it. In OpenGL, it's best practice to reset state when you're done; OpenGL is a state machine, so if you leave things in a mess, they stay that way.

The following code loads the image as an OpenGL texture:

          # texture
          self.texId = glutils.loadTexture('star.png')

The texture ID returned will be used later in rendering.

Next we’ll update variables in the Scene object to make the square rotate on the screen:

    # step
    def step(self):
        # increment angle
        self.t = (self.t + 1) % 360
        # set shader angle in radians
        glUniform1f(glGetUniformLocation(self.program, b'uTheta'),
                    math.radians(self.t))

You increment the angle variable t, using the modulus operator (%) to keep its value within [0, 360). You then use glUniform1f() to set this value in the shader program. As before, you use glGetUniformLocation() to get the location of the uTheta angle variable in the shader, and the Python math.radians() method converts the angle from degrees to radians.

Now let’s look at the main rendering code:

    # render
    def render(self, pMatrix, mvMatrix):
        # use shader
        glUseProgram(self.program)

        # set projection matrix
        glUniformMatrix4fv(self.pMatrixUniform, 1, GL_FALSE, pMatrix)

        # set modelview matrix
        glUniformMatrix4fv(self.mvMatrixUniform, 1, GL_FALSE, mvMatrix)

        # show circle?
        glUniform1i(glGetUniformLocation(self.program, b'showCircle'),
                    self.showCircle)

        # enable texture
        glActiveTexture(GL_TEXTURE0)
        glBindTexture(GL_TEXTURE_2D, self.texId)
        glUniform1i(self.tex2D, 0)

        # bind VAO
        glBindVertexArray(self.vao)
        # draw
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4)
        # unbind VAO
        glBindVertexArray(0)

First, you set up the rendering to use the shader program. Then you set the computed projection and modelview matrices in the shader using glUniformMatrix4fv(), and you use glUniform1i() to set the current value of the showCircle variable in the fragment shader. OpenGL has a concept of multiple texture units: glActiveTexture() activates texture unit 0 (the default), glBindTexture() binds the texture ID you generated earlier to make it active for rendering, and glUniform1i() points the sampler2D variable in the fragment shader at texture unit 0. Next, you bind the VAO you created previously. Here you see the benefit of using VAOs: you don't need to repeat a whole slew of vertex buffer–related calls before the actual drawing. glDrawArrays() then renders the bound vertex buffers; the primitive type is a triangle strip, and there are four vertices to render. Finally, you unbind the VAO, which is always good coding practice.

Defining the GLSL Shaders

Now let’s look at the most exciting part of the project—the GLSL shaders. This is the vertex shader:

#version 330 core

layout(location = 0) in vec3 aVert;

uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
uniform float uTheta;

out vec2 vTexCoord;

void main() {
    // rotational transform
    mat4 rot = mat4(
                vec4(cos(uTheta), sin(uTheta), 0.0, 0.0),
                vec4(-sin(uTheta), cos(uTheta), 0.0, 0.0),
                vec4(0.0, 0.0, 1.0, 0.0),
                vec4(0.0, 0.0, 0.0, 1.0)
                );
    // transform vertex
    gl_Position = uPMatrix * uMVMatrix * rot * vec4(aVert, 1.0);
    // set texture coordinate
    vTexCoord = aVert.xy + vec2(0.5, 0.5);
}

You use the layout keyword to set the location of the vertex attribute aVert explicitly, to 0 in this case. The uniform variables, the projection and modelview matrices and the rotation angle, are set from the Python code. You also declare a 2D vector, vTexCoord, as an output from this shader; it will be available as an input to the fragment shader. In the shader's main() method, you set up a rotation matrix that rotates around the z-axis by the given angle, and you compute gl_Position as the concatenation of the projection, modelview, and rotation matrices applied to the vertex. Finally, you set up a 2D vector as a texture coordinate. You may recall that you defined the triangle strip for a square of side 1.0 centered at the origin; because texture coordinates are in the range [0, 1], you can generate them from the vertex coordinates by adding (0.5, 0.5) to the x- and y-values. This also demonstrates the power and immense flexibility of shaders for your computations: texture coordinates and other variables are not sacrosanct, and you can set them to just about anything.
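If you'd rather build the same rotation on the CPU, as Experiment 1 at the end of this chapter suggests, a numpy equivalent might look like this sketch; the flat array lists the matrix column by column, matching GLSL's column-major mat4 constructor:

import math
import numpy

def rotationZ(theta):
    """4x4 rotation about the z-axis as a flat, column-major array."""
    c, s = math.cos(theta), math.sin(theta)
    return numpy.array([c, s, 0.0, 0.0,       # column 0
                        -s, c, 0.0, 0.0,      # column 1
                        0.0, 0.0, 1.0, 0.0,   # column 2
                        0.0, 0.0, 0.0, 1.0],  # column 3
                       numpy.float32)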

Now let’s look at the fragment shader:

#version 330 core

in vec2 vTexCoord;

uniform sampler2D tex2D;
uniform bool showCircle;

out vec4 fragColor;

void main() {
    if (showCircle) {
        // discard fragment outside circle
        if (distance(vTexCoord, vec2(0.5, 0.5)) > 0.5) {
            discard;
        }
        else {
            fragColor = texture(tex2D, vTexCoord);
        }
    }
    else {
        fragColor = texture(tex2D, vTexCoord);
    }
}

You define the input to the fragment shader: the texture coordinate variable you set as output in the vertex shader. Recall that the fragment shader operates on a per-pixel basis, so the value set for this variable is the value for the current pixel, interpolated across the polygon. You also declare a sampler2D variable, which is linked to a particular texture unit and is used to look up the texture value, and a Boolean uniform flag, showCircle, which is set from the Python code. Finally, you declare fragColor as the output from the fragment shader; by default, this goes to the screen (after final frame buffer operations such as depth testing and blending).

If the showCircle flag is not set, you simply use the GLSL texture() function to look up the texture color value using the texture coordinate and the sampler; in effect, you are just texturing the triangle strip with the star image. But if showCircle is true, you use the GLSL built-in distance() function to check how far the current pixel lies from the center of the polygon, using the (interpolated) texture coordinates passed in by the vertex shader. If the distance is greater than a certain threshold (0.5 in this case), the shader executes the GLSL discard statement, which drops the current pixel; if it is less than the threshold, you set the appropriate color from the texture. Basically, this ignores pixels that fall outside a circle of radius 0.5 centered at the midpoint of the square, cutting the polygon into a circle when showCircle is set.

The Complete Code

The complete code for our simple OpenGL application resides in two files: simpleglfw.py, which has the code shown here and can be found at https://github.com/electronut/pp/tree/master/simplegl/, and glutils.py, which includes some helper methods to make life easier and can be found in the common directory.

import OpenGL
from OpenGL.GL import *

import numpy, math, sys, os
import glutils

import glfw

strVS = """
#version 330 core

layout(location = 0) in vec3 aVert;

uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
uniform float uTheta;

out vec2 vTexCoord;

void main() {
    // rotational transform
    mat4 rot = mat4(
                vec4(cos(uTheta), sin(uTheta), 0.0, 0.0),
                vec4(-sin(uTheta), cos(uTheta), 0.0, 0.0),
                vec4(0.0, 0.0, 1.0, 0.0),
                vec4(0.0, 0.0, 0.0, 1.0)
                );
    // transform vertex
    gl_Position = uPMatrix * uMVMatrix * rot * vec4(aVert, 1.0);
    // set texture coordinate
    vTexCoord = aVert.xy + vec2(0.5, 0.5);
}
"""
strFS = """
#version 330 core

in vec2 vTexCoord;

uniform sampler2D tex2D;
uniform bool showCircle;

out vec4 fragColor;

void main() {
    if (showCircle) {

       // discard fragment outside circle
       if (distance(vTexCoord, vec2(0.5, 0.5)) > 0.5) {
           discard;
       }
       else {
           fragColor = texture(tex2D, vTexCoord);
       }
    }
    else {
         fragColor = texture(tex2D, vTexCoord);
    }
}
"""



class Scene:
    """ OpenGL 3D scene class"""
    # initialization
    def __init__(self):
        # create shader
        self.program = glutils.loadShaders(strVS, strFS)

        glUseProgram(self.program)

        self.pMatrixUniform = glGetUniformLocation(self.program, b'uPMatrix')
        self.mvMatrixUniform = glGetUniformLocation(self.program, b'uMVMatrix')
        # texture
        self.tex2D = glGetUniformLocation(self.program, b'tex2D')

        # define triangle strip vertices
        vertexData = numpy.array(
            [-0.5, -0.5, 0.0,
             0.5, -0.5, 0.0,
             -0.5, 0.5, 0.0,
             0.5, 0.5, 0.0], numpy.float32)

        # set up vertex array object (VAO)
        self.vao = glGenVertexArrays(1)
        glBindVertexArray(self.vao)
        # vertices
        self.vertexBuffer = glGenBuffers(1)
        glBindBuffer(GL_ARRAY_BUFFER, self.vertexBuffer)
        # set buffer data
        glBufferData(GL_ARRAY_BUFFER, 4*len(vertexData), vertexData,
                     GL_STATIC_DRAW)
        # enable vertex array
        glEnableVertexAttribArray(0)
        # set buffer data pointer
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, None)
        # unbind VAO
        glBindVertexArray(0)

        # time
        self.t = 0

        # texture
        self.texId = glutils.loadTexture('star.png')

        # show circle?
        self.showCircle = False

    # step
    def step(self):
        # increment angle
        self.t = (self.t + 1) % 360
        # set shader angle in radians
        glUniform1f(glGetUniformLocation(self.program, b'uTheta'),
                    math.radians(self.t))

    # render
    def render(self, pMatrix, mvMatrix):
        # use shader
        glUseProgram(self.program)

        # set projection matrix
        glUniformMatrix4fv(self.pMatrixUniform, 1, GL_FALSE, pMatrix)

        # set modelview matrix
        glUniformMatrix4fv(self.mvMatrixUniform, 1, GL_FALSE, mvMatrix)

        # show circle?
        glUniform1i(glGetUniformLocation(self.program, b'showCircle'),
                    self.showCircle)

        # enable texture
        glActiveTexture(GL_TEXTURE0)
        glBindTexture(GL_TEXTURE_2D, self.texId)
        glUniform1i(self.tex2D, 0)

        # bind VAO
        glBindVertexArray(self.vao)
        # draw
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4)
        # unbind VAO
        glBindVertexArray(0)



class RenderWindow:
    """GLFW Rendering window class"""
    def __init__(self):

        # save current working directory
        cwd = os.getcwd()

        # initialize glfw - this changes cwd
        glfw.glfwInit()

        # restore cwd
        os.chdir(cwd)

        # version hints
        glfw.glfwWindowHint(glfw.GLFW_CONTEXT_VERSION_MAJOR, 3)
        glfw.glfwWindowHint(glfw.GLFW_CONTEXT_VERSION_MINOR, 3)
        glfw.glfwWindowHint(glfw.GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE)
        glfw.glfwWindowHint(glfw.GLFW_OPENGL_PROFILE,
                            glfw.GLFW_OPENGL_CORE_PROFILE)

        # make a window
        self.width, self.height = 640, 480
        self.aspect = self.width/float(self.height)
        self.win = glfw.glfwCreateWindow(self.width, self.height,
                                         b'simpleglfw')
        # make context current
        glfw.glfwMakeContextCurrent(self.win)

        # initialize GL
        glViewport(0, 0, self.width, self.height)
        glEnable(GL_DEPTH_TEST)
        glClearColor(0.5, 0.5, 0.5, 1.0)

        # set window callbacks
        glfw.glfwSetMouseButtonCallback(self.win, self.onMouseButton)
        glfw.glfwSetKeyCallback(self.win, self.onKeyboard)
        glfw.glfwSetWindowSizeCallback(self.win, self.onSize)

        # create 3D
        self.scene = Scene()

        # exit flag
        self.exitNow = False



    def onMouseButton(self, win, button, action, mods):
        #print('mouse button: ', win, button, action, mods)
        pass

    def onKeyboard(self, win, key, scancode, action, mods):
        #print('keyboard: ', win, key, scancode, action, mods)
        if action == glfw.GLFW_PRESS:
            # ESC to quit
            if key == glfw.GLFW_KEY_ESCAPE:
                self.exitNow = True
            else:
                # toggle cut
                self.scene.showCircle = not self.scene.showCircle

    def onSize(self, win, width, height):
        #print('onsize: ', win, width, height)
        self.width = width
        self.height = height
        self.aspect = width/float(height)
        glViewport(0, 0, self.width, self.height)


    def run(self):
        # initialize timer
        glfw.glfwSetTime(0)
        t = 0.0
        while not glfw.glfwWindowShouldClose(self.win) and not self.exitNow:
            # update every x seconds
            currT = glfw.glfwGetTime()
            if currT - t > 0.1:
                # update time
                t = currT
                # clear
                glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)

                # build projection matrix
                pMatrix = glutils.perspective(45.0, self.aspect, 0.1, 100.0)

                mvMatrix = glutils.lookAt([0.0, 0.0, -2.0], [0.0, 0.0, 0.0],
                                          [0.0, 1.0, 0.0])
                # render
                self.scene.render(pMatrix, mvMatrix)
                # step
                self.scene.step()

                glfw.glfwSwapBuffers(self.win)
                # poll for and process events
                glfw.glfwPollEvents()

        # end
        glfw.glfwTerminate()

    def step(self):
        # clear
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)

        # build projection matrix
        pMatrix = glutils.perspective(45.0, self.aspect, 0.1, 100.0)

        mvMatrix = glutils.lookAt([0.0, 0.0, -2.0], [0.0, 0.0, 0.0],
                                  [0.0, 1.0, 0.0])
        # render
        self.scene.render(pMatrix, mvMatrix)
        # step
        self.scene.step()

        glfw.glfwSwapBuffers(self.win)
        # poll for and process events
        glfw.glfwPollEvents()

# main() function
def main():
    print("Starting simpleglfw. "
          "Press any key to toggle cut. Press ESC to quit.")
    rw = RenderWindow()
    rw.run()

# call main
if __name__ == '__main__':
    main()

Running the OpenGL Application

Here is a sample run of the project:

$ python simpleglfw.py

The output will be the same as shown in Figure 9-1.

Now let’s take a quick look at some of the utility methods defined in glutils.py. This one loads an image into an OpenGL texture:

def loadTexture(filename):
    """load OpenGL 2D texture from given image file"""
    img = Image.open(filename)
    imgData = numpy.array(list(img.getdata()), numpy.int8)
    texture = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, texture)
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1)
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img.size[0], img.size[1],
                 0, GL_RGBA, GL_UNSIGNED_BYTE, imgData)
    return texture

The loadTexture() function uses the Python Imaging Library (PIL) Image module to read the image file and then gets the data out of the Image object into an 8-bit numpy array. It creates an OpenGL texture object, a prerequisite to doing anything with textures in OpenGL, and performs the now-familiar binding to that texture object so all further texture-related settings apply to it. You then set the unpacking alignment of the data to 1, which means the hardware treats the image data as 1-byte (8-bit) values. The two GL_TEXTURE_WRAP calls tell OpenGL what to do with the texture at the edges; in this case, it just clamps the texture color to the edge of the geometry. (In specifying texture coordinates, the convention is to use the letters S and T for the axes instead of x and y.) The GL_TEXTURE_MAG_FILTER and GL_TEXTURE_MIN_FILTER settings specify the kind of interpolation to use when the texture is stretched or compressed to map onto a polygon; in this case, linear filtering. Finally, glTexImage2D() sets the image data in the bound texture. At this point, the image data is transferred to graphics memory, and the texture is ready for use.

Summary

Congratulations on completing your first program using Python and OpenGL. You have begun your journey into the fascinating world of 3D graphics programming.

Experiments!

Here are some ideas for modifying this project.

1. The vertex shader in this project rotates the square around the z-axis (0, 0, 1). Can you make it rotate around the axis (1, 1, 0)? You can do this in two ways: first, by modifying the rotation matrix in the shader, and second, by computing this matrix in the Python code and passing it as a uniform into the shader. Try both!

2. In the project, the texture coordinates are generated inside the vertex shader and passed to the fragment shader. This is a trick, and it works only because of the convenient values chosen for the vertices of the triangle strip. Pass the texture coordinates as a separate attribute into the vertex shader, similar to how the vertices are passed in. Now, can you make the star texture tile across the triangle strip? Instead of displaying a single star, you want to produce a 4×4 grid of stars on the square. (Hint: use texture coordinates greater than 1.0 and set GL_TEXTURE_WRAP_S/T parameters in glTexParameterf() to GL_REPEAT.)

3. By changing just your fragment shader, can you make your square look like Figure 9-7? (Hint: use the GLSL sin() function.)


Figure 9-7: Using the fragment shader to block out concentric circles
