Figure 11.6. Without a floating window, some objects might be more visible to one eye; using a floating window prevents such situations.
where d is the distance to the viewing plane, s is the separation distance between
the left and right cameras, and fov is the horizontal field-of-view angle.
It is also possible to go one step further and use a dynamic floating window that has a more negative parallax than the closest object, hiding the part of an object that would otherwise be visible to only one eye, as shown in Figure 11.6. This creates the illusion of moving the screen surface forward. The transition between the frame and the scene can also be softened by applying a gradation or motion blur at the edges of the screen.
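A minimal sketch of a dynamic floating window follows. The helper drawScreenQuad() and the constants screenWidth, screenHeight, and kBlack are hypothetical, and the margin is just a tuning value; the viewport functions are used in the style of the listing later in this chapter.

void drawDynamicFloatingWindow(float nearestParallax)
{
    // nearestParallax is the most negative on-screen parallax of any
    // visible object, in pixels (negative = in front of the screen).
    if (nearestParallax >= 0.0f)
        return; // Nothing crosses the screen plane; no window is needed.

    // Give the window slightly more negative parallax than the closest
    // object so that the frame always appears in front of it.
    const float margin = 4.0f; // pixels; tuning value
    float maskWidth = -nearestParallax + margin;

    // Masking the left edge of the left image and the right edge of the
    // right image moves the perceived screen frame toward the viewer.
    setLeftEyeViewport();
    drawScreenQuad(0.0f, 0.0f, maskWidth, screenHeight, kBlack);

    setRightEyeViewport();
    drawScreenQuad(screenWidth - maskWidth, 0.0f, maskWidth, screenHeight, kBlack);
}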
We might also have to consider limiting the maximum parallax to prevent the divergence that occurs when the on-screen separation between an object's left and right images is larger than the distance between our eyes (about 6.4 cm). Thankfully, the HDMI 1.4 specification allows retrieving the size of the TV, which can be used to calibrate the camera separation. Depending on the screen size, the number of pixels N contained within this distance is given by
$$N = \frac{d_{\text{interocular}} \; w_{\text{pixels}}}{w_{\text{screen}}},$$
where $d_{\text{interocular}}$ is the distance between the eyes measured in centimeters, $w_{\text{pixels}}$ is the width of the screen in pixels, and $w_{\text{screen}}$ is the width of the screen measured in centimeters. For example, for a 46-inch TV and a resolution of $1920 \times 1080$ pixels, the number of pixels N for a typical human interocular distance is about 122 pixels.
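To make the arithmetic concrete, here is a small sketch (plain C++; the function names are ours, not from the chapter) that computes N from the screen size reported over HDMI:

#include <cmath>

// Width in centimeters of a 16:9 screen, given its diagonal in inches.
float screenWidthCm(float diagonalInches)
{
    const float ax = 16.0f, ay = 9.0f;
    float widthInches = diagonalInches * ax / std::sqrt(ax * ax + ay * ay);
    return widthInches * 2.54f; // 2.54 cm per inch
}

// Number of pixels N covered by the interocular distance on this screen.
float interocularPixels(float interocularCm, float screenWidthPixels, float widthCm)
{
    return interocularCm * screenWidthPixels / widthCm;
}

// Example: a 46-inch TV is about 101.8 cm wide, so at 1920x1080,
// interocularPixels(6.4f, 1920.0f, screenWidthCm(46.0f)) yields roughly
// 121 pixels, in line with the "about 122" quoted above.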
Figure 11.7. Frame packing at 720p: the left and right 1280 × 720 images are stacked vertically, separated by a 30-pixel gap filled with black, for a total frame of 1280 × 1470 pixels.
11.5 Technical Considerations
Creating a stereoscopic 3D scene impacts the run-time performance because there is an additional workload involved. At full resolution, this implies rendering the scene twice. For game engines that are heavily pixel-bound, such as many deferred renderers, this might be critical. Also, the frame buffer and depth buffer need to be larger, especially when using the frame-packing mode exposed by the HDMI 1.4 specification, as shown in Figure 11.7.
To overcome this additional workload, some hardware provides an internal scaler that lets us keep the same memory footprint and pixel count as a native monoscopic application by using a display mode such as 640 × 1470. An additional problem is related to swapping the front and back buffers. Monoscopic games can choose either to wait for the next vertical blank or to perform an immediate flip, keeping a higher frame rate at the cost of possible screen tearing. With stereoscopic games that rely on the frame-packing mode, doing so would generate tearing in one eye only, which is very uncomfortable to watch. As a consequence, it might be a better choice to run at a slower fixed frequency.
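In code, the flip policy might look like the following sketch, where presentAtVBlank() and presentImmediate() are hypothetical stand-ins for the platform's swap calls:

void presentFrame(bool stereoscopic)
{
    if (stereoscopic)
    {
        // A mid-scanout flip would tear in only one eye of a frame-packed
        // image, so always wait for the vertical blank and accept a fixed
        // (possibly lower) frame rate.
        presentAtVBlank();
    }
    else
    {
        // Monoscopic games may trade occasional tearing for lower latency.
        presentImmediate();
    }
}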
11.6 Same Scene, Both Eyes, and How to Optimize
When rendering the scenes, the data needs to be synchronized for both eyes to
prevent artifacts. Thankfully, some elements of the scene are not view-dependent
and can therefore be shared and computed once. Consider the following typical
game loop:
while (!dead)
{
    updateSimulation(time);           // view-independent, run once per frame
    renderShadowMaps();               // view-independent, shared by both eyes
    renderScene(LeftEye, RightEye);   // view-dependent, rendered per eye
    renderHUD(LeftEye, RightEye);     // view-dependent, rendered per eye
    vsyncThenFlip();                  // wait for the vertical blank, then flip
}
Figure 11.8 presents ways to minimize the impact for both the GPU and the
CPU by ensuring that view-independent render targets are shared. Some effects
that are view-dependent, such as reflections, can sometimes be shared for both
views if the surface covered is relatively small, as it often is for mirrors. This
leads to artifacts, but they might be acceptable. On some platforms like the
PlayStation 3, it is also possible to perform some effects asynchronously on the
SPU, such as cascaded shadow maps. In particular, the CPU overhead can also be
reduced by caching the relevant rendering states.
It is also possible to use multiple render targets (MRTs) to write to both the left and right frame buffers in a single pass, as depicted in Figure 11.9. This is useful when rendering objects at the screen level or when applying full-screen postprocessing effects, such as color enhancement or crosstalk reduction.
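A minimal sketch of such a pass follows. The binding helpers and the fragment program name are hypothetical; the fragment program is assumed to write one output per render target:

// Bind both eye buffers as MRT0 and MRT1 and post-process them together.
setRenderTarget(0, leftFrameBuffer);     // MRT0 = left eye
setRenderTarget(1, rightFrameBuffer);    // MRT1 = right eye
setTexture(0, leftSceneTexture);         // input: rendered left view
setTexture(1, rightSceneTexture);        // input: rendered right view
setFragmentProgram(colorEnhanceStereo);  // writes to both MRT outputs
drawFullScreenQuad();                    // one pass covers both eyes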
Figure 11.8. Scene management where view-independent render targets are computed once. Rendered for each eye: back buffer, depth/stencil buffer, HDR, blur, bloom, mirrors, parallax mapping, depth of field, and so on. Rendered once for both eyes: shadow maps, spot light maps projected in the scene, and offscreen surfaces.
Figure 11.9. Multiple render targets allow us to write to both the left and right frame
buffers in a single pass.
Some GPUs flush the rendering pipeline when a new surface is bound as a
render target. This might lead to a performance hit if the renderer frequently
swaps surfaces for the left and right eyes. A simple solution for avoiding this
penalty consists of binding a single surface for both eyes and then moving the
viewport between left and right rendering positions, as illustrated in Listing 11.1.
setRenderStates();
setLeftEyeProjection();
setLeftEyeViewport(); // surface.x = 0, surface.y = 0
Drawcall();
setRightEyeProjection();
setRightEyeViewport(); // surface.x = 0, surface.y = 720 + 30
Drawcall();
// You can carry on with the same eye for the next object to
// minimize the change of projection matrix and viewport.
setRenderStates();
Drawcall();
Fragment
program
MRT0
MRT1
Left scene
Right scene
Input textures
Render targets
setLeftEyeProjection();
setLeftEyeViewport();
Drawcall();
Listing 11.1. This code demonstrates how the images for the left and right eyes can be combined
in a single render target by moving the viewport.
11.7 Scene Traversal
To improve the scene traversal for both cameras, it is possible to take into account the similarities between both views. This can be used to improve the scene management at a lower granularity. In fact, if we assume that a point has the position $P_{\text{right}} = (x, y, z)$ relative to the right-eye viewing position, then the observation that the corresponding point relative to the left-eye viewing position is $P_{\text{left}} = (x + e, y, z)$, where e is the camera separation, allows us to improve the scene traversal: a normal vector N to a polygon computed for one eye is also valid for the other eye. This could lead to improved hidden surface removal and could also help us discard objects for both eyes using conservative occluders for occlusion queries, minimizing the CPU and GPU workload. On the PlayStation 3 platform, a common approach is to perform backface culling using SPU programs, which makes it possible to perform backface culling for both views in a single pass. This test consists of performing a dot product and comparing the result against zero. If we compute each view separately, this involves the following operations:
$$2 \times (3\ \text{multiplications} + 3\ \text{additions} + 1\ \text{comparison}) = 6\ \text{multiplications} + 6\ \text{additions} + 2\ \text{comparisons}.$$
This can be improved, and we need to consider the following cases:
Polygon is front-facing for both views.
Polygon is back-facing for both views.
Polygon is front-facing for one view and back-facing for the other view.
Let N be the normal of a triangle, let $V_L = (x_L, y_L, z_L)$ be the direction from one of the triangle's vertices to the left camera position, and let $V_R = (x_R, y_R, z_R)$ be the direction from the same vertex to the right camera position. The triangle is front-facing for the left camera if $N \cdot V_L > 0$, and it is front-facing for the right camera if $N \cdot V_R > 0$. Using $x_L = x_R - e$, we need …
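The rest of the derivation is not shown here, but the relation $x_L = x_R - e$ already suggests how the second test can reuse the first: since $V_L = V_R - (e, 0, 0)$, we have $N \cdot V_L = N \cdot V_R - e\,N_x$. The following sketch is our own illustration of that idea, not the chapter's code; it classifies a triangle against both eyes with a single full dot product:

struct Vec3 { float x, y, z; };

static inline float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Classify a triangle against both eyes in one pass.
// n:  triangle normal, expressed in a space whose x axis is the cameras'
//     separation axis.
// vR: direction from one of the triangle's vertices to the right camera.
// e:  camera separation, so that vL = vR - (e, 0, 0).
// Bit 0 of the result is set if the triangle is front-facing for the
// left eye, bit 1 if it is front-facing for the right eye.
unsigned classifyBothEyes(const Vec3& n, const Vec3& vR, float e)
{
    float dR = dot(n, vR);       // 3 multiplications, 2 additions
    float dL = dR - e * n.x;     // 1 multiplication, 1 addition
    return (dL > 0.0f ? 1u : 0u) | (dR > 0.0f ? 2u : 0u); // 2 comparisons
}

Compared with computing the two dot products independently, this saves two multiplications and at least one addition per triangle, which adds up at the granularity of per-triangle SPU culling.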