12.5 The Multiview Buffer
Multiview rendering requires rendering separate views to distinct targets so that
they can be mixed as required. The multiview buffer concept in this architecture
aims to encapsulate important properties of the render targets that can be used in
multiview pipelines.
Instances of multiview buffers are created by the application and attached to the viewports using multiview buffer configuration parameters. These parameters are designed to allow easy high-level configuration of the internal resources (such as textures, render buffers, and frame buffers) that will be created. Although the multiview buffer concept is not designed for extensibility through class inheritance, the configuration parameters give the application developer the required flexibility and can be extended by updating this single interface when needed.
After analyzing possible multiview configurations, we have observed that there are two basic types of multiview buffers:

- On-target buffers, which render directly to the viewport's render target.
- Off-target buffers, which create and manage their own resources as render targets.
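The distinction between the two buffer types, together with the high-level configuration parameters described above, might be captured as follows. This is a hypothetical C++ sketch; the type and member names are illustrative assumptions, not the chapter's actual interface.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch: names are illustrative, not the book's API.
enum class MultiviewBufferType {
    OnTarget,   // render directly to the viewport's render target
    OffTarget   // render to internally managed offscreen surfaces
};

// High-level configuration passed when attaching a buffer to a viewport.
struct MultiviewBufferConfig {
    MultiviewBufferType type = MultiviewBufferType::OffTarget;
    std::size_t viewCount    = 2;     // e.g., 2 for stereo rendering
    bool shareDepthStencil   = true;  // share depth/stencil across views
    float resolutionScale    = 1.0f;  // per-view LOD scale (see Section 12.8)
};
```

Because all options live in one plain struct, extending the configuration later only requires updating this single interface, as noted above.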
An on-target multiview buffer uses the attached viewport's render surface instead of creating any new (offscreen) surfaces. A final compositing phase may not be needed when an on-target multiview buffer is used, because the multiview rendering output is stored in a single surface. The rendering pipeline can still be specialized for per-view operations using the multiview compositor attachments. For example, to achieve on-target anaglyph-based rendering, an attached compositor can select per-view color write modes, in turn separating the color channels of each view, or a compositor can select different rendering regions on the same surface. Also, OpenGL quad-buffer stereo mode can be managed automatically as an on-target multiview buffer, since no additional surfaces need to be set up other than the operating system window surface; its usage depends on the target surface's native support for left and right view buffering.
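The per-view color write modes mentioned above can be sketched as a small helper that a compositor attachment might use before drawing each view (for example, by feeding the result to glColorMask). The function and struct names are hypothetical, assuming view index 0 is the left eye of a red-cyan anaglyph pair.

```cpp
#include <cassert>

// Illustrative sketch (not the book's API): per-view color write masks
// for on-target red-cyan anaglyph rendering.
struct ColorMask { bool r, g, b, a; };

ColorMask anaglyphMaskForView(int viewIndex)
{
    if (viewIndex == 0)                          // left eye: red channel only
        return ColorMask{true, false, false, true};
    return ColorMask{false, true, true, true};   // right eye: green + blue (cyan)
}
```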
An off-target multiview buffer renders to internally managed offscreen surfaces instead of the attached viewport's render surface, and it can thus be configured more flexibly and independently of the target viewport. Offscreen rendering, inherent in off-target multiview buffers, allows the content of each view to be rendered to a different surface (such as a texture). The application viewport surface is later updated with the composite image generated by the attached multiview compositor. If the composition (merge) step of the multiview display device requires that complex patterns be sampled from each view, as is common in lenticular-based displays, or if the per-view outputs need to be stored in separate resources with different configurations (sizes, component types, etc.) as a multiview optimization step, using an off-target multiview buffer is required.
Some additional aspects of off-target buffer configurations are the following:

- The color channel targets need to be separated for each view.
- The depth and stencil targets can be shared between different views if the view-specific images are rendered sequentially. Clearing the depth/stencil buffers after rendering has been completed for each view ensures that each view has its own consistent depth/stencil buffer.
- Specific to OpenGL, a multiview buffer can be assigned a single frame buffer, as opposed to switching frame buffer objects for each view, and the texture attachments may be dynamic. Rendering performance may differ depending on the hardware and the rendering order used.
- For off-target buffers, the sizes of the internal render surfaces are based on the attached viewport's render surface size, since the internal view-specific surfaces are later merged into the viewport render surface.
- The multiview buffers can apply additional level-of-detail settings. Possible approaches are discussed in Section 12.8.
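The last two points above can be combined in a small sizing helper: internal per-view surfaces derive from the viewport's surface size, optionally scaled as a level-of-detail setting. This is an illustrative sketch with assumed names, not the chapter's implementation.

```cpp
#include <cassert>

// Illustrative sketch: off-target buffers size their internal per-view
// surfaces from the attached viewport's surface, optionally scaled as a
// level-of-detail setting (Section 12.8).
struct SurfaceSize { int width, height; };

SurfaceSize internalSurfaceSize(SurfaceSize viewport, float resolutionScale)
{
    SurfaceSize s;
    s.width  = static_cast<int>(viewport.width  * resolutionScale);
    s.height = static_cast<int>(viewport.height * resolutionScale);
    if (s.width  < 1) s.width  = 1;  // never collapse to a zero-sized surface
    if (s.height < 1) s.height = 1;
    return s;
}
```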
12.6 The Multiview Compositor
The multiview compositor component is responsible for merging a given off-target multiview buffer (the render data for specific views) into the target viewport, and it can also be used to define view-specific rendering states. Since the compositing logic is heavily dependent on the target hardware configuration, our architecture supports an extensible multiview compositor design, allowing the programmer to define hardware-specific view-merge routines by inheriting from a base class interface.
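The inheritance-based design described above might look like the following C++ sketch. All names are assumptions, and the engine types are reduced to empty stand-ins so the shape of the interface is visible; a real subclass would issue the actual merge draw calls.

```cpp
#include <cassert>

struct MultiviewBuffer {};  // stand-ins for the engine's real types
struct Viewport {};

// Hypothetical base class interface for hardware-specific view merging.
class MultiviewCompositor {
public:
    virtual ~MultiviewCompositor() = default;

    // Merge the per-view render targets of 'source' into 'target'.
    // Subclasses implement, e.g., an anaglyph merge or a lenticular
    // interleaving pass.
    virtual void composite(const MultiviewBuffer& source, Viewport& target) = 0;
};

// Example subclass skeleton; it only counts invocations here.
class AnaglyphCompositor : public MultiviewCompositor {
public:
    void composite(const MultiviewBuffer&, Viewport&) override { calls++; }
    int calls = 0;
};
```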
The composition phase requires that a multiview buffer provide the rendering results of the different views in separate render targets. Thus, when an on-target multiview buffer is used, there is no need to define a compositing method. Yet using an off-target multiview buffer and a multiview compositor provides a more flexible mechanism, while introducing only slight data, computation, and management overheads.
Since the multiview buffers use GPU textures to store render results, the multiview compositors can process the texture data on the GPU with shaders, as shown in Listings 12.3 and 12.4. Using a shader-driven approach, the view buffers can be upsampled or downsampled in the shaders, using the texture filtering options provided by the GPU (such as nearest or linear filtering).
in vec2 vertexIn;
out vec2 textureCoord;

void main()
{
    textureCoord = vertexIn.xy * 0.5 + 0.5;
    gl_Position = vec4(vertexIn.xy, 0.0, 1.0);
}
Listing 12.3. A sample vertex shader for a parallax-based multiview rendering composition phase.
uniform sampler2D viewL;
uniform sampler2D viewR;
in vec2 textureCoord;

void main()
{
    vec4 colorL = texture2D(viewL, textureCoord);
    vec4 colorR = texture2D(viewR, textureCoord);

    // Create the stripe pattern for the left-right views.
    gl_FragColor = colorR;
    if (mod(gl_FragCoord.x, 2.0) > 0.5)
        gl_FragColor = colorL;
}
Listing 12.4. A sample fragment shader for a parallax-based multiview rendering composition
phase.
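A CPU-side reference of Listing 12.4's merge is handy for verifying shader output. Because gl_FragCoord.x at pixel column x is x + 0.5, the test mod(gl_FragCoord.x, 2.0) > 0.5 selects odd columns for the left view and even columns for the right view. The helper below (an illustrative sketch, one row of single-channel pixels) reproduces that pattern.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// CPU reference of the column-interleave merge in Listing 12.4:
// even columns take the right view, odd columns take the left view.
std::vector<int> interleaveRow(const std::vector<int>& left,
                               const std::vector<int>& right)
{
    std::vector<int> out(left.size());
    for (std::size_t x = 0; x < out.size(); ++x)
        out[x] = (x % 2 == 1) ? left[x] : right[x];
    return out;
}
```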
12.7 Rendering Management
Given the basic multiview components, rendering an object for a specific view is
achieved through the following steps:
1. Activate a specific view on a multiview camera, and update the projection and view matrices.
2. Activate a specific view on a multiview buffer.
3. Activate view-specific object materials and geometries, as part of an object level-of-detail (LOD) system (see Section 12.8).
After all objects are rendered to all of the views, the multiview compositor for the viewport can process the view outputs and generate the final multiview image.
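The per-view activation steps above can be sketched as a single draw routine. This is an illustrative outline with stand-in types; the step log exists only to make the ordering observable.

```cpp
#include <cassert>
#include <string>
#include <vector>

struct RenderLog { std::vector<std::string> steps; };  // stand-in for real state

// Hypothetical sketch of rendering one object into one view.
void renderObjectForView(int objectId, int viewIndex, RenderLog& log)
{
    // 1. Activate the view on the multiview camera; update matrices.
    log.steps.push_back("camera:view" + std::to_string(viewIndex));
    // 2. Activate the view on the multiview buffer (bind its render target).
    log.steps.push_back("buffer:view" + std::to_string(viewIndex));
    // 3. Activate view-specific material/geometry LOD, then draw.
    log.steps.push_back("draw:obj" + std::to_string(objectId));
}
```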
Once the rendering requirements of an object-view pair are known, there are two options for rendering the complete scene, as shown in Figure 12.8. In the first case, a specific view is activated only once, and all of the visible objects are rendered for that view. This process continues until all of the views are completed; such an approach keeps the frame target "hot" and avoids frequent frame buffer swapping. In the second case, each object is activated only once, and it is rendered to all views sequentially, this time keeping the object "hot." This approach can reduce vertex buffer or render state switches if no view-specific geometry or render state data is set up. Also, with this approach, the camera should cache the projection and view matrix values for each view, since the active view changes very frequently. Depending on the setup of the scene and the number of views, the fastest approach may differ. A mixed approach is also possible, where certain meshes in the scene are processed once into multiple views and the rest are rendered as a view-specific batch.
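The two traversal orders can be sketched as nested loops over hypothetical object and view indices. Both emit exactly the same set of (object, view) draws; only the order differs, which is what drives the state-switching costs discussed above.

```cpp
#include <cassert>
#include <utility>
#include <vector>

using Draw = std::pair<int, int>;  // (objectId, viewIndex)

// First case: activate each view once, render all objects into it.
std::vector<Draw> viewMajor(int objects, int views)
{
    std::vector<Draw> out;
    for (int v = 0; v < views; ++v)        // keep the render target "hot"
        for (int o = 0; o < objects; ++o)
            out.push_back({o, v});
    return out;
}

// Second case: activate each object once, render it into all views.
std::vector<Draw> objectMajor(int objects, int views)
{
    std::vector<Draw> out;
    for (int o = 0; o < objects; ++o)      // keep the object "hot"
        for (int v = 0; v < views; ++v)
            out.push_back({o, v});
    return out;
}
```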
Figure 12.8. Rendering order considerations for multiview pipelines. Left: for each view, render all objects in the scene. Right: for each object in the scene, render to all views.
Shared Data between Different Views
It is possible to make use of the coherence between different views during rendering as follows:

- Most importantly, the same scene data is used to render the 3D scene (while this can be extended by using a multiview object LOD system). As a result, animations modifying the scene data need to be applied only once.
- Object or light culling can be applied once per frame using a single shared frustum for multiview camera objects, containing the frustums of the view-specific internal cameras.
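A shared culling volume like the one described above must simply contain every view-specific frustum. As an illustrative simplification (an assumption, not the chapter's method), the sketch below unions axis-aligned boxes over the per-view frustum corners; objects outside the union are invisible in every view and can be culled once per frame.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Axis-aligned bounds over one view frustum's corner points.
struct Box { float minX, minY, minZ, maxX, maxY, maxZ; };

// A single shared culling volume containing all per-view bounds.
Box sharedCullingBox(const std::vector<Box>& viewBoxes)
{
    Box b = viewBoxes.front();
    for (const Box& v : viewBoxes) {
        b.minX = std::min(b.minX, v.minX); b.maxX = std::max(b.maxX, v.maxX);
        b.minY = std::min(b.minY, v.minY); b.maxY = std::max(b.maxY, v.maxY);
        b.minZ = std::min(b.minZ, v.minZ); b.maxZ = std::max(b.maxZ, v.maxZ);
    }
    return b;
}
```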
In summary, only the effective drawing time of a specific viewport is affected when multiview rendering is activated. In addition to increasing the number of draw calls, the multiview composition step also costs render time, especially in low-end configurations such as mobile devices, and it should be implemented optimally. For example, on-target multiview buffers can be preferred to avoid an additional compositing phase if the per-view compositing logic can be applied at render time by regular 3D pipelines. With such an approach, color write modes in the graphics pipeline can be used to set up regular anaglyph rendering, or stencil testing can be used to create per-view compositing patterns.
12.8 Rendering Optimizations
The aim of the multiview rendering optimizations discussed in this section is to provide some basic building blocks that can help programmers reduce total rendering time without sacrificing the perceived quality of the final result. According to binocular suppression theory, one of the two eyes can suppress the other, so the non-dominant eye can receive a lower-quality rendering without reducing the effective quality of the multiview rendering of a scene. A recent study [Bulbul et al. 2010] introduced a hypothesis claiming that "if the intensity contrast of the optimized rendering for a view is lower than its original rendering, then the optimized rendering provides the same percept as if it were not optimized." This and similar guidelines can be studied, implemented, and tested in order to optimize a multiview rendering pipeline and thus render more complex scenes in real time.
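The quoted guideline suggests a simple acceptance test for a candidate optimization: compare the intensity contrast of the optimized view against the original. The sketch below assumes RMS contrast as the measure (one common choice; the study may use a different one) and accepts the optimization only if contrast did not increase.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// RMS contrast: standard deviation of the pixel intensities.
double rmsContrast(const std::vector<double>& intensities)
{
    double mean = 0.0;
    for (double i : intensities) mean += i;
    mean /= intensities.size();

    double var = 0.0;
    for (double i : intensities) var += (i - mean) * (i - mean);
    return std::sqrt(var / intensities.size());
}

// Accept the optimized rendering only if its intensity contrast is not
// higher than the original's (hypothesis of [Bulbul et al. 2010]).
bool optimizationAcceptable(const std::vector<double>& original,
                            const std::vector<double>& optimized)
{
    return rmsContrast(optimized) <= rmsContrast(original);
}
```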
Multiview Level of Detail for Objects
In addition to the object distance parameter, engines supporting multiview architectures can introduce a new detail parameter, the active view number, and use