R. Brémond; N.-T. Dang; C. Villa Paris Est University, Marne-la-Vallée, France
Driving simulation could benefit from high dynamic range computer graphics images, as well as high dynamic range display devices. In this chapter, we review the potential benefits of this technology for various uses of driving simulators, and discuss the obstacles which make it difficult today to promote these technologies in driving simulation applications. Benefits are first expected in behavioral studies, where a close link between the simulated and the real visual experience improves the validity of the driving simulation experiments. This is the case, for instance, in night driving simulation, where very dark areas need to be displayed together with very bright ones.
Driving simulation; Visual perception; Visual performance; High dynamic range display; Tone mapping; Computer graphics
Driving simulation has become a quite common virtual reality tool, addressing various fields, such as video games, vehicle design, driving lessons, and behavioral studies (Fisher et al., 2011). As in other fields of virtual reality, driving simulator providers have developed a number of visual effects in order to render a variety of environmental situations. For instance, night driving, rain, fog, and glare can be simulated with the use of state-of-the-art techniques from the computer graphics (CG) literature to mimic either a physical phenomenon (eg, beam pattern in automotive lighting) or its effect on driver perception (eg, fog), and to minimize the perceptual gap between the displayed image and the computed image with tone mapping operators (TMOs) (Reinhard et al., 2005). According to Andersen (2011), some visual factors are critical for the external validity of driving simulations (ie, validity with respect to actual driving). He emphasizes luminance and contrast as the most important visual factors for the main critical driving situations: night driving, and driving in rain, fog, and complex urban environments.
In this context, high dynamic range (HDR) rendering and display may improve the realism of a number of visual cues relevant to driving. However, the implementation of CG algorithms is not straightforward, and a trade-off is needed among cost (financial and computational), performance, and market (or user) demand. Moreover, in many driving situations, the driver’s visibility is good, and looking at the relevant visual cues is easy, both in real and in virtual environments. In these situations, the benefit of HDR images is often seen as too low, considering the associated costs and constraints.
In this chapter, we discuss to what extent HDR rendering and display has improved, or can improve driving simulators, and why HDR has not invaded the field yet. We will focus on driving simulation as a tool for behavioral studies; automotive design is considered in another chapter of this book.
Section 21.2 provides some evidence that HDR issues are not considered in most current driving simulator studies, even in vision-demanding situations. Then, we argue (Section 21.3) that some low-level visual factors relevant to driving are photometry dependent, and thus should benefit from a real-time HDR imaging and display system. After a short discussion in Section 21.4 of HDR rendering and display issues in CG, Section 21.5 reviews the few existing driving simulation studies where photometric issues and HDR components have been considered, and discusses what kind of realism, whether physical, perceptual, or functional (Ferwerda and Pattanaik, 1997), is now available or will be soon, and for what purpose. This includes a small number of simulations with a true HDR display device. We conclude (Section 21.6) by discussing the reasons why HDR has not yet invaded the field of driving simulation: in our opinion, a better understanding of these obstacles is needed in order to promote HDR rendering and display more efficiently. We also present some technical and experimental perspectives toward a more intensive development of HDR video for driving simulations, and for vision science issues relevant to driving situations.
One cannot say that driving simulator developers do not care about rendering issues. For instance, some level of realism may be important in video games (which must keep up with the state of the art in their market), and improves the driver’s sense of immersion. But perceptual realism is needed only when visual performance or visual appearance issues arise, such as at night, in low-visibility situations, or in complex situations where the visual saliency of objects in the scene may attract the driver’s visual attention (Hughes and Cole, 1986) and needs to be carefully simulated if one wants the driver’s visual behavior in the simulator to be similar to a real driver’s behavior.
This is the main point: perceptual realism is not a key issue in mainstream applications. For instance, in a recent review of the driver’s visual control of the vehicle’s lateral position (Lappi, 2014), low-visibility conditions are not even mentioned. Only a few people around the world are concerned with visual performance or visual appearance in a car: first, because it helps in the vehicle design (see Chapter 19); second, because it is needed for behavioral realism in driving situations where visual perception is a complex task (Brémond et al., 2014).
Night driving is a good example of a driving situation where both high and low luminance levels are expected to occur, leading to a high luminance range. Automotive and road lighting sources may appear in the field of view (with luminance values up to 25 × 10⁹ cd/m² with xenon automotive lighting), while dark areas at night are in the mesopic range (below 3 cd/m²), and may be in the scotopic range in some areas (below 0.005 cd/m²), where the human visual system’s behavior and performance differ from what happens in the daylight photopic range (CIE, 1951, 2010).
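The luminance boundaries quoted above are easy to operationalize. The sketch below is our own illustration (the function name and hard thresholds are ours; the mesopic range is in reality a gradual transition, not a sharp cut) and simply classifies an adaptation luminance into one of the three vision regimes:

```python
def vision_regime(luminance_cd_m2: float) -> str:
    """Classify an adaptation luminance (cd/m^2) into a vision regime,
    using the approximate boundaries quoted in the text (CIE, 1951, 2010)."""
    if luminance_cd_m2 < 0.005:
        return "scotopic"   # rod-only vision, very dark areas
    if luminance_cd_m2 < 3.0:
        return "mesopic"    # mixed rod/cone vision, typical at night
    return "photopic"       # cone-dominated daylight vision
```

Typical night driving scenes span all three regimes at once, which is precisely what an LDR display cannot reproduce.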
Evidence shows that visual performance declines when driving at night (Owens and Sivak, 1996; Wood et al., 2005), and some components of that performance (labeled focal vision) are more impaired than others (ambient vision) (Owens and Tyrrell, 1999). Moreover, the usual low dynamic range (LDR) display devices cannot display scotopic and low mesopic luminance values, nor glaring lights. Thus, one would expect nighttime driving simulation to carefully consider illumination issues and glare, and take advantage of HDR rendering and display. But this is not what happens. For instance, it is striking that in their review of perceptual issues in driving simulations, Kemeny and Panerai (2003) did not even mention driving at night as an issue.
Considering the number of driving simulation studies in the last 15 years, the number of published studies which include a night driving condition is unexpectedly small: to the best of our knowledge, only around 30 studies have done so (see also Wood and Chaparro, 2011). Moreover, available nighttime simulations almost never provide any information about the technical settings or performance in nighttime conditions. For instance, the Material and methods section of articles might not even mention night driving (Panerai et al., 2001), or will offhandedly state that “the main part of the evaluation consisted of eight spells of driving, featuring different combinations of lighting condition (day/night)” (Alexander et al., 2002). Interestingly, most of these studies have been published in medical and biological science journals (Gillberg et al., 1996; Banks et al., 2004; Campagne et al., 2004; Contardi et al., 2004; Pizza et al., 2004; Åkerstedt et al., 2005; Silber et al., 2005; Schallhorn et al., 2009), and address hypovigilance and drug use issues. In a few articles, the lack of information about night driving settings is mitigated by a figure showing the visual appearance of the night driving simulation (Konstantopoulos et al., 2010; Schallhorn et al., 2009); this somewhat reinforces the impression that the experimenters have little control over illumination issues. The same impression arises from a series of driving simulation experiments at night where the ambient luminance was controlled by neutral density filters (Alferdinck, 2006; Bullough and Rea, 2000) or goggles (Brooks et al., 2005) applied to daylight simulated scenes (the “day for night” cinematographic technique), at the cost of unrealistic visual environments.
This lack of reported technical or photometric details also occurs with fog. For instance, in an important driving simulation study by Snowden (1998), showing that speed perception is altered in fog, little information was provided about the simulated fog. Indeed, in most articles reporting driving simulator studies in fog (Saffarian et al., 2012), no information is given about the fog density; no technical information is provided either, and one is left to guess that the simulator used OpenGL fog, that is, a contrast attenuation associated with the object’s distance. This means that a minimal model of fog is deemed acceptable, as we have seen for night simulation. It is possible with OpenGL to fit the physical law of contrast attenuation in fog (Koschmieder’s law; see Middleton, 1952), but no information is given in this respect in the articles cited above. For instance, Broughton et al. (2007) compared the driver’s behavior in three visibility conditions: two fog densities were compared with a no-fog condition. The fog conditions are described in terms of a “visibility limit,” which probably means that the authors used a nonphysical tuning of the OpenGL fog. Moreover, simulation of artificial lighting (automotive lighting, road lighting, etc.) with OpenGL is rather complex (Lecocq et al., 2001).
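Tying an OpenGL-style exponential fog to a physical visibility distance takes only a few lines. The sketch below is illustrative (the function names are ours); it uses the conventional 5% contrast threshold that defines meteorological visibility:

```python
import math

def extinction_from_visibility(v_met: float) -> float:
    """Extinction coefficient k (1/m) from the meteorological visibility
    distance V (m), using the 5% contrast threshold: exp(-k * V) = 0.05,
    hence k = -ln(0.05) / V, approximately 3 / V."""
    return -math.log(0.05) / v_met

def apparent_contrast(c0: float, k: float, d: float) -> float:
    """Koschmieder's law: the intrinsic contrast c0 of an object seen
    through fog is attenuated exponentially with distance d (m)."""
    return c0 * math.exp(-k * d)
```

Passing `k` as the density of an exponential fog model would yield a physically grounded contrast attenuation, instead of the nonphysical “visibility limit” tunings discussed above.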
It is common knowledge that vision is the main sensory channel to collect information during driving (Allen et al., 1971; Sivak, 1996), and in a driving simulator, CG images are supposed to provide the driver’s visual information.
The link between the displayed images and driving behavior is not direct; it is mediated by notions from vision science, such as luminance and contrast (Andersen, 2011), the visibility level (Adrian, 1989; Brémond et al., 2010b), the adaptation luminance (Adrian, 1987a; Ferwerda et al., 1996), motion, distance, and speed perception (Snowden, 1998; Cavallo et al., 2002; Caro et al., 2009), scotopic and mesopic vision (Gegenfurtner et al., 1999), glare (Spencer et al., 1995), and the visual saliency of the relevant/irrelevant objects in a scene (Brémond et al., 2010a), such as road markings (Horberry et al., 2006) and advertising. These visual factors first impact the visual performance and then the driving behavior. These are photometry-based concepts, and were not controlled for in the above-cited driving simulator studies.
For all these issues, photometric control of the images is mandatory. In some cases, an HDR display may be needed; alternatively, TMOs may help to minimize the gap between the ideal and the displayed visual information. For instance, road lighting design needs some criterion, and the visibility level of a reference target on the road, as seen by the driver, has been proposed as such a criterion (Adrian, 1987b). The American standard includes this concept in the small target visibility assessment of road lighting (IESNA, 2000), and the French standard also includes this visibility level index (AFE, 2002).
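The visibility level itself is a simple ratio; a minimal sketch (our own naming, following the definition in Adrian, 1987b) is:

```python
def visibility_level(l_target: float, l_background: float,
                     delta_l_threshold: float) -> float:
    """Visibility level: the actual luminance difference between a target
    and its background, divided by the threshold luminance difference at
    which the target is just visible. VL >= 1 means the target is above
    threshold; road lighting standards require a comfortable margin."""
    return abs(l_target - l_background) / delta_l_threshold
```

In practice the threshold term is not a free input: it is computed from a target visibility model as a function of adaptation luminance, target size, presentation time, and observer age, which is precisely why photometric control of the displayed images matters.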
To assess an operator’s quality, one needs a quality criterion. This is not so easy, and for instance, the correlation is weak among visual appearance, visual performance, and visual saliency in an image (Brémond et al., 2010b), so a choice is needed. In previous evaluations, visual appearance was considered first in most benchmarks (Eilertsen et al., 2013).
While visibility is considered by practitioners as a key perceptual issue in night driving, is it possible to preserve the visibility level of objects with a TMO? Some authors have proposed operators in order to control some kind of visual performance (Ward, 1994; Pattanaik et al., 1998), and Grave and Brémond (2008) proposed an operator focusing on preserving the visibility level. For dynamic situations, Petit and Brémond (2010) proposed a TMO preserving visibility, based on the work of Irawan et al. (2005) and Pattanaik et al. (2000). So, some efforts have been made to design TMOs shaped by visual performance constraints. On the other hand, the main effort in TMO design has been devoted so far to appearance criteria, such as color appearance and lightness perception, rather than visual performance criteria.
HDR issues have a specific flavor in CG. The split between image computation and image display is also relevant, but the problems are not the same. First, use of HDR virtual sensors for HDR image computation is now possible, because graphics processing units can manage float values. The main constraint is to run in real time, rather than sensor design or noise issues. Indeed, it is possible with pixel shaders to allocate some sensitivity to the virtual sensors (ie, compute the CG images in float units), even if the image computation does not simulate light propagation in the virtual scene in physical units.
The situation is quite different for HDR image display. HDR display devices now exist: Seetzen et al. (2004) demonstrated a prototype at SIGGRAPH in 2004 (see Part IV of this book for an update on HDR display), and commercial devices based on Seetzen’s ideas have followed (first from Brightside, now from Dolby). But this technology is still very expensive compared with conventional displays, and as a result, HDR display devices are very rare in human factors laboratories.
An important issue for driving simulators is that a large field of view is often needed, which is almost impossible to address with existing HDR display devices. For instance, most low-cost driving simulators use three displays, and in many driving situations, a field of view of 150° is needed (eg, if you have to cross an intersection). Virtual reality helmets can be viewed as an alternative as far as the field of view is concerned, but HDR displays are not available for these devices at the moment.
So the eight-bit frontier is still hard to cross for the driving simulator’s display, and the CG pipeline, which is expected to link the rendering part to the display part of the loop, tends to use TMOs in order to overcome the lack of HDR displays. As we have mentioned, in the case of visual performance preservation, this led to a number of TMOs (see Reinhard et al., 2005), followed by some concerns about the evaluation of these operators.
What would be a good design criterion for a TMO dedicated to driving simulation? First, real-time operation is mandatory. Second, temporal fluidity is needed, in order to avoid rapid changes and oscillations in the visual adaptation level, which may be due to a light source appearing in (or leaving) the field of view (Petit et al., 2013). This can be done by simulating the time course of visual adaptation (Pattanaik et al., 2000). Third, as we have emphasized already, fidelity in terms of visual performance (rather than visual appearance) is relevant in most driving simulation applications, because the main goal of driving simulation experiments is the study of driver behavior, which in turn depends on the visual cues the driver finds in his or her environment.
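As an illustration of the first two criteria, the sketch below combines the global photographic operator of Reinhard et al. (2002) with a simple exponential smoothing of the adaptation level across frames, a crude stand-in for the time course of visual adaptation (Pattanaik et al., 2000). It is a sketch under our own assumptions (the class name and the smoothing constant are illustrative), not the operator used in the studies cited in this chapter:

```python
import numpy as np

class SmoothedReinhardTMO:
    """Global photographic tone mapping with temporally smoothed adaptation."""

    def __init__(self, key: float = 0.18, tau: float = 0.5):
        self.key = key        # target mid-grey of the mapped image
        self.tau = tau        # adaptation time constant (s), illustrative
        self.log_avg = None   # smoothed log-average (adaptation) luminance

    def __call__(self, luminance: np.ndarray, dt: float = 1 / 60) -> np.ndarray:
        eps = 1e-6
        frame_avg = float(np.exp(np.mean(np.log(luminance + eps))))
        if self.log_avg is None:
            self.log_avg = frame_avg
        else:
            # Exponential tracking: a bright source entering the field of
            # view shifts the adaptation level gradually, avoiding the
            # frame-to-frame oscillations mentioned in the text.
            a = 1.0 - np.exp(-dt / self.tau)
            self.log_avg += a * (frame_avg - self.log_avg)
        scaled = self.key * luminance / self.log_avg
        return scaled / (1.0 + scaled)   # display values in [0, 1)
```

Note that this baseline addresses only the first two criteria; a visual performance criterion (eg, visibility level preservation) requires the dedicated operators discussed above.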
Maybe the reader has so far found this chapter a bit pessimistic about the success of an HDR approach in driving simulation. The picture should be qualified, however, and it is worth mentioning some articles where the photometric tuning of night driving simulations is taken seriously. The first one, to our knowledge, is from Mortimer (1963), with a very particular driving simulator: in 1963 no computers were available, and the simulator was purely electromechanical. In the computer era, Featherstone et al. (1999) conducted a field study, collecting reference data about car headlights at night, and tuned the rendering of the simulator in terms of contrast, color, and luminance.1 Kemeny and his team (Dubrovin et al., 2000; Panerai et al., 2006), at the Renault Virtual Reality Centre, conducted several studies focusing on the simulation of automotive lighting, based on a simulation of light propagation, projecting lightmaps from the headlamps onto the road surface (see also Weber and Plattfaut, 2001). Horberry et al. (2006) and Brémond et al. (2013) attempted to control the luminance map of the rendered images of night driving, which makes sense because their articles address road marking and road hazard visibility, respectively.
At night, glare is a key issue, as the usual displays cannot produce a glare sensation. To overcome this problem, Spencer et al. (1995) proposed a biologically inspired algorithm which simulates the effects of glare on vision (bloom, flare lines, lenticular halo) in CG images. Some technical solutions have also been proposed to simulate fog with a control on the luminance map, with a display calibration and a physical OpenGL tuning (Cavallo et al., 2002; Espié et al., 2002), and the simulation of halos around light sources (Lecocq et al., 2002; Dumont et al., 2004). The main issue with fog is contrast attenuation, rather than luminance values.
In addition, many driving simulator developments have not been published, because they are conducted by industrial firms which do not want to make their internal developments public. These firms do discuss HDR issues openly, however, and HDR rendering is mentioned in the technical documents of some driving simulation software. For example, SCANeR HEADLIGHT Interactive Simulation (OKtal, 2015) supports HDR rendering for realistic night driving experiments. Note that the OKtal software is widely used by automobile companies in France (eg, Renault, Valeo, PSA). In Germany, VIRES also supports HDR rendering (VIRES Virtual Test Drive software) (Vires, 2015). HDR rendering is also mentioned among the features of OpenDS (Math et al., 2013; OpenDS, 2015), a recently developed open-source driving simulator which originated from the European Union Seventh Framework Programme project GetHomeSafe (GetHomeSafe, 2015). Some details of these technical developments are sometimes published, as in the case of Pro-SiVICTM, software developed by CIVITEC, where HDR textures are used in sensor simulation for the prototyping of advanced driver assistance systems (Gruyer et al., 2012). Optis is also active with regard to HDR issues, with SPEOS and Virtual Reality Lab (Optis, 2015).
The direct use of an HDR display device in driving simulations is still in its infancy. Shahar and Brémond (2014) were the first to use a true HDR display device (a 47-inch Dolby/SIM2 Solar), under photometric control, to conduct a driving simulation where night driving behaviors with and without LED road studs were compared. The automotive lighting, road lighting, and LED road stud beam pattern were tuned to realistic values, with use of direct photometric measurements of the road surface, the road markings, and the LEDs themselves on the screen.
The main issue was to run in real time with three screens (1920 × 1080 pixels each). The geometric configuration of the simulator was chosen in such a way that when the road studs were switched on, they were very likely to appear in the central screen; it was therefore decided to run the simulation with one HDR display device in front of the driver, and an LDR display device on each side. The main purpose of these lateral screens was to give the driver some sense of his/her own speed.
Several technical challenges needed to be addressed, among them the number of light sources and the lighting simulation itself (Dang et al., 2014). The IFSTTAR visual loop, developed under Open Scene Graph, supports two HDR renderings: one originated from a TMO proposed by Petit and Brémond (2010) and the other was adapted from a TMO proposed by Reinhard et al. (2002). Since thousands of road studs were required in this study, a particular way of controlling the light sources was adopted to guarantee a high frame rate, a key issue in real-time simulations. Each light source was simulated on the basis of the photometric characteristics of real LED road studs, as measured in the IFSTTAR photometry laboratory; their intensity was made dynamically controllable during the simulation. The LED road studs were divided into groups, each of them being controlled by a virtual group manager. This organization was particularly useful in this study, because the road studs were turned on/off automatically by these group managers depending on the vehicle’s position. Another challenge was the simulation of realistic night driving conditions with high luminance range, with bright areas due to the studs, the road lighting, and the headlamps of incoming vehicles, while very dark areas were also needed in the nighttime countryside landscape.
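The grouping strategy described above can be sketched as follows. This is our own illustrative reconstruction: the class names, the activation window, and the on/off rule are assumptions, not the IFSTTAR implementation.

```python
from dataclasses import dataclass

@dataclass
class RoadStud:
    position: float          # curvilinear abscissa along the road (m)
    intensity: float = 0.0   # current luminous intensity (cd)

class StudGroupManager:
    """Owns a contiguous run of studs and lights only those in a window
    ahead of the vehicle, keeping the per-frame count of active light
    sources (and hence the frame rate) bounded."""

    def __init__(self, studs, on_intensity=1.0, window=200.0):
        self.studs = studs
        self.on_intensity = on_intensity  # eg, measured intensity of a real LED stud
        self.window = window              # activation distance ahead of the vehicle (m)

    def update(self, vehicle_abscissa: float) -> int:
        lit = 0
        for stud in self.studs:
            ahead = stud.position - vehicle_abscissa
            on = 0.0 <= ahead <= self.window
            stud.intensity = self.on_intensity if on else 0.0
            lit += on
        return lit  # number of active light sources this frame
```

With thousands of studs partitioned among such managers, only a bounded subset contributes light sources to any given frame, which is what keeps the simulation real time.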
This experiment used an HDR display device, but with eight-bit (LDR) input images from the driving simulation visual loop. Thus, the benefit of the HDR display was the luminance dynamics, not the luminance sensitivity. The next challenge will be to develop a full HDR driving simulator visual loop, and feed an HDR display device with HDR videos.
This rapid overview of the potential benefits of HDR rendering and display for driving simulations leads to a balanced conclusion. On the one hand, dynamic range is marginally addressed in current driving simulation studies. The main reason is that the expected benefits of HDR are associated with low-level vision issues. Although they are known to have an impact on the driver’s behavior, this impact is limited to some specific situations, such as night and fog driving, or drivers with poor vision.
Whereas a smart tuning allows some qualitative control of the visual appearance in virtual environments, in quantitative behavioral studies there is a need for some photometric control of the displayed images. This includes physical, optical, and photometric data on light sources, participating media, and surfaces.
Of course, another reason for the limited interest in HDR in the driving simulation community is the cost. A photometric description is important for HDR imaging, but it comes at a price: a photometric description of the virtual environments, the need for photometric data for surfaces and light sources, real-time issues, etc. Further, HDR display devices are still very expensive, difficult to use with current video output formats, and seldom required, even for nighttime driving simulations to be published in peer-reviewed journals. Also, most of the tone mapping literature (both on TMOs and on TMO evaluation) focuses on appearance criteria (eg, subjective fidelity), not on performance criteria (visibility, reaction time, etc.), which would be needed for driving simulations in low-visibility conditions.
But there is another side to this story, and we have tried to show that in some important cases, control of the image luminance map allows us to expect a much better fidelity between virtual reality and actual driving in terms of “Where do we look?” and “What do we see?” This, in turn, is known to impact driving behavior. Thus, HDR imaging, rendering, and display open the way to new driving simulation applications, and to situations where the external validity was poor in previous studies: photometric parameters were known to impact behavior, but they were not controlled.
This is why we plan to conduct experiments soon to assess the influence of an HDR display device on psychovisual and driving tasks. To that end, the impact of the display on specific perceptual mechanisms underlying driving behavior will be compared on a driving simulator, first with an LDR display (ie, HDR imaging followed by a TMO) and then with an HDR display.
For instance, the contribution of an HDR display device to speed perception will be assessed in a driving context. We can do this by estimating the “time to collision,” using moving stimuli on both kinds of display devices. Perceived speed is known to depend on the luminance contrast (Stone and Thompson, 1992); more specifically, at low contrast or at night, the perceived speed of an object is underestimated (Blakemore and Snowden, 1999; Gegenfurtner et al., 1999). It can thus be assumed that this bias is closer to the real one with an HDR display device than with an LDR display device. Therefore, use of an HDR display device may allow the investigation of reduced-visibility situations such as nighttime driving or glare. More broadly, a benefit is expected when driving simulation studies are conducted on an HDR display device whenever speed perception is a key factor (either the driver’s own speed or the speed of other vehicles), especially in reduced-visibility situations.
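The expected effect on time-to-collision judgments can be made concrete with a first-order model. The multiplicative speed bias below is our own simplification for illustration; the function names are ours:

```python
def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """First-order time to collision: distance divided by closing speed."""
    return distance_m / closing_speed_mps

def perceived_ttc(distance_m: float, closing_speed_mps: float,
                  speed_bias: float) -> float:
    """TTC as judged by an observer whose perceived speed is scaled by
    speed_bias (< 1 at low contrast, after Stone and Thompson, 1992):
    underestimated speed leads to an overestimated TTC."""
    return distance_m / (closing_speed_mps * speed_bias)
```

If an HDR display restores the real-world contrast, `speed_bias` should move closer to the value observed on the road, and so should the TTC estimates collected in the simulator.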
New lighting systems (signaling, lighting, headlamp beams, stop lights) may also benefit from HDR driving simulations. This was done, for instance, to study the impact of a motorcycle’s lighting design on its perceived speed (Cavallo et al., 2013), as well as for the driver’s behavior when the driver was facing a new concept of dynamic signaling with LED road studs (Shahar and Brémond, 2014).