Chapter 21

The Dynamic Range of Driving Simulation

R. Brémond; N.-T. Dang; C. Villa    Paris Est University, Marne-la-Vallée, France

Abstract

Driving simulation could benefit from high dynamic range computer graphics images, as well as high dynamic range display devices. In this chapter, we review the potential benefits of this technology for various uses of driving simulators, and discuss the obstacles which make it difficult today to promote these technologies in driving simulation applications. Benefits are first expected in behavioral studies, where a close link between the simulated and the real visual experience improves the validity of the driving simulation experiments. This is the case, for instance, in night driving simulation, where very dark areas need to be displayed together with very bright ones.

Keywords

Driving simulation; Visual perception; Visual performance; High dynamic range display; Tone mapping; Computer graphics

21.1 Introduction

Driving simulation has become a quite common virtual reality tool, addressing various fields, such as video games, vehicle design, driving lessons, and behavioral studies (Fisher et al., 2011). As in other fields of virtual reality, driving simulator providers have developed a number of visual effects in order to render a variety of environmental situations. For instance, night driving, rain, fog, and glare can be simulated with the use of state-of-the-art techniques from the computer graphics (CG) literature to mimic either a physical phenomenon (eg, beam pattern in automotive lighting) or its effect on driver perception (eg, fog), and to minimize the perceptual gap between the displayed image and the computed image with tone mapping operators (TMOs) (Reinhard et al., 2005). According to Andersen (2011), some visual factors are critical for the external validity of driving simulations (ie, validity with respect to actual driving). He emphasizes luminance and contrast as the most important visual factors for the main critical driving situations: night driving, and driving in rain, fog, and complex urban environments.

In this context, high dynamic range (HDR) rendering and display may improve the realism of a number of visual cues relevant to driving. However, the implementation of CG algorithms is not straightforward, and a trade-off is needed among cost (financial and computational), performance, and market (or user) demand. Moreover, in many driving situations, the driver’s visibility is good, and looking at the relevant visual cues is easy, both in real and in virtual environments. In these situations, the benefit of HDR images is often seen as too low, considering the associated costs and constraints.

In this chapter, we discuss to what extent HDR rendering and display have improved, or can improve, driving simulators, and why HDR has not yet invaded the field. We will focus on driving simulation as a tool for behavioral studies; automotive design is considered in another chapter of this book.

Section 21.2 provides some evidence that HDR issues are not considered in most current driving simulator studies, even in vision-demanding situations. Then, we argue (Section 21.3) that some low-level visual factors, relevant to driving, are photometry dependent, and thus should benefit from a real-time HDR imaging and display system. After a short discussion in Section 21.4 of HDR rendering and display issues in CG, Section 21.5 reviews the few existing driving simulation studies where photometric issues and HDR components have been considered, and discusses what kind of realism (physical, perceptual, or functional; Ferwerda and Pattanaik, 1997) is now available, or soon will be, and for what purpose. This includes a small number of simulations with a true HDR display device. We conclude (Section 21.6) by discussing the reasons why HDR has not yet invaded the field of driving simulation: in our opinion, a better understanding of these obstacles helps to push the development of HDR rendering and display more efficiently. We also present some technical and experimental perspectives toward a more intensive development of HDR video for driving simulations, and vision science issues relevant to driving situations.

21.2 No Need for HDR Video in Driving Simulations?

One cannot say that driving simulator developers do not care about rendering issues. For instance, some level of realism may be important in video games (which ought to match the state of the art in the video game market), and improves the driver’s sense of immersion. But perceptual realism is needed only when visual performance or visual appearance issues arise, such as at night, in low-visibility situations, or in complex situations where the visual saliency of objects in the scene may attract the driver’s visual attention (Hughes and Cole, 1986), and needs to be carefully simulated if one wants the driver’s visual behavior in the simulator to be similar to a real driver’s behavior.

This is the main point: perceptual realism is not a key issue in mainstream applications. For instance, in a recent review of the driver’s visual control of the vehicle’s lateral position (Lappi, 2014), low-visibility conditions are not even mentioned. Only a few people around the world are concerned with visual performance or visual appearance in a car: first, because it helps in the vehicle design (see Chapter 19); second, because it is needed for behavioral realism in driving situations where visual perception is a complex task (Brémond et al., 2014).

Night driving is a good example of a driving situation where both high and low luminance levels are expected to occur, leading to a high luminance range. Automotive and road lighting sources may appear in the field of view (with luminance values up to 25 × 109 cd/m2 with xenon automotive lighting), while dark areas at night are in the mesopic range (below 3 cd/m2), and may be in the scotopic range in some areas (below 0.005 cd/m2), where the human visual system behavior and performance are different from what happens in the daylight photopic range (CIE, 1951, 2010).
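
The luminance ranges quoted above can be summarized in a short sketch. The code below is a minimal illustration in Python, using the threshold values cited in this paragraph (0.005 and 3 cd/m2) to classify an adaptation luminance into the scotopic, mesopic, or photopic regime; exact boundaries vary across CIE documents, and the function name is ours.

```python
def vision_regime(luminance_cd_m2: float) -> str:
    """Classify an adaptation luminance (cd/m^2) into the regimes
    cited in the text: scotopic below 0.005, mesopic below 3,
    photopic above."""
    if luminance_cd_m2 < 0.005:
        return "scotopic"      # rod vision only
    elif luminance_cd_m2 < 3.0:
        return "mesopic"       # mixed rod and cone vision
    else:
        return "photopic"      # cone vision (daylight)

print(vision_regime(0.001))   # dark rural roadside at night
print(vision_regime(1.0))     # road surface under street lighting
print(vision_regime(5000.0))  # daylight road scene
```

A standard LDR display spans, at best, the upper mesopic and lower photopic part of this scale, which is why nighttime scenes are the first candidates for HDR display.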

Evidence shows that visual performance is reduced when driving at night (Owens and Sivak, 1996; Wood et al., 2005), and some measures of that performance (labeled as focal vision) are more impaired than others (ambient vision) (Owens and Tyrrell, 1999). Moreover, the usual low dynamic range (LDR) display devices can display neither scotopic and low mesopic luminance values nor glaring lights. Thus, one would expect nighttime driving simulation to carefully consider illumination issues and glare, and to take advantage of HDR rendering and display. But this is not what happens. For instance, it is striking that in their review of perceptual issues in driving simulations, Kemeny and Panerai (2003) did not even mention driving at night as an issue.

Considering the number of driving simulation studies in the last 15 years, the number of published studies which include a night driving condition is unexpectedly small, and to the best of our knowledge, only around 30 studies have done so (see also Wood and Chaparro, 2011). Moreover, available nighttime simulations almost never provide any information about the technical settings or performance in nighttime conditions. For instance, the Material and methods section of articles might not even mention night driving (Panerai et al., 2001) or will offhandedly state that “the main part of the evaluation consisted of eight spells of driving, featuring different combinations of lighting condition (day/night)” (Alexander et al., 2002). Interestingly, most of these studies have been published in medical and biological science journals (Gillberg et al., 1996; Banks et al., 2004; Campagne et al., 2004; Contardi et al., 2004; Pizza et al., 2004; Åkerstedt et al., 2005; Silber et al., 2005; Schallhorn et al., 2009), and address hypovigilance and drug use issues. In a few articles, the lack of information about night driving settings is mitigated by a figure showing the visual appearance of the night driving simulation (Konstantopoulos et al., 2010; Schallhorn et al., 2009); this somewhat reinforces the impression that the experimenters had little control over illumination issues. This also appears in a series of driving simulation experiments at night, where the ambient luminance was controlled by neutral density filters (Alferdink, 2006; Bullough and Rea, 2000) or goggles (Brooks et al., 2005) in daylight simulated scenes (the “day for night” cinematographic technique), at the cost of unrealistic visual environments.

This lack of reported technical or photometric details also occurs with fog. For instance, in an important driving simulation study by Snowden (1998), showing that speed perception is altered in fog, little information was provided about the simulated fog. Indeed, in most articles reporting driving simulator studies in fog (Saffarian et al., 2012), no information is given about the fog density; no technical information is provided either, and one is left to guess that the simulator used OpenGL fog, that is, a contrast attenuation associated with the object’s distance. This means that a minimal model of fog is deemed acceptable, as we have seen for night simulation. It is possible with OpenGL to fit the physical law of contrast attenuation in fog (Koschmieder’s law; see Middleton, 1952); however, no information is given in this respect in the articles cited above. For instance, Broughton et al. (2007) compared drivers’ behavior in three visibility conditions: two fog densities and a no-fog condition. The fog conditions were described in terms of a “visibility limit,” which probably means that the authors used a nonphysical tuning of the OpenGL fog. Moreover, simulation of artificial lighting (automotive lighting, road lighting, etc.) with OpenGL is rather complex (Lecocq et al., 2001).
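
Koschmieder’s law can be made concrete with a short sketch. The Python fragment below (function names are ours, for illustration) computes the apparent contrast of a target seen through fog, and derives the extinction coefficient from the meteorological visibility, which is conventionally defined with a 5% contrast threshold; an OpenGL exponential fog factor of the form exp(-density x distance) can be fitted to the same law.

```python
import math

def apparent_contrast(c0: float, distance_m: float, extinction_k: float) -> float:
    """Koschmieder's law: C(d) = C0 * exp(-k * d), where k is the
    atmospheric extinction coefficient (1/m)."""
    return c0 * math.exp(-extinction_k * distance_m)

def extinction_from_visibility(v_met_m: float) -> float:
    """Meteorological visibility uses a 5% contrast threshold:
    V = -ln(0.05) / k, hence k is approximately 3 / V."""
    return -math.log(0.05) / v_met_m

# A target with intrinsic contrast 1.0, seen at 100 m in a fog
# with 200 m meteorological visibility:
k = extinction_from_visibility(200.0)
print(round(apparent_contrast(1.0, 100.0, k), 3))  # 0.224
```

Tuning a nonphysical “visibility limit” amounts to picking an arbitrary density in the exponential; the physical tuning above makes the link with the meteorological visibility explicit.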

21.3 Visual Factors Which Impact Driving Behavior

It is common knowledge that vision is the main sensory channel to collect information during driving (Allen et al., 1971; Sivak, 1996), and in a driving simulator, CG images are supposed to provide the driver’s visual information.

The link between the displayed images and driving behavior is not direct; it is mediated by notions from vision science, such as luminance and contrast (Andersen, 2011), the visibility level (Adrian, 1989; Brémond et al., 2010b), the adaptation luminance (Adrian, 1987a; Ferwerda et al., 1996), motion, distance, and speed perception (Snowden, 1998; Cavallo et al., 2002; Caro et al., 2009), scotopic and mesopic vision (Gegenfurtner et al., 1999), glare (Spencer et al., 1995), and the visual saliency of the relevant/irrelevant objects in a scene (Brémond et al., 2010a), such as road markings (Horberry et al., 2006) and advertising. These visual factors first impact the visual performance and then the driving behavior. These are photometry-based concepts, and were not controlled for in the above-cited driving simulator studies.

For all these issues, photometric control of the images is mandatory. In some cases, an HDR display may be needed, or alternatively, TMOs may help to minimize the gap between ideal and displayed visual information. For instance, road lighting design needs some criterion, and the visibility level has been proposed as the visibility for the driver of a reference target on the road (Adrian, 1987b). The American standard includes this concept in the small target visibility assessment of road lighting (IESNA, 2000), and the French standard also includes this visibility level index (AFE, 2002).
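
As an illustration of the visibility level concept, the sketch below reduces it to its defining ratio: the luminance difference between target and background, divided by the threshold difference for the same viewing conditions. In Adrian’s full model the threshold itself depends on background luminance, target size, observer age, and observation time; here it is simply a given input, and the numerical values are hypothetical.

```python
def visibility_level(l_target: float, l_background: float,
                     delta_l_threshold: float) -> float:
    """VL = actual luminance difference / threshold luminance
    difference (Adrian, 1989). VL > 1 means the target is above
    the detection threshold."""
    return abs(l_target - l_background) / delta_l_threshold

# A dark target at 0.9 cd/m^2 on a road surface at 1.5 cd/m^2,
# with a (hypothetical) threshold difference of 0.12 cd/m^2:
print(round(visibility_level(0.9, 1.5, 0.12), 1))  # 5.0
```

A TMO that preserves visibility should keep this ratio approximately constant between the computed HDR image and the displayed LDR image.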

To assess a TMO’s quality, one needs a quality criterion. This is not so easy: for instance, the correlation is weak among visual appearance, visual performance, and visual saliency in an image (Brémond et al., 2010b), so a choice is needed. In previous evaluations, visual appearance was considered first in most benchmarks (Eilertsen et al., 2013).

While visibility is considered by practitioners as a key perceptual issue in night driving, is it possible to preserve the visibility level of objects with a TMO? Some authors have proposed operators in order to control some kind of visual performance (Ward, 1994; Pattanaik et al., 1998), and Grave and Brémond (2008) proposed an operator focusing on preserving the visibility level. For dynamic situations, Petit and Brémond (2010) proposed a TMO preserving visibility, based on the work of Irawan et al. (2005) and Pattanaik et al. (2000). So, some efforts have been made to design TMOs shaped by visual performance constraints. On the other hand, the main effort in TMO design has been devoted so far to appearance criteria, such as color appearance and lightness perception, rather than visual performance criteria.

21.4 HDR Rendering

HDR issues have a specific flavor in CG. The split between image computation and image display is also relevant, but the problems are not the same. First, use of HDR virtual sensors for HDR image computation is now possible, because graphics processing units can manage float values. The main constraint is to run in real time, rather than sensor design or noise issues. Indeed, it is possible with pixel shaders to allocate some sensitivity to the virtual sensors (ie, compute the CG images in float units), even if the image computation does not simulate light propagation in the virtual scene in physical units.

The situation is quite different for HDR image display. HDR display devices are now available (Seetzen et al. (2004) demonstrated a prototype of an HDR display device at SIGGRAPH in 2004; see Part IV of this book for an update on HDR display). Commercial HDR display devices based on Seetzen’s ideas are now available (first from Brightside, now from Dolby). But this technology is still very expensive compared with conventional displays, and as a result, HDR display devices are very rare in human factors laboratories.

An important issue for driving simulators is that a large field of view is often needed, which is almost impossible to address with existing HDR display devices. For instance, most low-cost driving simulators use three displays, and in many driving situations, a field of view of 150° is needed (eg, if you have to cross an intersection). Virtual reality helmets can be viewed as an alternative as far as the field of view is concerned, but HDR displays are not available for these devices at the moment.

So the eight-bit frontier is still hard to cross for the driving simulator’s display, and the CG pipeline, which is expected to link the rendering part to the display part of the loop, tends to use TMOs in order to overcome the lack of HDR displays. As we have mentioned, in the case of visual performance preservation, this led to a number of TMOs (see Reinhard et al., 2005), followed by some concerns about the evaluation of these operators.
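
As an example of what such an operator does, the sketch below implements a minimal global TMO in the spirit of Reinhard et al. (2002): luminances are scaled by a key value over their log-average, then compressed from [0, infinity) to [0, 1). This is only a didactic reduction of the published operator, which also includes a local, dodging-and-burning variant.

```python
import math

def tone_map(luminances, key=0.18, eps=1e-6):
    """Global tone mapping sketch: scale by key / log-average
    luminance, then compress with L / (1 + L)."""
    log_avg = math.exp(sum(math.log(l + eps) for l in luminances)
                       / len(luminances))
    scaled = [key * l / log_avg for l in luminances]
    return [l / (1.0 + l) for l in scaled]

# HDR luminances spanning six orders of magnitude (cd/m^2) are
# brought into displayable [0, 1) range, order preserved:
hdr = [0.01, 1.0, 100.0, 10000.0]
print([round(v, 3) for v in tone_map(hdr)])
```

Such a global curve preserves luminance order but not necessarily the visibility level of small contrasts, which is precisely the concern raised above for driving applications.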

What would be a good design criterion for a TMO dedicated to driving simulation? First, real-time operation is mandatory. Second, temporal fluidity is needed, in order to avoid rapid changes and oscillations in the visual adaptation level, which may be due to a light source appearing in (or leaving) the field of view (Petit et al., 2013). This can be done by simulation of the time course of visual adaptation (Pattanaik et al., 2000). Third, as we have emphasized already, fidelity in terms of visual performance (rather than visual appearance) is relevant in most driving simulation applications, because the main goal of driving simulation experiments is the study of driver behavior, which in turn depends on the visual cues the driver finds in his or her environment.
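
The second criterion, temporal fluidity, can be illustrated with a first-order sketch: the adaptation luminance tracks the current scene luminance with an exponential time course, so that a headlamp entering the field of view changes the adaptation level smoothly rather than instantaneously. The time constant below is illustrative, not taken from the cited models, which use separate dynamics for rods and cones.

```python
import math

def update_adaptation(l_adapt: float, l_scene: float,
                      dt_s: float, tau_s: float = 0.5) -> float:
    """Move the adaptation luminance toward the current scene
    luminance with an exponential time course of constant tau_s."""
    alpha = 1.0 - math.exp(-dt_s / tau_s)
    return l_adapt + alpha * (l_scene - l_adapt)

# A bright headlamp enters the field of view: over 30 frames at
# 60 Hz the adaptation level rises gradually instead of jumping.
l_adapt = 1.0
for _ in range(30):
    l_adapt = update_adaptation(l_adapt, 1000.0, dt_s=1.0 / 60.0)
print(round(l_adapt, 1))
```

The per-frame TMO then uses this smoothed adaptation level, rather than the instantaneous scene luminance, to set its compression curve.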

21.5 Photometric Control of CG Images in Driving Simulations

The reader may so far have found this chapter a bit pessimistic about the success of an HDR approach in driving simulation. The picture should be qualified, however, and it is worth mentioning some articles where the photometric tuning of night driving simulations is taken seriously. The first one, to our knowledge, is from Mortimer (1963), with a very special driving simulator, however: in 1963 there were no personal computers available, and the simulator was purely electromechanical. In the computer era, Featherstone et al. (1999) conducted a field study, collecting reference data about car headlights at night, and tuned the rendering of the simulator in terms of contrast, color, and luminance.1 Kemeny and his team (Dubrovin et al., 2000; Panerai et al., 2006), at the Renault Virtual Reality Centre, conducted several studies focusing on the simulation of automotive lighting, based on a simulation of light propagation, projecting lightmaps from the headlamps onto the road surface (see also Weber and Plattfaut, 2001). Horberry et al. (2006) and Brémond et al. (2013) attempted to control the luminance map of the rendered images of night driving, which makes sense because their articles address road marking and road hazard visibility, respectively.

At night, glare is a key issue, as the usual displays cannot produce a glare sensation. To overcome this problem, Spencer et al. (1995) proposed a biologically inspired algorithm which simulates the effects of glare on vision (bloom, flare lines, lenticular halo) in CG images. Some technical solutions have also been proposed to simulate fog with a control on the luminance map, with a display calibration and a physical OpenGL tuning (Cavallo et al., 2002; Espié et al., 2002), and the simulation of halos around light sources (Lecocq et al., 2002; Dumont et al., 2004). The main issue with fog is contrast attenuation, rather than luminance values.

In addition, many driving simulator developments have not been published, because they are conducted by industrial firms which do not want to make their internal developments public. They do, however, openly discuss HDR issues, and HDR rendering is mentioned in the technical documents of some driving simulation software. For example, SCANeR HEADLIGHT Interactive Simulation (OKtal, 2015) supports HDR rendering for realistic night driving experiments. Note that the OKtal software is widely used by automobile companies in France (eg, Renault, Valeo, PSA). In Germany, VIRES also supports HDR rendering (VIRES Virtual Test Drive software) (Vires, 2015). HDR rendering is also mentioned among the features of OpenDS (Math et al., 2013; OpenDS, 2015), a recently developed open-source driving simulator, which originated from the European Union Seventh Framework Programme project GetHomeSafe (GetHomeSafe, 2015). Some details of these technical developments are sometimes published, as in the case of Pro-SiVICTM, software developed by CIVITEC, where HDR textures are used in the sensor simulation for prototyping of advanced driver assistance systems (Gruyer et al., 2012). Optis is also active with regard to HDR issues, with SPEOS and Virtual Reality Lab (Optis, 2015).

The direct use of an HDR display device in driving simulations is still in its infancy. Shahar and Brémond (2014) were the first to use a true HDR display device (47-inch Dolby/SIM2 Solar), under photometric control, to conduct a driving simulation where night driving behaviors with and without LED road studs were compared. The automotive lighting, road lighting, and LED road stud beam pattern were tuned to realistic values, with use of direct photometric measurements of the road surface, the road markings, and the LED themselves on the screen.

The main issue was to run in real time, with three screens (1920 × 1080 pixels). The geometric configuration of the simulator was chosen in such a way that when the road studs were switched on, they were very likely to appear in the central screen; it was thus decided to run the simulation with one HDR display device in front of the driver, and an LDR display device on each side. The main purpose of these lateral screens was to give the driver some sense of his or her own speed.

Several technical challenges needed to be addressed, among them the number of light sources and the lighting simulation itself (Dang et al., 2014). The IFSTTAR visual loop, developed under Open Scene Graph, supports two HDR renderings: one originated from a TMO proposed by Petit and Brémond (2010) and the other was adapted from a TMO proposed by Reinhard et al. (2002). Since thousands of road studs were required in this study, a particular way of controlling the light sources was adopted to guarantee a high frame rate, a key issue in real-time simulations. Each light source was simulated on the basis of the photometric characteristics of real LED road studs, as measured in the IFSTTAR photometry laboratory; their intensity was made dynamically controllable during the simulation. The LED road studs were divided into groups, each of them being controlled by a virtual group manager. This organization was particularly useful in this study, because the road studs were turned on/off automatically by these group managers depending on the vehicle’s position. Another challenge was the simulation of realistic night driving conditions with high luminance range, with bright areas due to the studs, the road lighting, and the headlamps of incoming vehicles, while very dark areas were also needed in the nighttime countryside landscape.
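
The group-manager organization described above can be sketched as follows; class names, distances, and the look-ahead rule are hypothetical, for illustration only: stud groups are switched on or off from the vehicle's position, so that only a bounded number of light sources is active at any time.

```python
class StudGroup:
    """A group of LED road studs covering a stretch of road,
    switched as a unit by its group manager."""
    def __init__(self, start_m: float, end_m: float):
        self.start_m, self.end_m = start_m, end_m
        self.on = False

    def update(self, vehicle_pos_m: float, look_ahead_m: float = 200.0):
        """Light the group when the vehicle is inside it or within
        look_ahead_m before it."""
        self.on = (self.start_m - look_ahead_m
                   <= vehicle_pos_m <= self.end_m)

# Ten groups of 100 m each; the vehicle is at 250 m:
groups = [StudGroup(i * 100.0, (i + 1) * 100.0) for i in range(10)]
for g in groups:
    g.update(vehicle_pos_m=250.0)
print([g.on for g in groups])  # only groups near or ahead of the vehicle lit
```

Capping the number of simultaneously lit sources in this way is what keeps the frame rate high despite the thousands of studs present in the database.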

This experiment used an HDR display device, but with eight-bit (LDR) input images from the driving simulation visual loop. Thus, the benefit of the HDR display was the luminance dynamics, not the luminance sensitivity. The next challenge will be to develop a full HDR driving simulator visual loop, and feed an HDR display device with HDR videos.

21.6 Conclusion

This rapid overview of the potential benefits of HDR rendering and display for driving simulations leads to a balanced conclusion. On the one hand, dynamic range is marginally addressed in current driving simulation studies. The main reason is that the expected benefits of HDR are associated with low-level vision issues. Although they are known to have an impact on the driver’s behavior, this impact is limited to some specific situations, such as night and fog driving, or drivers with poor vision.

Whereas a smart tuning allows some qualitative control of the visual appearance in virtual environments, in quantitative behavioral studies there is a need for some photometric control of the displayed images. This includes physical, optical, and photometric data on light sources, participating media, and surfaces.

Of course, another reason for the limited interest in HDR in the driving simulation community is the cost. A photometric description is important for HDR imaging, but comes at some cost: photometric description of the virtual environments, the need for photometric data for surfaces and light sources, real-time issues, etc. Further, HDR display devices are still very expensive, difficult to use with current video output formats, and seldom required by peer-reviewed journals, even for nighttime driving simulations. Also, most of the tone mapping literature (both on TMOs and on TMO evaluation) focuses on appearance criteria (eg, subjective fidelity), not on performance criteria (visibility, reaction time, etc.), which would be needed for driving simulations in low-visibility conditions.

But there is another side to this story, and we have tried to show that in some important cases, control of the image luminance map allows us to expect much better fidelity between virtual reality and actual driving in terms of “Where do we look?” and “What do we see?” This, in turn, is known to impact driving behavior. Thus, HDR imaging, rendering, and display open the way to new driving simulation applications, in situations where the external validity of previous studies was poor: photometric parameters were known to impact behavior, but they were not controlled.

This is why we plan to conduct experiments soon to assess the influence of an HDR display device on psychovisual and driving tasks. To that end, the impact on specific perceptual mechanisms underlying driving behavior on a driving simulator will be compared, first for an LDR display (ie, HDR imaging followed by a TMO) and then for an HDR display.

For instance, the contribution of an HDR display device to speed perception will be assessed in a driving context. We can do this by estimating the “time to collision,” using moving stimuli on both kinds of display devices. Perceived speed is expected to depend on luminance contrast (Stone and Thompson, 1992); more specifically, at low contrast or at night, the perceived speed of an object is underestimated (Blakemore and Snowden, 1999; Gegenfurtner et al., 1999). It can thus be assumed that this bias is closer to the real one with an HDR display device than with an LDR display device. Therefore, use of an HDR display device may allow the investigation of reduced-visibility situations such as nighttime driving or glare. More broadly, a benefit is expected when driving simulation studies are conducted on an HDR display device when speed perception is a key factor (either the driver’s own speed or the speed of other vehicles), especially in reduced-visibility situations.
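
The time-to-collision measure mentioned above, in its simplest first-order form, is the current distance to the obstacle divided by the closing speed; the sketch below uses this geometric definition (names are ours, for illustration).

```python
def time_to_collision(distance_m: float, closing_speed_ms: float) -> float:
    """First-order TTC estimate: distance / closing speed (seconds)."""
    if closing_speed_ms <= 0:
        return float("inf")  # the gap is not closing
    return distance_m / closing_speed_ms

# An obstacle 50 m ahead, approached at 20 m/s:
print(time_to_collision(50.0, 20.0))  # 2.5 (seconds)
```

In the planned experiment, participants’ TTC estimates on HDR and LDR displays would be compared against this geometric value; a contrast-dependent underestimation of speed would be expected to translate into an overestimation of the time to collision.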

New lighting systems (signaling, lighting, headlamp beams, stop lights) may also benefit from HDR driving simulations. This was done, for instance, to study the impact of a motorcycle’s lighting design on its perceived speed (Cavallo et al., 2013), as well as for the driver’s behavior when the driver was facing a new concept of dynamic signaling with LED road studs (Shahar and Brémond, 2014).

References

Adrian W. Adaptation luminance when approaching a tunnel in daytime. Light. Res. Technol. 1987a;19(3):73–79.

Adrian W. Visibility level under night-time driving conditions. J. Illum. Eng. Soc. 1987b;16(2):3–12.

Adrian W. Visibility of targets: model for calculation. Light. Res. Technol. 1989;21(4):181–188.

AFE. Recommandations relatives à l’éclairage des voies publiques. Paris: Association Française de l’Eclairage; 2002.

Åkerstedt T., Peters B., Anund A., Kecklund G. Impaired alertness and performance driving home from the night shift: a driving simulator study. J. Sleep Res. 2005;14(1):17–20.

Alexander J., Barham P., Black I. Factors influencing the probability of an incident at a junction: results from an interactive driving simulator. Acc. Anal. Prevent. 2002;34:779–792.

Alferdink J.W. Target detection and driving behavior measurements in a driving simulator at mesopic light levels. Ophthalmic Physiol. Opt. 2006;26:264–280.

Allen T.M., Lunenfeld H., Alexander G.J. Driver information needs. Highway Res. Board. 1971;36:102–115.

Andersen G.J. Sensory and perceptual factors in the design of driving simulation displays. In: Fisher D.L., Rizzo M., Caird J.K., Lee J.D., eds. Driving Simulation for Engineering, Medicine and Psychology. Boca Raton, FL: CRC Press; 2011:1–11.

Banks S., Catcheside P., Lack L., Grunstein R., McEvoy D. Low levels of alcohol impair driving simulator performance and reduce perception of crash risk in partially sleep deprived subjects. Sleep. 2004;6(27):1064–1067.

Blakemore M.R., Snowden R.J. The effect of contrast upon perceived speed: a general phenomenon? Perception. 1999;28(1):33–48.

Brémond R., Petit J., Tarel J.P. Saliency maps for high dynamic range images. In: Media Retargetting Workshop in Proc. ECCV 2010, Crete, Greece. 2010a.

Brémond R., Tarel J.P., Dumont E., Hautière N. Vision models for image quality assessment: one is not enough. J. Electron. Imaging. 2010b;19(4):04304.

Brémond R., Bodard V., Dumont E., Nouailles-Mayeur A. Target visibility level and detection distance on a driving simulator. Light. Res. Technol. 2013;45:76–89.

Brémond R., Auberlet J.M., Cavallo V., Désiré L., Faure V., Lemonnier S., Lobjois R., Tarel J.P. Where we look when we drive: a multidisciplinary approach. In: Proc. Transportation Research Arena, Paris, France. 2014.

Brooks J.O., Tyrrell R.A., Frack T.A. The effect of severe visual challenges on steering performance in visually healthy young drivers. Optom. Vis. Sci. 2005;82(8):689–697.

Broughton K.L., Switzer F., Scott D. Car following decisions under three visibility conditions and two speeds tested with a driving simulator. Acc. Anal. Prevent. 2007;39(1):106–116.

Bullough J.D., Rea M.R. Simulated driving performance and peripheral detection at mesopic and low photopic light levels. Light. Res. Technol. 2000;32(4):194–198.

Campagne A., Pebayle T., Muzet A. Correlation between driving errors and vigilance level: influence of the driver’s age. Physiol. Behav. 2004;80:515–524.

Caro S., Cavallo V., Marendaz C., Boer E., Vienne F. Can headway reduction in fog be explained by impaired perception of relative motion? Hum. Factors. 2009;51(3):378–392.

Cavallo V., Dumont E., Galleé G. Experimental validation of extended fog simulation techniques. In: Proc. Driving Simulation Conference, Paris, France. 2002:329–340.

Cavallo V., Ranchet M., Pinto M., Espié S., Vienne F., Dang N.T. Improving car drivers’ perception of motorcyclists through innovative headlight configurations. In: Proc. 10th International Symposium on Automotive Lighting (ISAL), TU Darmstadt, Germany. 2013.

CIE. The standard scotopic sensitivity function. In: Proc. of the Commission Internationale de l’Eclairage, Paris. 37. 1951;1 (4).

CIE. Recommended system for mesopic photometry based on visual performance. Technical report. Commission Internationale de l’Eclairage; 2010.

Contardi S., Pizza F., Sancisi E., Mondini S., Cirignotta F. Reliability of a driving simulation task for evaluation of sleepiness. Brain Res. Bull. 2004;63(5):427–431.

Dang N.T., Vienne F., Brémond R. HDR simulation of intelligent LED road studs. In: Proc. Driving Simulation Conference, Paris (France). 2014:41.1–41.2.

Dubrovin A., Lelevé J., Prevost A., Canry M., Cherfan S., Lecocq P., Kelada J.M., Kemeny A. Application of real-time lighting simulation for intelligent front-lighting studies. In: Proc. Driving Simulation Conference, Paris (France). 2000.

Dumont E., Paulmier G., Lecocq P., Kemeny A. Computational and experimental assessment of real-time front-lighting simulation in night-time fog. In: Proc. Driving Simulation Conference, Paris (France). 2004:197–208.

Eilertsen G., Wanat R., Mantiuk R.K., Unger J. Evaluation of tone mapping operators for HDR-video. Comput. Graph. Forum. 2013;32(7):275–284.

Espié S., Moessinger M., Vienne F., Pébayle T., Follin E., Gallée G. Real-time rendering of traffic reduced visibility situations: development and qualitative evaluation. In: Proc. Driving Simulation Conference, Paris (France). 2002:89–98.

Featherstone K., Bloomfield J., Lang A., Miller-Meeks M., Woodworth G., Steinert R. Driving simulation study: bilateral array multifocal versus bilateral AMO monofocal intraocular lenses. J. Cataract Refract. Surg. 1999;25(9):1254–1262.

Ferwerda J.A., Pattanaik S.N. A model of contrast masking for computer graphics. In: Proceedings of ACM SIGGRAPH. 1997:143–152.

Ferwerda J.A., Pattanaik S.N., Shirley P., Greenberg D.P. A model of visual adaptation for realistic image synthesis. In: Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’96. New York, NY, USA: ACM; 1996:249–258.

Fisher D.L., Rizzo M., Caird J.K., Lee J.D. Driving Simulation for Engineering, Medicine and Psychology. Boca Raton, FL: CRC Press; 2011.

Gegenfurtner K.R., Mayser H., Sharpe L.T. Seeing movement in the dark. Nature. 1999;398(6727):475–476.

GetHomeSafe. 2015. http://www.gethomesafe-fp7.eu.

Gillberg M., Kecklund G., Åkerstedt T. Sleepiness and performance of professional drivers in a truck simulator — comparisons between day and night driving. J. Sleep Res. 1996;5(1):12–15.

Grave J., Brémond R. A tone mapping algorithm for road visibility experiments. ACM Trans. Appl. Percept. 2008;5(2):Article 12.

Gruyer D., Grapinet M., De Souza P. Modeling and validation of a new generic virtual optical sensor for ADAS prototyping. In: Intelligent Vehicles Symposium. New York: IEEE; 2012:969–974.

Horberry T., Anderson J., Regan M. The possible safety benefits of enhanced road markings: a driving simulator evaluation. Transp. Res. Part F Traffic Psychol. Behav. 2006;9(1):77–87.

Hughes P.K., Cole B.L. What attracts attention when driving? Ergonomics. 1986;29(3):377–391.

IESNA. American National Standard Practice for Roadway Lighting. RP-8-00. New York: IESNA; 2000.

Irawan P., Ferwerda J.A., Marschner S.R. Perceptually based tone mapping of high dynamic range image streams. In: Proceedings of Eurographics Symposium on Rendering. 2005:231–242.

Kemeny A., Panerai F. Evaluating perception in driving simulation experiments. Trends Cogn. Sci. 2003;7(1):31–37.

Konstantopoulos P., Chapman P., Crundall D. Driver’s visual attention as a function of driving experience and visibility. Using a driving simulator to explore drivers’ eye movements in day, night and rain driving. Accid. Anal. Prev. 2010;42:827–834.

Lappi O. Future path and tangent point models in the visual control of locomotion in curve driving. J. Vis. 2014;14(12):1–22.

Lecocq P., Michelin S., Arquès D., Kemeny A. Simulation temps-réel d’éclairage en présence de brouillard. Revue de CFAO et d’Informatique Graphique. 2001;16(1):51–66.

Lecocq P., Michelin S., Kemeny A., Arquès D. Lighting simulation with the presence of fog: a real time rendering solution for driving simulators. In: Proc. Driving Simulation Conference. 2002:65–74.

Math R., Mahr A., Moniri M.M., Müller C. OpenDS: a new open-source driving simulator for research. GMM-Fachbericht AmE; 2013. Technical report.

Middleton W. Vision Through the Atmosphere. Toronto: University of Toronto Press; 1952.

Mortimer R. Effect of low blood-alcohol concentrations in simulated day and night driving. Percept. Motor Skills. 1963;17:399–408.

OKtal. 2015. http://www.scanersimulation.com/software/software-research/headlight-simulation.html.

OpenDS. 2015. http://opends.de.

Optis. 2015. http://portal.optis-world.com.

Owens D.A., Sivak M. Differentiation of visibility and alcohol as contributors to twilight road fatalities. Hum. Factors. 1996;38(4):680–689.

Owens D.A., Tyrrell R.A. Effects of luminance, blur and age on night-time visual guidance: a test of the selective degradation hypothesis. J. Exp. Psychol. Appl. 1999;5(2):115–128.

Panerai F., Droulez J., Kelada J.M., Kemeny A., Balligand E., Favre B. Speed and safety distance control in truck driving: comparison of simulation and real-world environment. In: Proc. Driving Simulation Conference, Sophia Antipolis (France). 2001.

Panerai F., Toffin D., Paillé D., Kemeny A., Fadel K. Eye movement patterns in nighttime driving simulation: conventional and swivelling headlights. In: Proc. VISION Conference. 2006.

Pattanaik S.N., Ferwerda J.A., Fairchild M.D., Greenberg D.P. A multiscale model of adaptation and spatial vision for realistic image display. In: Proceedings of SIGGRAPH. New York, NY, USA: ACM; 1998:287–298.

Pattanaik S.N., Tumblin J., Yee H., Greenberg D.P. Time-dependent visual adaptation for fast realistic image display. In: Proceedings of SIGGRAPH. New York, NY, USA: ACM Press; 2000:47–54.

Petit J., Brémond R. A high dynamic range rendering pipeline for interactive applications: in search for perceptual realism. Vis. Comput. 2010;28(6–8):533–542.

Petit J., Brémond R., Tom A. Evaluation of tone mapping operators in night-time virtual worlds. Virtual Reality. 2013;17:253–262.

Pizza F., Contardi S., Mostacci B., Mondini S., Cirignotta F. A driving simulation task: correlations with multiple sleep latency test. Brain Res. Bull. 2004;63:423–426.

Reinhard E., Stark M., Shirley P., Ferwerda J. Photographic tone reproduction for digital images. In: Proceedings of SIGGRAPH. 2002.

Reinhard E., Ward G., Pattanaik S.N., Debevec P. High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting. San Francisco, CA: Morgan Kaufmann; 2005.

Saffarian M., Happee R., de Winter J. Why do drivers maintain short headways in fog? A driving-simulator study evaluating feeling of risk and lateral control during automated and manual car following. Ergonomics. 2012;55(9):971–985.

Schallhorn S., Tanzer D., Kaupp S., Brown M., Malady S. Comparison of night driving performance after wavefront-guided and conventional LASIK for moderate myopia. Ophthalmology. 2009;116(4):702–709.

Seetzen H., Heidrich W., Stuerzlinger W., Ward G., Whitehead L., Trentacoste M., Ghosh A., Vorozcovs A. High dynamic range display systems. ACM Trans. Graph. 2004;23(3):760–768.

Shahar A., Brémond R. Toward smart active road studs for lane delineation. In: Proc. Transportation Research Arena, Paris la Défense (France). 2014.

Silber B., Papafotiou K., Croft R., Ogden E., Swann P., Stough C. The effects of dexamphetamine on simulated driving performance. Psychopharmacology. 2005;179:536–543.

Sivak M. The information that drivers use: is it indeed 90% visual? Perception. 1996;25:1081–1089.

Snowden R.J. Speed perception fogs up as visibility drops out. Nature. 1998;392:450.

Spencer G., Shirley P., Zimmerman K., Greenberg D.P. Physically based glare effect for digital images. In: Proceedings of ACM SIGGRAPH. 1995:325–334.

Stone L.S., Thompson P. Human speed perception is contrast dependent. Vis. Res. 1992;32(8):1535–1549.

Vires. 2015. http://www.vires.com.

Ward G. A contrast-based scalefactor for luminance display. In: Heckbert P.S., ed. Graphics Gems IV. San Diego, CA: Academic Press Professional; 1994:415–421.

Weber T., Plattfaut C. Virtual night drive. In: Proceedings of the 17th International Technical Conference on the Enhanced Safety of Vehicles, Amsterdam, The Netherlands. 2001:1–5.

Wood J., Chaparro A. Night-driving: how low illumination affects driving and the challenge of simulation. In: Fisher D.L., Rizzo M., Caird J.K., Lee J.D., eds. Handbook of Driving Simulation for Engineering, Medicine and Psychology. Boca Raton, FL: CRC Press; 2011:1–12.

Wood J.M., Tyrrell R.A., Carberry T.P. Limitations in drivers’ ability to recognize pedestrians at night. Hum. Factors. 2005;47(3):644–654.


1 Considering the display devices available at that time, it is unlikely that they could tune the luminance range to glaring situations, as suggested in the article.
