Chapter 6

Practical Backwards Compatible High Dynamic Range Compression

K. Debattista; T. Bashford Rogers; A. Chalmers    University of Warwick, Coventry, United Kingdom

Abstract

High dynamic range (HDR) imaging permits the capture, storage, display, and handling of real world lighting, removing the limitations of traditional imaging that may lead to over- and underexposed pixels in images. In order to achieve this, HDR imagery requires the storage and manipulation of floating point data, which consumes more space than the single byte per color channel employed by traditional imaging approaches. HDR video and still image compression methods do exist and have helped reduce storage requirements considerably. One particular set of such methods, the backwards compatible methods, is characterized by storing two streams: a base or backwards compatible stream that can be decoded by legacy viewers, and a secondary stream which enhances the base stream to restore the full HDR content. These methods can be quite complex and rely on a variety of tone mapping functions which transform HDR data to the dynamic range used by traditional viewers. The ability to choose tone mappers gives such methods flexibility and user control. However, there is a significant number of tone mappers, and there is no consensus on which ones are better or which settings to use for each one. This work presents a new backwards compatible HDR compression method which is practical because it produces good results compared to state-of-the-art methods, it is efficient, and it is straightforward to implement.

Keywords

High dynamic range; Video compression; Image compression

Acknowledgments

Debattista and Chalmers are partially funded by Royal Society Industrial Fellowships.

1 Introduction

High dynamic range (HDR) imagery offers an alternative to the traditional method of handling image and video content, frequently termed low (or standard) dynamic range (LDR). While LDR suffers from limitations in the capture, storage, and display of content due to pixels that may be over- or underexposed, HDR is capable of capturing, representing, and manipulating the entire range of real world luminance. The advantages of real world luminance are many; primarily, it affords further realism by allowing manipulation of real world data for applications such as relighting of virtual objects, comparisons to the real world, advertising, etc. There is a price to pay for the advantages garnered by HDR. HDR content is represented by floating point data, unlike the single byte per color channel used for LDR content. This means that, at an HD resolution of 1920 × 1080, an uncompressed HDR image requires 24 MB of data, and 1 min of video at 30 frames per second would require around 42 GB. This is clearly impractical, and a number of compression methods for images and video have been adopted to improve storage requirements for HDR content.

There are two broad types of HDR compression methods: those that are backwards compatible, typically consisting of two streams where one of the streams can be viewed on legacy LDR decoders, and those that are dedicated to HDR viewers only. The advantage of backwards compatible methods is that they provide an initial uptake for HDR because they do not require specialized displays or software to view the HDR content. On the other hand, dedicated methods can, potentially, make better use of the available bit rate. The backwards compatible methods require the use of a tone mapping operator [1]: a function that converts the luminance of the HDR content into LDR while attempting to preserve a perceptual match with the original representation. There is a wide variety of tone mappers, each with its own parameters, and a certain amount of expertise may be required to get the best out of them. After tone mapping, the backwards compatible methods generate some form of residual or ratio image that is stored as a secondary stream to make up for the HDR data lost in the tone mapped content. This work presents an alternative to the traditional method by simplifying the tone mapping and residual aspects. A single solution is provided, based on extracting a luminance channel and a base channel and storing these two streams. The proposed method, termed practical backwards compatible (pBC), proved successful in the experiments conducted, is computationally fast, and does not require any detailed knowledge of tone mappers and parameter settings.

The following section presents the related work and Section 3 describes pBC. Section 4 presents results and analysis of this method against other backwards compatible methods, and Section 5 presents conclusions.

2 Related Work

The dynamic range of real-world lighting situations can vary approximately from 10⁻⁴ to 10⁸ cd/m² [2]. Capturing such a large amount of data requires special data formats such as Radiance RGBE [3], OpenEXR RGBA [4], and LogLuv [5], supporting up to 96 bits/pixel (bpp), instead of the traditional 24 bpp integer formats, such as JPEG. Although the mentioned formats are efficient ways to store real-world imaging data, they have a few drawbacks: first, most of them are floating point data formats, and second, the higher bpp cannot be handled by existing video encoders. Therefore, for all practical purposes, uncompressed HDR video data stored in any of these formats cannot be used directly. In order to mitigate this issue, a number of HDR still and video compression methods have been proposed. This section provides a brief background on the different approaches to HDR video compression and briefly discusses some of the compression methods.

2.1 HDR Video Compression Methods

In this section, the focus is on HDR video compression methods, apart from JPEG HDR, which is a still image method extendable to video. Both the video and still image versions are used in the results section. Banterle et al. [1] provide a comprehensive overview of HDR still image, video, and texture compression. As mentioned earlier, HDR video compression methods can be broadly classified into two groups: higher bit-depth single stream compression methods and backwards compatible double stream methods.

Compression methods following the first approach produce a single raw video stream which allocates a higher bit-depth, that is, a bit-depth >8, typically 10–14, for the luma channel, and 8 bits for the two chroma channels. Using a range of reversible transfer functions to convert 16-bit floating point luminance to n-bit luma where n ∈ [10, 14], this approach takes advantage of encoders that are able to support up to 14 bits/pixel/channel. Compression methods following this approach include perception motivated HDR video encoding [6], which converts real-world luminance values to 11-bit luma using perceptual encoding; the temporally coherent luminance-to-luma conversion for HDR video proposed by Garbas and Thoma [7], which uses a temporally coherent extension of adaptive LogLuv encoding [8] to convert real-world luminance to 12-bit luma; and the HVS-based HDR video compression proposed by Zhang et al. [9], which converts real-world luminance values to 14-bit luma values using a nonlinear approach similar to Lloyd Max quantization [10]. The computation of chroma values in all the abovementioned approaches is similar to the chroma encoding used in LogLuv [5].
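To illustrate the first group, a generic logarithmic luminance-to-luma transfer can be sketched as follows. This is a Python/NumPy illustration of the general idea, not any specific published curve; the bit depth and luminance bounds are parameters the caller must supply:

```python
import numpy as np

def luminance_to_luma(lum, n_bits, l_min, l_max):
    # Generic logarithmic transfer: map luminance in [l_min, l_max] cd/m^2
    # onto the n-bit integer luma range [0, 2^n - 1]
    levels = (1 << n_bits) - 1
    t = (np.log(lum) - np.log(l_min)) / (np.log(l_max) - np.log(l_min))
    return np.round(levels * np.clip(t, 0.0, 1.0)).astype(np.uint16)

def luma_to_luminance(luma, n_bits, l_min, l_max):
    # Inverse transfer: recover real-world luminance from n-bit luma
    levels = (1 << n_bits) - 1
    t = luma.astype(np.float64) / levels
    return np.exp(np.log(l_min) + t * (np.log(l_max) - np.log(l_min)))
```

With 12 bits spread logarithmically over twelve orders of magnitude, the round trip is accurate to well under 1% relative error, which is why these single stream methods can get away with integer luma.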

These approaches have a few drawbacks when used in conjunction with state-of-the-art encoders: the compressed video files cannot be played back using legacy video players, due to the lack of higher bit-depth support; the chroma information, although limited to 8 bits, needs to be passed to the encoder as 11, 12, and 14 bits, respectively, because encoders expect similar bit-depths for both luma and chroma; and most existing hardware-based encoders and decoders are limited to 8 bits. Therefore, wide-scale adoption of these compression methods in practice seems unlikely in the near future.

The second group consists of compression methods which split the input HDR data into primary and secondary streams, typically allocating 8 bits/pixel/channel to each of the streams. The primary stream, in a backwards-compatible double stream method is typically a tone-mapped LDR stream, which enables it to be played back using any legacy video player. The secondary stream consists of additional information that can be used to reconstruct the HDR frames. Compression methods following this approach include JPEG HDR [11], HDR MPEG [12], and Rate Distortion Optimized HDR video encoding [13].

JPEG HDR extends the widely used 8 bit/pixel/channel JPEG image format, using subsampled additional information with precorrection and postcorrection techniques in order to reconstruct HDR frames on the decoder. Originally designed for still HDR images, a fairly straightforward modification leads to an effective backwards-compatible HDR video compression method. The primary stream consists of LDR frames, created using any given tone mapper. The secondary stream consists of frames created by taking the ratio between the luminance of the input HDR frame and the luminance of the corresponding LDR frame, such that RI(x, y) = L_HDR(x, y)/L_LDR(x, y). The luminance of both the HDR and LDR frames is computed using the BT.709 primaries. The ratio frame is subsequently log encoded, discretized (RI(x, y) ∈ [0, 255]), and subsampled in order to minimize storage requirements.
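The ratio-image step described above can be sketched in Python with NumPy. This is an illustration, not the authors' implementation; the epsilon guard and per-frame scaling of the log range before 8-bit discretization are assumptions:

```python
import numpy as np

def luminance(rgb):
    # BT.709 luminance from linear RGB (last axis holds the channels)
    return 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]

def ratio_image(hdr, ldr, eps=1e-6):
    # Per-pixel ratio of HDR to tone-mapped luminance: RI = L_HDR / L_LDR
    ri = luminance(hdr) / np.maximum(luminance(ldr), eps)
    # Log encode, then discretize onto the 8-bit range [0, 255]
    log_ri = np.log(np.maximum(ri, eps))
    lo, hi = float(log_ri.min()), float(log_ri.max())
    scaled = (log_ri - lo) / max(hi - lo, eps)
    return np.round(255.0 * scaled).astype(np.uint8), (lo, hi)
```

The (lo, hi) pair would be kept as per-frame metadata so that the decoder can invert the scaling before exponentiating and multiplying back.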

HDR MPEG, much like JPEG HDR, splits the input HDR frame into two streams. The primary LDR stream is created using a given tone mapping operator and passed through encoder and decoder blocks. Subsequently, both the input HDR and decoded LDR frames are passed through color transformation functions in order to transform them into a similar color space. This transformation helps in the creation of a monotonically increasing reconstruction function (RF), which is subsequently used to predict an HDR frame from its LDR counterpart. Finally, the residual luma differences between the input HDR frame and the predicted HDR frame are quantized using a quantization function and stored as the secondary stream. The quantization function (QF) ensures the residual image is always in unsigned integer range. The algorithm also outputs an auxiliary stream, which stores the RF and QF values per frame to be subsequently used during reconstruction.

The rate distortion optimized HDR video compression method follows an approach very similar to JPEG HDR. The primary stream is created using a temporally coherent TMO proposed by Lee and Kim [14], essentially a temporally coherent extension of the Fattal TMO [15], which also adds the option of managing color saturation of the LDR frames; although, as with the other methods, other tone mapping functions could be used. A log-encoded ratio frame is computed such that RI(x, y) = log(L_HDR(x, y)/L_LDR(x, y)). Subsequently, the minima and maxima of each ratio frame are stored in an auxiliary data structure before the frames are scaled, such that RI(x, y) ∈ [0, 1], and filtered using a cross-bilateral filter [16] in order to reduce noise. Finally, the scaled and filtered ratio frames are discretized and passed to the encoder, and the auxiliary data structure is also stored as a separate file. On the decompression side, the primary frames and secondary frames are read back, followed by a saturation correction of the primary frames and inverse scaling of the secondary frames using information stored in the auxiliary data structure. Finally, the primary and secondary frames are multiplied to reconstruct the output HDR frames.

The primary advantage of the backwards compatible approaches is that the video files can be encoded and decoded using any legacy software/hardware based encoders and decoders, which means hardware support is easy to find and uptake can be quicker because part of the content can be successfully played on legacy LDR software and displays.

3 Practical Backwards Compatible Compression

The backwards compatible methods outlined above share the advantage of flexibility, particularly as they enable tone mapper selection and parametrization. However, the choice of which tone mappers are best has been shown to be content dependent [17], and a number of studies do not show consensus for a best method [1]. Recent studies have shown that tone mapping may not even be the best method of displaying HDR data, with certain experiments demonstrating ambivalence of subjects towards a single exposure compared to tone mapped content [18, 19]. Furthermore, within a single image, some tone mappers may perform better than others, as the study and the subsequent hybrid tone mapping approach by Banterle et al. [20] have shown. This work, rather than offering more complex approaches to backward compatibility, presents an alternative based on a practical equation which converts the HDR content into two separate streams that are then stored and easily retrieved. The advantage of such a method is that it is straightforward to implement, is computationally fast, and produces good results overall for both video and still image compression.

3.1 Encoding Method

Fig. 1 illustrates the general encoding approach used by pBC. This presents a robust yet simple implementation of the method. Two operations are performed initially, both aimed at producing the variables in Eq. (1). Following the extraction of L(S_HDR) (luma stream) and the computation of S_LDR (backwards compatible stream) from Eq. (1), the two streams are prepared for legacy encoding, whereby any LDR encoder can be used to encode the two streams. pBC is based around computing the backwards compatible layer using the following equation:

Fig. 1 Encoding method. Pipeline illustrating the encoding method used by pBC.

S_LDR = S_HDR / (L(S_HDR) + 1),  (1)

where S_HDR is the original HDR image/frame and L() is a function that extracts the luminance of the HDR image/frame using any luminance computation equation, such as L(S_HDR) = 0.2126 × S_HDR,R + 0.7152 × S_HDR,G + 0.0722 × S_HDR,B, where S_HDR,R, S_HDR,G, and S_HDR,B are, respectively, the red, green, and blue channels of the HDR image/frame. The equation is applied directly to each pixel. This function is similar to the sigmoid functions used in other methods, but has the unique characteristic of computing the color channels through the direct use of the HDR content and its luminance.

The resulting image/frame, S_LDR, is a full color image/frame that can be encoded with a legacy encoder and viewed on a traditional display. S_LDR is encoded as a stream or image. L(S_HDR) can be log encoded, to conform with the human visual system's perception of brightness, which follows a logarithmic scale, or transformed with any other encoding mechanism that reduces the range; the result is then also encoded using a traditional LDR legacy encoder. In the results presented in Section 4, log encoding of the luminance frame is used.
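As an illustrative sketch of the encoding side (the implementation described later in this chapter was in Matlab; this Python/NumPy version, including the choice of log(1 + L) for the log encoding, fills in details the text leaves open):

```python
import numpy as np

def pbc_encode(s_hdr):
    # BT.709 luminance of the HDR frame: Eq. (1)'s L(S_HDR)
    lum = (0.2126 * s_hdr[..., 0]
           + 0.7152 * s_hdr[..., 1]
           + 0.0722 * s_hdr[..., 2])
    # Eq. (1): divide every colour channel by L(S_HDR) + 1, per pixel
    s_ldr = s_hdr / (lum[..., None] + 1.0)
    # Log encode the luminance stream (log(1 + L) keeps zero-valued pixels valid)
    log_lum = np.log1p(lum)
    return s_ldr, log_lum
```

Both outputs would then be quantized to 8 bits and handed to any legacy encoder; s_ldr is the stream a legacy viewer plays directly.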

3.2 Channel Scaling

The dynamic range of S_LDR can, on occasion, be larger than 1; in such cases a function can be applied to ensure the range is maintained between 0 and 1. In the case of the results presented here, a further computation is applied to the S_LDR stream if the values of any of the channels of S_LDR exceed 1. This is computed as

S_CLDR = S_LDR / (1 + S_LDR × clamp(max(S_LDR) × IQR(L(S_HDR)), 0, 1)),  (2)

where S_CLDR is the corrected LDR data, which will not exceed the value of 1. The clamp(value, min, max) function clamps its first argument to the range [min, max]. IQR(X) computes the interquartile range of the computed HDR luminance channel and is defined as follows:

IQR(X) = median(X_R) − median(X_L),  (3)

where median() computes the median of a set of data, and X_R is the subset of X which takes values greater than median(X). X_L is defined analogously. If this step is applied, the corrected data S_CLDR is stored as the LDR part instead of S_LDR.
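Under one reading of Eqs. (2) and (3) (the exact grouping inside Eq. (2) is hard to recover from the source, so the formula below is an assumption), the channel scaling step could be sketched in Python with NumPy as:

```python
import numpy as np

def iqr(x):
    # Eq. (3): median of the upper half minus median of the lower half
    m = np.median(x)
    return np.median(x[x > m]) - np.median(x[x <= m])

def scale_channels(s_ldr, lum_hdr):
    # Correction applied only when some channel value exceeds 1
    if s_ldr.max() <= 1.0:
        return s_ldr
    # clamp(max(S_LDR) * IQR(L(S_HDR)), 0, 1) as a single scalar factor
    k = float(np.clip(s_ldr.max() * iqr(lum_hdr), 0.0, 1.0))
    # Sigmoid-style correction, mirroring the division in Eq. (1)
    return s_ldr / (1.0 + s_ldr * k)
```

When the clamped factor saturates at 1, this reduces to x/(1 + x), which is strictly below 1 for any nonnegative input, matching the stated guarantee on S_CLDR.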

3.3 Decoding Method

The decoding method is illustrated in Fig. 2 for viewers/displays that support HDR video and Fig. 3 for LDR viewers/displays. The main difference between the two is that when decoding for HDR viewers, the backwards compatible stream is enhanced with the luma stream, while for the LDR viewers the backwards compatible stream is played directly.

Fig. 2 Decoding method. Pipeline illustrating the decoding method used by pBC for HDR viewing.
Fig. 3 Legacy decoding method. Pipeline illustrating the decoding method used by pBC for LDR viewing.

When enhancing the backwards compatible stream with the luma stream, Eq. (1) is inverted to reconstruct the HDR stream as S_HDRDEC = S_LDR × (L(S_HDR) + 1). If required, Eq. (2) is also inverted and computed prior to inverting Eq. (1). When the luma stream is log encoded (as is the case for the results presented in the next section), it is log decoded prior to computing any equations. Overall, the decoding process is efficient and easy to implement, making it amenable to hardware implementations.
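A Python/NumPy sketch of the HDR-viewing decode path, assuming the luminance stream was log encoded as log(1 + L) (an assumption mirroring the encoding described in Section 3.1):

```python
import numpy as np

def pbc_decode(s_ldr, log_lum):
    # Undo the log encoding of the luminance stream
    lum = np.expm1(log_lum)
    # Invert Eq. (1): S_HDR = S_LDR * (L(S_HDR) + 1)
    return s_ldr * (lum[..., None] + 1.0)
```

For LDR viewing, no computation is needed at all: s_ldr is simply played back directly, which is what makes the method backwards compatible.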

4 Comparison Results

In order to demonstrate the potential of the method, results are presented for HDR video and image compression, comparing the proposed method with other backwards compatible methods. For video compression, all the chosen methods are dual stream methods; furthermore, they all maintain some amount of metadata, typically a few bytes per frame, which for these results is considered negligible. The backwards compatible methods used for testing are: pBC (practical backwards compatible), the method proposed here; the rate distortion method [13] (Rate Distortion); the Mantiuk et al. method [21] (HDR MPEG); the Ward and Simmons method [22], originally developed for still images but which produces good results when extended to video (JPEG HDRv); and, finally, two versions of inverse tone mapping compression methods which use a tone mapper for encoding and an inverse tone mapper for decoding. The inverse methods are included to demonstrate the superiority of pBC over the straightforward inverse tone mapping compression methods to which it may be considered comparable. One inverse method uses the Photographic Tone Reproduction operator [23] and its inverse [24] (Inverse (Reinhard)); the second uses a sigmoid similar to the pBC method (Inverse (Sigmoid)), but without the distinguishing quality of Eq. (1) of applying both S_HDR and L(S_HDR) within the same equation.

Six HDR videos were chosen for computing results. A single frame from each video is shown in Fig. 4; this includes a description of the dynamic range of this content. These videos represent a wide range of HDR possibilities, including CGI (Base) and special effects (Tears). Due to the relatively new nature of HDR, there is no single established technique for comparing methods. Three metrics are therefore used to compare the quality of the resulting frames against the original frames: the traditional PSNR method; logPSNR, which attempts to take into account the logarithmic nature of the human visual system when computing differences; and, finally, HDR VDP [25], which attempts to recreate the human visual system's response to differences in HDR images.
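For reference, PSNR and one common formulation of logPSNR can be sketched in Python with NumPy (the peak definition used for logPSNR here is an assumption; HDR VDP requires the full published model and is not reproduced):

```python
import numpy as np

def psnr(ref, test, peak):
    # Classic peak signal-to-noise ratio in decibels
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def log_psnr(ref, test, eps=1e-6):
    # Compare in the log domain to mimic the HVS's logarithmic response
    lref = np.log(np.maximum(ref, eps))
    ltest = np.log(np.maximum(test, eps))
    return psnr(lref, ltest, peak=lref.max() - lref.min())
```

A smaller reconstruction error yields a higher score under both metrics; the log variant weights errors in dark regions much more heavily than plain PSNR does.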

Fig. 4 HDR videos. Scenes used for video results. Dynamic range (DR) calculated as log₁₀(max/min). (A) Mercedes; DR = 4.28. (B) Tears; DR = 5.54. (C) Jag; DR = 5.35. (D) Base; DR = 8.37. (E) Seine; DR = 6.30. (F) Weld; DR = 6.48.

4.1 Implementation

All the methods were implemented in Matlab using the same framework for most intermediate routines not specific to an individual method. For each method, two LDR streams are produced, and the content of these is explained for each method below. pBC is a direct implementation of the method described in Section 3. The main base stream of pBC contains the backwards compatible stream, and the extension stream the luminance. The secondary stream is log encoded to account for the human visual system's response to luminance. JPEG HDRv is a video implementation of the JPEG HDR method [22], as it was seen to produce very good results for video. The method chosen for tone mapping was the Reinhard tone mapper [23], as it has been shown to perform well in psychophysics experiments [17]. The ratio stream is not limited in size, as is the case for still images in JPEG HDR, although the streams produced are typically very small; it is log encoded, as was the case for the extension layer of pBC. HDR MPEG makes use of a temporally coherent video tone mapper [26], as our implementation using the traditional Photographic Tone Reproduction operator was found to produce poor results for this method. Rate Distortion [13] uses the temporal tone mapper used in the original paper [14]. The bit rate optimization introduced in this method is not employed, as the idea was to compare methods; arguably, this optimization could be employed by other methods also. As mentioned above, Inverse (Reinhard) uses the Photographic Tone Reproduction operator and its inverse. The same sigmoid that inspired pBC is used for Inverse (Sigmoid). The secondary stream for both inverse methods consists of a residual computed from a reconstructed HDR frame's difference from the original.

4.2 Method and Results

The HDR video compression comparison is based around Fig. 5. All HDR frames were encoded with one of the chosen methods, a YUV file was generated, and a traditional legacy encoder was used to compress the two YUV files into legacy streams. High Efficiency Video Coding (HEVC) was chosen as the means of legacy encoding for this set of comparisons, as it represents the upcoming standard for traditional video compression. The x265 application was used to represent HEVC, set to its default maximum quality compression settings (the veryslow preset). Each method was encoded, for each scene, with quantization parameter (QP) settings of 40, 35, 30, 25, 20, 15, 10, 5, 2, and 1. Once encoded, streams were decoded into YUV streams, which were then converted into streams that were decoded by the various HDR decoding methods, producing a series of HDR frames. The resultant HDR frames were compared with the original frames using the three metrics. Bitrates of the resultant encoded streams were used for computing the output bitrates used in the results. The YUV subsampling format of 4:4:4 was chosen, with 8-bit depth for both streams. The QP slices were also set equally for both streams. Results are subsequently averaged across all six scenes for each of the metrics.

Fig. 5 Method. Results method used for computing results for all compression methods (CMs).

Fig. 6 shows results for the three metrics. pBC and HDR MPEG perform best overall across all three metrics; pBC performs better than HDR MPEG in PSNR at lower bitrates and in all metrics at middle bitrates, while HDR MPEG performs best at higher bitrates. JPEG HDRv also performs well, as does Rate Distortion; Rate Distortion would probably perform better with the optimization method included. Both Inverse methods perform relatively poorly. Section 4.4 provides further analysis of the results.

Fig. 6 HEVC HDR video results. Results showing the methods compared for a number of metrics averaged across six scenes.

4.3 Still Images

pBC can also be used for the compression of still HDR images. To demonstrate the approach, the same method used by Ward and Simmons for JPEG HDR [22] is adopted. In this method, a ratio image is encoded in the subband of the JPEG image, and the tone mapped image is stored in the body of the JPEG container. The ratio image is decreased to fit within the subband, typically reduced to <64 KB. pBC was adapted to use the same approach for the results presented in this section: the luminance content of the method is reduced to <64 KB and stored in the subband. Results comparing pBC with the original JPEG HDR method are presented in Fig. 7, averaged across the 20 images shown in Fig. 8 for the previously used metrics. The images shown in Fig. 8 were all computed with pBC. Bits per pixel (bpp) were computed from the resultant encoded images. Quality parameters were set to 1, 4, 16, 32, 48, 64, 80, 96, and a final no compression value represented as 100. As with JPEG HDRv, the Reinhard tone mapper was used for JPEG HDR. Results again demonstrate a good performance for pBC overall.

Fig. 7 HDR still image results. Results showing the methods compared for a number of metrics averaged across 20 images.
Fig. 8 HDR images. Images used for still image results computed with pBC.

4.4 Discussion

While details of the methods may vary in terms of implementations, parameters, tone mappers used, etc., which may modify the results, pBC performs relatively well for a straightforward method, and indicates that there is scope for methods that are unsophisticated. The Inverse methods demonstrate that this straightforward approach is not enough in isolation; care must be taken with the design of a practical encoding method, which pBC has provided. HDR MPEG, JPEG HDR (and JPEG HDRv), and Rate Distortion, as well as the Inverse methods, have the advantage of being able to use different tone mapping operators and therefore offer more flexibility. However, some of this flexibility comes at an added cost, potentially in the tone mappers used (local tone mappers, for example, may be quite expensive) and in the use of complex operators (Rate Distortion, for example, makes use of a cross-bilateral filter [16]). The Inverse methods lose some of the flexibility, as they require tone mappers that are invertible. Issues like computational complexity play an important role in HDR transmission, which will be a requirement for HDR to succeed in the broadcast domain. With this plethora of options, we believe pBC has an opportunity to succeed as a practical, straightforward method that produces good, consistent results with little complexity.

5 Conclusions

This work has presented a straightforward and practical backwards compatible method for HDR encoding of both images and video. As HDR content becomes more popular and is embraced by a larger audience, we suspect that the need for methods that are not too complex to understand and handle will become more commonplace. Our method, pBC, has compared favorably with other similar methods in terms of results over a number of metrics. While the other, more established techniques have more flexibility as part of their main characteristics, we hope that pBC offers a solution that may be adopted as an alternative to such methods.

References

[1] Banterle F., Artusi A., Debattista K., Chalmers A. Advanced High Dynamic Range Imaging: Theory and Practice. CRC Press, Natick, MA; 2011.

[2] Reinhard E., Heidrich W., Pattanaik S., Debevec P., Ward G., Myszkowski K. High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting. Morgan Kaufmann, San Francisco; 2010.

[3] Ward G. The RADIANCE lighting simulation and rendering system. In: Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques—SIGGRAPH’94. New York: ACM Press; 1994:459–472.

[4] Kainz F., Bogart R. Technical introduction to OpenEXR. 2009.

[5] Ward G. LogLuv encoding for full-gamut, high-dynamic range images. J. Graph. Tools. 1998;3(1):15–31.

[6] Mantiuk R., Krawczyk G., Myszkowski K., Seidel H.-P. Perception-motivated high dynamic range video encoding. ACM Trans. Graph. 2004;23(3):733.

[7] Garbas J.-U., Thoma H. Temporally coherent luminance-to-luma mapping for high dynamic range video coding with H.264/AVC. In: 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2011:829–832.

[8] Motra A., Thoma H. An adaptive Logluv transform for High Dynamic Range video compression. In: 2010 IEEE International Conference on Image Processing. 2010:2061–2064.

[9] Zhang Y., Reinhard E., Bull D. Perception-based high dynamic range video compression with optimal bit-depth transformation. In: 18th IEEE International Conference on Image Processing (ICIP). 2011:1321–1324.

[10] Lloyd S. Least squares quantization in PCM. IEEE Trans. Inf. Theory. 1982;28(2):129–137.

[11] Ward G. JPEG-HDR: a backwards-compatible, high dynamic range extension to JPEG. In: ACM SIGGRAPH 2005 Courses. 2005:8.

[12] Mantiuk R., Efremov A., Myszkowski K., Seidel H.-P. Backward compatible high dynamic range MPEG video compression. ACM Trans. Graph. 2006;25(3):713–723.

[13] Lee C., Kim C.S. Rate-distortion optimized compression of high dynamic range videos. In: Proceedings of the 16th European Signal Processing Conference (EUSIPCO 2008). 2008.

[14] Lee C., Kim C.-S. Gradient domain tone mapping of High Dynamic Range videos. In: 2007 IEEE International Conference on Image Processing. IEEE; 2007:461–464.

[15] Fattal R., Lischinski D., Werman M. Gradient domain high dynamic range compression. ACM Trans. Graph. 2002;21(3):249–256.

[16] Eisemann E., Durand F. Flash photography enhancement via intrinsic relighting. ACM Trans. Graph. 2004;23:673–678.

[17] Ledda P., Chalmers A., Troscianko T., Seetzen H. Evaluation of tone mapping operators using a High Dynamic Range display. In: ACM SIGGRAPH 2005 Papers—SIGGRAPH '05. 2005:640–648.

[18] Akyüz A.O., Reinhard E. Noise reduction in high dynamic range imaging. J. Visual Commun. Image Represent. 2007;18(5):366–376.

[19] Narwaria M., Silva M.P.D., Le Callet P., Pepion R. Single exposure vs tone mapped High Dynamic Range images: a study based on quality of experience. In: 2013 Proceedings of the 22nd European Signal Processing Conference (EUSIPCO). IEEE; 2014:2140–2144.

[20] Banterle F., Artusi A., Sikudova E., Bashford-Rogers T., Ledda P., Bloj M., Chalmers A. Dynamic range compression by differential zone mapping based on psychophysical experiments. In: Proceedings of the ACM Symposium on Applied Perception. ACM; 2012:39–46.

[21] Mantiuk R., Efremov A., Myszkowski K. Design and evaluation of backward compatible high dynamic range video compression. 2006.

[22] Ward G., Simmons M. Subband encoding of high dynamic range imagery. In: Proceedings of the 1st Symposium on Applied Perception in Graphics and Visualization—APGV ’04. New York: ACM Press; 2004:83.

[23] Reinhard E., Stark M., Shirley P., Ferwerda J. Photographic tone reproduction for digital images. ACM Trans. Graph. 2002;21(3):267–276.

[24] Banterle F., Ledda P., Debattista K., Chalmers A. Inverse tone mapping. In: Proceedings of the 4th International Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia—GRAPHITE ’06. 2006:349.

[25] Mantiuk R., Kim K.J., Rempel A.G., Heidrich W. HDR-VDP-2: a calibrated visual metric for visibility and quality predictions in all luminance conditions. In: ACM SIGGRAPH 2011 Papers. ACM, New York; 2011:40:1–40:14.

[26] Kiser C., Reinhard E., Tocci M., Tocci N. Real time automated tone mapping system for HDR video. In: IEEE International Conference on Image Processing. 2012:2749–2752.
