A high SD value indicates a fused image with high contrast:

$$\mathrm{SD}=\sqrt{\frac{1}{MN}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\bigl(I(i,j)-\bar{I}\bigr)^{2}},\qquad(6.54)$$

where I is the fused image of size M × N and Ī is its mean intensity.
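As a minimal sketch of eq. (6.54), the following Python/NumPy function computes the SD of a fused image; the function name and the use of NumPy are our own illustrative choices, not part of the original text:

```python
import numpy as np

def fused_image_sd(fused_image):
    """Standard deviation of a fused image, following eq. (6.54)."""
    img = np.asarray(fused_image, dtype=np.float64)
    mean_intensity = img.mean()                       # I-bar: mean of the fused image
    return np.sqrt(np.mean((img - mean_intensity) ** 2))
```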

6.4.1.6 Root-mean-square error

The root-mean-square deviation (RMSD), or root-mean-square error (RMSE), measures the differences between the values predicted by a model or an estimator and the values actually observed. The RMSD represents the sample SD of the differences between observed and predicted values. These individual differences are called residuals when the calculations are performed over the data sample used for estimation, and prediction errors when computed out-of-sample. The RMSD aggregates the magnitudes of the prediction errors at various times into a single measure of predictive power. RMSD is a good measure of accuracy, but since it is scale-dependent it should be used only to compare the prediction errors of different models for a specific variable, not across variables.

RMSE is a good indicator of the spectral quality of fused images:

$$\mathrm{RMSE}=\sqrt{\frac{1}{MN}\sum_{i=0}^{M}\sum_{j=0}^{N}\bigl(R(i,j)-F(i,j)\bigr)^{2}},\qquad(6.55)$$

where R and F are reference and fused images of size M × N.
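A minimal NumPy sketch of eq. (6.55) could look as follows (the function name is our own, and a reference image is assumed to be available):

```python
import numpy as np

def fusion_rmse(reference, fused):
    """Root-mean-square error between reference and fused images, eq. (6.55)."""
    r = np.asarray(reference, dtype=np.float64)
    f = np.asarray(fused, dtype=np.float64)
    return np.sqrt(np.mean((r - f) ** 2))
```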

6.4.1.7 Peak signal-to-noise ratio

Peak signal-to-noise ratio, often abbreviated PSNR, is the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation. Since many signals have a wide dynamic range, PSNR is typically expressed on the logarithmic decibel scale.

Since image enhancement, or improving the visual quality of a digital image, is subjective, the judgement that one method provides a better-quality image than another can vary from person to person. It is therefore necessary to establish quantitative and empirical measures for analysing the impact of image enhancement algorithms on image quality. A high PSNR value indicates that the fused and reference images are similar and hence that the fusion is of superior quality.

$$\mathrm{PSNR}=20\log_{10}\left(\frac{L^{2}}{\frac{1}{MN}\sum_{i=0}^{M}\sum_{j=0}^{N}\bigl(R(i,j)-F(i,j)\bigr)^{2}}\right),\qquad(6.56)$$

where R and F are the reference and fused images of size M × N, and L is the number of grey levels in the image.
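The following sketch evaluates eq. (6.56) as printed, i.e. 20·log₁₀ of L² divided by the mean squared error; the function name and the default of 256 grey levels for 8-bit images are our own assumptions:

```python
import numpy as np

def fusion_psnr(reference, fused, grey_levels=256):
    """PSNR between reference and fused images, following eq. (6.56) as printed."""
    r = np.asarray(reference, dtype=np.float64)
    f = np.asarray(fused, dtype=np.float64)
    mse = np.mean((r - f) ** 2)
    if mse == 0:
        return np.inf                                  # identical images
    return 20.0 * np.log10(grey_levels ** 2 / mse)
```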

6.4.1.8 Mutual information

A fused image should contain the important information from the original image set. Clearly, the notion of ‘important information’ depends on the application and is difficult to define. Mutual information (MI) measures the amount of information that one image contains about another.

MI measures the degree of dependency between two variables A and B by evaluating the distance between the joint distribution p_AB(a, b) and the product p_A(a)·p_B(b), which corresponds to complete independence, by means of the relative entropy:

$$I_{AB}(a;b)=D\bigl(p_{AB}(a,b)\,\|\,p_{A}(a)\,p_{B}(b)\bigr),\qquad(6.57)$$
$$I_{AB}(a;b)=\sum_{a,b}p_{AB}(a,b)\log_{2}\frac{p_{AB}(a,b)}{p_{A}(a)\,p_{B}(b)}.\qquad(6.58)$$

However, to calculate MI, the joint probability distribution function must be known. The joint and marginal distributions p_AB(a, b), p_A(a) and p_B(b) are obtained simply by normalizing the joint and marginal histograms of the two images. The joint histogram of images A and B is defined as

$$h_{AB}(a,b)=\sum_{m=1}^{M}\sum_{n=1}^{N}h_{AB}\bigl(I_{A}(m,n),I_{B}(m,n)\bigr);\qquad a=0,\ldots,I;\;b=0,\ldots,J,\qquad(6.59)$$

where M × N is the size of the images, and I_A(m, n) and I_B(m, n) are the intensity values of pixel (m, n) in images A and B, so that I_A(m, n) ∈ [0, I] and I_B(m, n) ∈ [0, J]. Considering the source images A and B and the fused image F, the amount of information that F contains about A and B is estimated as

$$\mathrm{MI}_{FA}(f;a)=\sum_{a,f}p_{AF}(a,f)\log\frac{p_{AF}(a,f)}{p_{A}(a)\,p_{F}(f)},\qquad(6.60)$$
$$\mathrm{MI}_{FB}(f;b)=\sum_{b,f}p_{BF}(b,f)\log\frac{p_{BF}(b,f)}{p_{B}(b)\,p_{F}(f)}.\qquad(6.61)$$

Consequently, the image fusion performance measure can be defined as

$$\mathrm{MI}_{F}^{AB}=\mathrm{MI}_{FA}(f;a)+\mathrm{MI}_{FB}(f;b).\qquad(6.62)$$
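A minimal sketch of eqs. (6.59)–(6.62), estimating the distributions from normalized joint histograms as described above; the use of base-2 logarithms and 256 histogram bins are our own assumptions:

```python
import numpy as np

def _mutual_information(img_x, img_y, bins=256):
    """MI between two images, estimated from their normalized joint histogram."""
    joint, _, _ = np.histogram2d(img_x.ravel(), img_y.ravel(), bins=bins)
    p_xy = joint / joint.sum()                         # joint distribution
    p_x = p_xy.sum(axis=1, keepdims=True)              # marginal of img_x
    p_y = p_xy.sum(axis=0, keepdims=True)              # marginal of img_y
    nonzero = p_xy > 0                                 # avoid log(0)
    return np.sum(p_xy[nonzero] * np.log2(p_xy[nonzero] / (p_x @ p_y)[nonzero]))

def fusion_mi(source_a, source_b, fused, bins=256):
    """Fusion performance measure MI_F^{AB} = MI_FA + MI_FB, eq. (6.62)."""
    return (_mutual_information(fused, source_a, bins)
            + _mutual_information(fused, source_b, bins))
```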

6.5 Experimental results

In this section we show the fusion results of several image fusion methods and their evaluation using the image quality metrics described above. DWT is used to decompose the images into low-frequency and high-frequency subbands in both proposed methods. ‘Integrated PCNN and PCA with CS’ refers to the proposed image fusion method based on PCNN and PCA in a CS framework; ‘existing NSCT and PCNN’ refers to the image fusion methods described by Yin et al. [4]. A rough illustration of the decomposition step is given after this paragraph.
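The following sketch (not the authors' exact implementation) uses the PyWavelets package to split a source image into the low-frequency approximation subband and the high-frequency detail subbands on which the PCNN and PCA fusion rules would then operate:

```python
import numpy as np
import pywt

def dwt_subbands(image, wavelet="db1", level=1):
    """Single-level DWT decomposition into low- and high-frequency subbands."""
    coeffs = pywt.wavedec2(np.asarray(image, dtype=np.float64), wavelet, level=level)
    low = coeffs[0]        # approximation subband (candidate for the PCNN fusion rule)
    highs = coeffs[1:]     # detail subbands (candidate for the PCA fusion rule)
    return low, highs
```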

From Tables 6.1, 6.2, 6.3, 6.4 and 6.5 and Figures 6.11, 6.12, 6.13, 6.14 and 6.15 we can infer that the proposed method of integrated PCNN and PCA with CS, i.e. image fusion based on PCNN and PCA in a CS framework, gives better results than all the other methods.

The spatial-domain method, PCA-based fusion, does not perform well in terms of edge retention and also fails in terms of image contrast quality, since it operates directly on the pixel values of the source images. Transform-domain fusion was therefore developed, and the results confirm that it outperforms spatial-domain fusion. The existing transform-domain method, however, suffers from drawbacks such as dissimilarity between the fused and original images, poor PSNR values and a high storage requirement. These artefacts were removed by the CS-based fusion method, in which the information content of the source images is conserved in the fused image; it is also computationally efficient and less expensive. Despite this good performance, the CS method still produces artefacts such as noise and fused images of poor fidelity. It fails in terms of mutual information because it has no dedicated fusion rules for the high-frequency components, and consequently it also fails to preserve gradient information in the fused images.

Separate fusion rules were therefore needed for the low-frequency and high-frequency components, so that the different information carried by each subband is preserved in the fused image. PCNN-based fusion turns out to be the best choice for the low-frequency components, since it takes neighbourhood information into account. PCA-based fusion turns out to be the best choice for the high-frequency components, since it exploits the variance of the subband and thereby preserves edges and contours effectively. In addition, a CS algorithm is used for accurate reconstruction of the fused image, which saves a large amount of storage and maximizes the speed of operation. The proposed method, based on PCNN and PCA incorporated into a CS framework, can also be used for the fusion of colour images, and its performance is better than that of conventional methods. The results were compared with those of existing methods, and the comparative analysis showed that the proposed method gives better results in terms of both the visual quality and the information content of the fused images.

Fig. 6.11: Image fusion in CS and PCNN – aerial view of England: (a) MS image (b) PCNN (c) CS (d) Fused image

Tab. 6.1: Performance evaluation table of aerial view of England

Fig. 6.12: Image fusion in CS and PCNN – aerial view of forested land: (a) MS image (b) PCNN (c) CS (d) Fused image

Tab. 6.2: Performance evaluation table of aerial view of forested land

Fig. 6.13: Image fusion in CS and PCNN – aerial view of Egypt: (a) MS image (b) PCNN (c) CS (d) Fused image

Tab. 6.3: Performance evaluation table of aerial view of Egypt

Fig. 6.14: Image fusion in CS and PCNN – aerial view of Balboa: (a) MS image (b) PCNN (c) CS (d) Fused image

Tab. 6.4: Performance evaluation table of aerial view of Balboa

Fig. 6.15: Image fusion in CS and PCNN – aerial view of bare soil: (a) MS image (b) PCNN (c) CS (d) Fused image

Tab. 6.5: Performance evaluation table of aerial view of bare soil

6.6 Conclusion and future work

In this work, the concept of image fusion using PCNN and a CS sampling method was proposed, and image fusion applications were discussed. Basic image fusion techniques, including PCA-based fusion and wavelet-based fusion, were presented and implemented for multifocus image fusion on a set of multifocus raw images. CS-based image fusion plays an important role nowadays, and we explained how PCNN- and CS-based image fusion methods differ from common image fusion methods. Integrated image fusion methods were proposed, namely image fusion using PCNN and PCA both with and without a CS framework. From the results it is clear that the proposed image fusion method outperforms the other image fusion methods, and the proposed methods can also be used for the fusion of colour images. The framework adopts the maximum gradient, which depends on the objective fusion of the input gradients in the fused images. The proposed work was evaluated on a large collection of remotely sensed data covering England, forested land, Egypt, Island, Balboa and bare soil, and performance measures such as Q^{AB/F}, entropy, SD, RMSE, PSNR and MI were investigated. The resulting objective metrics were compared and analysed for the proposed and existing image fusion methods, and the proposed CS-based image fusion method gave the best performance among all of them. Image registration has a significant contribution to make to the enhancement of image fusion methods. Future work should focus on a multisensor image fusion method based on CS and PCNN; CS-based video fusion could also be pursued.

Acknowledgment: I express my gratitude to God Almighty for helping me in times of distress and need. I thank Dr. Deepak Mishra, Associate Professor in the Department of Avionics, Indian Institute of Space Technology, Trivandrum, who gave me the opportunity to work with the remote sensing data used in the experiments. I am indebted to my Indian collaborators, Dr. D. Narain Ponraj and Mr. X. Ajay Vasanth, Assistant Professors at KITS Coimbatore, India, and my foreign collaborators, Dr. Lawrence Henesey, Professor at Blekinge Institute of Technology, Sweden, and Dr. Chiung Ching Ho, Professor at Multimedia University, Malaysia, who helped me complete this dissertation and the challenging research that lies behind it; their meticulous guidance made this research successful.

Bibliography

[1]Xydeas CS and Petrovic V. Objective image fusion performance measure. Electronics Letters, 36(4):308–309, 2000.

[2]Wei S and Ke W. A multi-focus image fusion algorithm with DT-CWT. International Conference on Computational Intelligence and Security, pages 147–151, 2007.

[3]Pohl C and van Genderen J. Multisensor image fusion in remote sensing: Concepts, methods and applications. International Journal of Remote Sensing, 19:823–854, 1998.

[4]Yin H, Liu Z, Fang B, and Li Y. A novel image fusion approach based on compressive sensing. Optics Communications, 354(Supplement C):299–313, 2015.

[5]Needell D and Tropp JA. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis, 26(3):301–321, 2009.

[6]Xiaobo Q. Image fusion algorithm based on spatial frequency motivated pulse coupled neural networks in NSCT domain. Acta Automatica Sinica, 34:1508–1514, 2008.

[7]Tian Y and Tian X. Remote sensing image fusion based on orientation information in nonsubsampled contourlet transform domain. In International Conference on Advanced Electronic Science and Technology, pages 57–63. Atlantis Press, 2016.

[8]Chen Y and Qin Z. PCNN-based image fusion in compressed domain. Mathematical Problems in Engineering, Hindawi Publishing Corporation, 2015.

[9]Pandit VR and Bhiwani RJ. Image fusion in remote sensing applications: A review. International Journal of Computer Applications, 120(10):22–32, June 2015.

[10]Li H, Ding W, Cao X, and Liu C. Image registration and fusion of visible and infrared integrated camera for medium-altitude unmanned aerial vehicle remote sensing. Remote Sensing, 9(5):441, 2017.

[11]Yang Y, Tong S, Huang S, Lin P, and Fang Y. A hybrid method for multi-focus image fusion based on fast discrete curvelet transform. IEEE Access, 2017.

[12]Rajalingam B and Priya R. Multimodality medical image fusion based on hybrid fusion techniques. International Journal on Engineering and Manufacturing Science, 7(1):22–29, 2017.

[13]Biswas B, Sen BK, and Choudhuri R. Remote sensing image fusion using PCNN model parameter estimation by gamma distribution in shearlet domain. Procedia Computer Science, 70:304–310, 2015.

[14]Li J, Song M, and Peng Y. Infrared and visible image fusion based on robust principal component analysis and compressed sensing. Infrared Physics & Technology, 89:129–139, 2018.

[15]Selesnick IW, Baraniuk RG, and Kingsbury NC. The dual-tree complex wavelet transform. IEEE Signal Processing Magazine, 22(6):123–151, 2005.

[16]Eckhorn R, Reitbock HJ, Arndt M, and Dicke P. A neural network for feature linking via synchronous activity: Results from cat visual cortex and from simulations. In Cotterill RMJ, editor, Models of Brain Function. Cambridge University Press, 1989.

[17]Ranganath HS, Kuntimad G, and Johnson JL. Pulse coupled neural networks for image processing. In Proceedings of IEEE Southeastcon ’95: Visualize the Future, pages 37–43, 1995.

[18]Baig MY, Lai EMK, and Punchihewa A. Compressed sensing-based distributed image compression. Applied Sciences, 4(2):128–147, 2014.

[19]Unni VS. Design and implementation of multifocus image fusion algorithms in compressive sensing framework. Master’s thesis, Indian Institute of Science, Trivandrum, 2014.

[20]Islam SMR, Huang X, and Ou KL. Image compression based on compressive sensing using wavelet lifting scheme. The International Journal of Multimedia and Its Applications, 7(1):1–16, 2015.

[21]Baraniuk RG, Cevher V, Duarte MF, and Hegde C. Model-based compressive sensing. IEEE Transactions on Information Theory, 56(4):1982–2001, 2010.

[22]Baraniuk RG. Compressive sensing [lecture notes]. IEEE Signal Processing Magazine, 24(4):118–121, 2007.
