Infrared and visible image fusion based on WEMD and generative adversarial network reconstruction

Authors
Yang Y. [1 ]
Gao X. [1 ]
Dang J. [1 ]
Wang Y. [1 ]
Institutions
[1] School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou
Keywords
Generative adversarial network; Image fusion; Infrared and visible image; Window empirical mode decomposition
DOI
10.37188/OPE.20223003.0320
Abstract
To overcome blurred edges and low contrast in the fusion of infrared and visible images, a fusion algorithm based on two-dimensional window empirical mode decomposition (WEMD) and generative adversarial network (GAN) reconstruction was proposed. The infrared and visible images were decomposed by WEMD into intrinsic mode function (IMF) components and residual components. The IMF components were fused by principal component analysis, and the residual components were fused by weighted averaging. The preliminary fused image was reconstructed and fed into the GAN to compete adversarially against the visible image, so that missing background information was supplemented and the final fused image was obtained. The average gradient (AG), edge strength (EI), entropy (EN), structural similarity (SSIM), and mutual information (MI) were used for objective evaluation and increased by 46.13%, 39.40%, 19.91%, 3.72%, and 33.10%, respectively, compared with five other methods. The experimental results show that the proposed algorithm better retains the edge and texture details of the source images while highlighting the infrared targets, offers better visibility, and has clear advantages in the objective evaluation indicators. © 2022, Science Press. All rights reserved.
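The abstract describes the preliminary fusion stage (WEMD decomposition, PCA fusion of the IMF components, weighted averaging of the residuals) in words only. The sketch below is a minimal Python illustration of that stage, assuming a caller-supplied wemd_decompose(img, n_imfs) routine that returns the IMF list and the residual, equal residual weights, and images normalized to [0, 1]; the eigenvector-based PCA weighting rule shown is a common choice and may differ from the paper's, and the GAN refinement stage is omitted.

```python
import numpy as np

def pca_fuse(comp_a, comp_b):
    """Fuse two corresponding IMF components with a PCA weighting rule:
    weights come from the dominant eigenvector of the 2x2 covariance
    matrix of the flattened components."""
    data = np.stack([comp_a.ravel(), comp_b.ravel()])
    eigvals, eigvecs = np.linalg.eigh(np.cov(data))  # ascending order
    w = np.abs(eigvecs[:, np.argmax(eigvals)])
    w = w / w.sum()
    return w[0] * comp_a + w[1] * comp_b

def preliminary_fusion(ir, vis, wemd_decompose, n_imfs=3):
    """Preliminary fusion: decompose both images with WEMD, fuse the IMF
    components by PCA and the residuals by weighted averaging, then
    reconstruct by summing the fused components."""
    imfs_ir, res_ir = wemd_decompose(ir, n_imfs)
    imfs_vis, res_vis = wemd_decompose(vis, n_imfs)
    fused_imfs = [pca_fuse(a, b) for a, b in zip(imfs_ir, imfs_vis)]
    fused_res = 0.5 * res_ir + 0.5 * res_vis  # equal weights assumed here
    return np.clip(sum(fused_imfs) + fused_res, 0.0, 1.0)
```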
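The five objective metrics are standard fusion-quality measures; as an illustration, average gradient (AG) and entropy (EN) can be computed as below for a greyscale image with values in [0, 1], while SSIM and MI are usually taken from an image-processing library.

```python
import numpy as np

def average_gradient(img):
    """Average gradient (AG): mean magnitude of horizontal and vertical
    intensity differences; larger values indicate sharper detail."""
    gx = np.diff(img, axis=1)[:-1, :]  # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]  # vertical differences
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def entropy(img, levels=256):
    """Information entropy (EN) of the grey-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=levels, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```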
Pages: 320-330
Number of pages: 10