Infrared and Visible Image Fusion Based on Improved Dual Path Generation Adversarial Network

Cited: 0
Authors
Yang, Shen [1 ]
Tian, Lifan [1 ]
Liang, Jiaming [1 ]
Huang, Zefeng [1 ]
Affiliation
[1] Wuhan Univ Sci & Technol, Sch Informat Sci & Engn, Wuhan 430081, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image fusion; Deep learning; Generative Adversarial Network (GAN); Infrared image; Visible image; PERFORMANCE;
DOI
10.11999/JEIT220819
CLC Classification Number
TM (Electrical Engineering); TN (Electronic Technology, Communication Technology);
Discipline Classification Code
0808; 0809;
Abstract
An end-to-end dual-fusion-path Generative Adversarial Network (GAN) is proposed to preserve more information from the source images. First, the generator builds an infrared difference path and a visible difference path from two densely connected sub-networks with identical structure but independent parameters, which improves the contrast of the fused image, and a channel attention mechanism is introduced so that the network focuses on typical infrared targets and visible texture details. Second, both source images are fed directly into every layer of the network to extract more source-image features. Finally, exploiting the complementarity of the loss terms, a difference intensity loss, a difference gradient loss, and a structural similarity loss are combined to obtain a fused image with higher contrast. Experiments show that, compared with the Generative Adversarial Network with Multi-classification Constraints (GANMcC), the Residual Fusion Network for infrared and visible images (RFN-Nest), and other related fusion algorithms, the proposed method not only achieves the best scores on multiple evaluation metrics but also produces fused images with better visual quality that accord more closely with human visual perception.
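To make the abstract's two technical ingredients concrete, the PyTorch sketch below shows a squeeze-and-excitation style channel attention block and a composite loss with intensity, gradient, and structural-similarity terms. It is a minimal illustration under stated assumptions, not the authors' released code: the elementwise-max fusion targets, the Sobel gradient operator, the window-free SSIM stand-in, and the weights w_int, w_grad, w_ssim are all assumptions made for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed form)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Global average pooling -> per-channel weights -> rescale features.
        w = self.fc(x.mean(dim=(2, 3)))
        return x * w.unsqueeze(-1).unsqueeze(-1)


def sobel_gradient(img):
    """Gradient magnitude via fixed Sobel kernels (single-channel images assumed)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)


def simple_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Global (window-free) SSIM, a simplified stand-in for the full metric."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))


def fusion_loss(fused, ir, vis, w_int=1.0, w_grad=10.0, w_ssim=1.0):
    """Composite generator loss: intensity + gradient + SSIM terms (weights assumed)."""
    loss_int = F.l1_loss(fused, torch.max(ir, vis))            # keep salient intensities
    loss_grad = F.l1_loss(sobel_gradient(fused),
                          torch.max(sobel_gradient(ir), sobel_gradient(vis)))  # keep texture
    loss_ssim = 1 - 0.5 * (simple_ssim(fused, ir) + simple_ssim(fused, vis))   # keep structure
    return w_int * loss_int + w_grad * loss_grad + w_ssim * loss_ssim
```

In this sketch the three terms play the complementary roles described in the abstract: the intensity term pulls the fused image toward salient (typically infrared) intensities, the gradient term toward visible texture detail, and the SSIM term toward overall structural fidelity to both inputs.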
Pages: 3012 - 3021
Number of pages: 10
References
25 references in total
  • [1] Infrared and visible image fusion with supervised convolutional neural network
    An, Wen-Bo
    Wang, Hong-Mei
    [J]. OPTIK, 2020, 219
  • [2] Chen Yong, 2022, Optics and Precision Engineering (光学精密工程), V30, P2253
  • [3] Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition
    Cui, Guangmang
    Feng, Huajun
    Xu, Zhihai
    Li, Qi
    Chen, Yueting
    [J]. OPTICS COMMUNICATIONS, 2015, 341 : 199 - 209
  • [4] Image quality measures and their performance
    Eskicioglu, AM
    Fisher, PS
    [J]. IEEE TRANSACTIONS ON COMMUNICATIONS, 1995, 43 (12) : 2959 - 2965
  • [5] A Dual-branch Network for Infrared and Visible Image Fusion
    Fu, Yu
    Wu, Xiao-Jun
    [J]. 2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021: 10675 - 10680
  • [6] Semi-Supervised Sparse Representation Based Classification for Face Recognition With Insufficient Labeled Samples
    Gao, Yuan
    Ma, Jiayi
    Yuille, Alan L.
    [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2017, 26 (05) : 2545 - 2560
  • [7] Image fusion: Advances in the state of the art
    Goshtasby, A. Ardeshir
    Nikolov, Stavri
    [J]. INFORMATION FUSION, 2007, 8 (02) : 114 - 118
  • [8] A new image fusion performance metric based on visual information fidelity
    Han, Yu
    Cai, Yunze
    Cao, Yin
    Xu, Xiaoming
    [J]. INFORMATION FUSION, 2013, 14 (02) : 127 - 135
  • [9] Multimodal medical image fusion based on IHS and PCA
    He, Changtao
    Liu, Quanxi
    Li, Hongliang
    Wang, Haixu
    [J]. 2010 SYMPOSIUM ON SECURITY DETECTION AND INFORMATION PROCESSING, 2010, 7 : 280 - 285
  • [10] Multisensor image fusion using the wavelet transform
    Li, H.
    Manjunath, B. S.
    Mitra, S. K.
    [J]. GRAPHICAL MODELS AND IMAGE PROCESSING, 1995, 57 (03) : 235 - 245