LatRAIVF: An Infrared and Visible Image Fusion Method Based on Latent Regression and Adversarial Training

Cited by: 7
Authors
Luo, Xiaoqing [1 ]
Wang, Anqi [1 ]
Zhang, Zhancheng [2 ]
Xiang, Xinguang [3 ]
Wu, Xiao-Jun [1 ]
Affiliations
[1] Jiangnan Univ, Sch Artificial Intelligence & Comp Sci, Wuxi 214122, Jiangsu, Peoples R China
[2] Suzhou Univ Sci & Technol, Sch Elect & Informat Engn, Suzhou 215009, Peoples R China
[3] Nanjing Univ Sci & Technol, Key Lab Informat Percept & Syst Publ Secur MIIT, Nanjing 210094, Peoples R China
Funding: National Natural Science Foundation of China
Keywords: Deep learning (DL); generative adversarial networks (GANs); image fusion; infrared and visible image; latent space regression; quality assessment; framework
DOI: 10.1109/TIM.2021.3105250
Chinese Library Classification: TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Subject classification codes: 0808; 0809
Abstract
In this article, we propose a novel method for infrared and visible image fusion based on latent regression and adversarial training, named LatRAIVF. In contrast to existing deep learning (DL)-based image fusion methods that focus only on spatial information, we exploit the information carried by high-level feature maps in the latent space, which guides the network to learn semantically important features. The proposed method is built on the conditional generative adversarial network (GAN) framework: two encoders learn semantic latent representations for the infrared and visible images, respectively; these representations are combined by a max-selection strategy and fed into a decoder, with skip connections between corresponding encoder and decoder layers, to produce the fused image. In addition to the adversarial process, which drives the fused image toward more realistic details, we design two branches to constrain image generation: a content loss that keeps the fused image close to the label image, and a latent regression loss that ensures the fused image retains the salient features of the infrared and visible sources. Because public infrared and visible image datasets lack physical ground-truth fused images and the desired fused image is difficult to define, we use an existing RGB-D dataset to synthesize an infrared and visible image dataset with ground truths, based on a widely used optical model, for better network training. Comparison experiments show that the fused results of the proposed method transfer meaningful features from the source images and provide good fusion quality.
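To make the pipeline described in the abstract concrete, below is a minimal PyTorch-style sketch of a generator with two encoders, max-selection fusion of the latent feature maps, a decoder with skip connections, and a three-term generator objective (adversarial, content, and latent regression). All module definitions, channel widths, and loss weights are illustrative assumptions, not the authors' published configuration.

```python
# Hypothetical sketch of a LatRAIVF-style generator and loss terms.
# Channel sizes and loss weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """3x3 conv + LeakyReLU, used for both encoder and decoder stages."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.LeakyReLU(0.2, inplace=True),
    )


class Encoder(nn.Module):
    """Maps a single-channel source image to multi-scale latent feature maps."""
    def __init__(self):
        super().__init__()
        self.stage1 = conv_block(1, 32)
        self.stage2 = conv_block(32, 64)
        self.stage3 = conv_block(64, 128)

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(F.max_pool2d(f1, 2))
        f3 = self.stage3(F.max_pool2d(f2, 2))
        return f1, f2, f3                      # shallow-to-deep feature maps


class Decoder(nn.Module):
    """Reconstructs the fused image from fused latents, with skip connections."""
    def __init__(self):
        super().__init__()
        self.up2 = conv_block(128 + 64, 64)
        self.up1 = conv_block(64 + 32, 32)
        self.out = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, z1, z2, z3):
        x = F.interpolate(z3, scale_factor=2, mode="bilinear", align_corners=False)
        x = self.up2(torch.cat([x, z2], dim=1))        # skip connection, level 2
        x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
        x = self.up1(torch.cat([x, z1], dim=1))        # skip connection, level 1
        return torch.tanh(self.out(x))


class Generator(nn.Module):
    """Two encoders (infrared / visible), max-selection fusion, one decoder."""
    def __init__(self):
        super().__init__()
        self.enc_ir = Encoder()
        self.enc_vis = Encoder()
        self.dec = Decoder()

    def forward(self, ir, vis):
        ir_feats = self.enc_ir(ir)
        vis_feats = self.enc_vis(vis)
        # Max-selection strategy: keep the element-wise stronger response.
        fused = [torch.max(a, b) for a, b in zip(ir_feats, vis_feats)]
        return self.dec(*fused)


def generator_loss(d_fake, fused, label, z_fused, z_regressed,
                   w_adv=1.0, w_content=100.0, w_latent=10.0):
    """Adversarial + content + latent-regression terms (weights are assumptions)."""
    adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    content = F.l1_loss(fused, label)          # keep the fused image close to the label
    latent = F.l1_loss(z_regressed, z_fused)   # re-encoded fused image should match its latent
    return w_adv * adv + w_content * content + w_latent * latent


# Example usage on dummy single-channel inputs (sizes assumed divisible by 4):
# G = Generator()
# fused = G(torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128))
```

The max-selection step keeps, at every spatial location and channel, the stronger of the two encoder responses, which is how the abstract describes combining the infrared and visible latent representations before decoding.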
Pages: 16
Related papers (50 in total)
• [31] Ma, Jiayi; Liang, Pengwei; Yu, Wei; Chen, Chen; Guo, Xiaojie; Wu, Jia; Jiang, Junjun. Infrared and visible image fusion via detail preserving adversarial learning. Information Fusion, 2020, 54: 85-98.
• [32] Xu, Dongdong; Wang, Yongcheng; Xu, Shuyan; Zhu, Kaiguang; Zhang, Ning; Zhang, Xin. Infrared and Visible Image Fusion with a Generative Adversarial Network and a Residual Network. Applied Sciences-Basel, 2020, 10 (02).
• [33] Zhou, Jie; Li, Wenjuan; Zhang, Peng; Luo, Jun; Li, Sijing; Zhao, Jiong. Infrared and Visible Image Fusion Method Based on NSST and Guided Filtering. ICOSM 2020: Optoelectronic Science and Materials, 2020, 11606.
• [34] Li, Qinghua; Yan, Bao; Luo, Delin. Infrared and visible image fusion method based on hierarchical attention mechanism. Journal of Electronic Imaging, 2024, 33 (02).
• [35] Yi, Shi; Li, Junjie; Yuan, Xuesong. DFPGAN: Dual fusion path generative adversarial network for infrared and visible image fusion. Infrared Physics & Technology, 2021, 119.
• [36] Zhu, Wen-Qing; Tang, Xin-Yi; Zhang, Rui; Chen, Xiao; Miao, Zhuang. Infrared and visible image fusion based on edge-preserving and attention generative adversarial network. Journal of Infrared and Millimeter Waves, 2021, 40 (05): 696-708.
• [37] Chen, Lei; Han, Jun; Tian, Feng. Colorization of fusion image of infrared and visible images based on parallel generative adversarial network approach. Journal of Intelligent & Fuzzy Systems, 2021, 41 (01): 2255-2264.
• [38] Li, Jing; Huo, Hongtao; Li, Chang; Wang, Renhua; Feng, Qi. AttentionFGAN: Infrared and Visible Image Fusion Using Attention-Based Generative Adversarial Networks. IEEE Transactions on Multimedia, 2021, 23: 1383-1396.
• [39] Zhang, Dazhi; Hou, Jilei; Wu, Wei; Lu, Tao; Zhou, Huabing. A Generative Adversarial Network with Dual Discriminators for Infrared and Visible Image Fusion Based on Saliency Detection. Mathematical Problems in Engineering, 2021, 2021.
• [40] Ledwon, Daniel; Juszczyk, Jan; Pietka, Ewa. Infrared and Visible Image Fusion Objective Evaluation Method. Information Technology in Biomedicine, 2019, 1011: 268-279.