Unsupervised end-to-end infrared and visible image fusion network using learnable fusion strategy

Cited: 0
Authors
Chen, Yili [1 ,2 ]
Wan, Minjie [1 ,2 ]
Xu, Yunkai [1 ,2 ]
Cao, Xiqing [3 ,4 ]
Zhang, Xiaojie [3 ,4 ]
Chen, Qian [1 ,2 ]
Gu, Gouhua [1 ,2 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Elect & Opt Engn, Nanjing 210094, Peoples R China
[2] Nanjing Univ Sci & Technol, Jiangsu Key Lab Spectral Imaging & Intelligent Sen, Nanjing 210094, Peoples R China
[3] Shanghai Aerosp Control Technol Inst, Shanghai 201109, Peoples R China
[4] Infrared Detect Technol Res & Dev Ctr, Shanghai 201109, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
QUALITY ASSESSMENT; PERFORMANCE; FRAMEWORK; DEEP; NEST
DOI
10.1364/JOSAA.473908
CLC classification
O43 [Optics]
Discipline codes
070207; 0803
Abstract
Infrared and visible image fusion aims to reconstruct fused images with comprehensive visual information by merging the complementary features of source images captured by different imaging sensors. This technology has been widely used in civil and military fields, such as urban security monitoring, remote sensing measurement, and battlefield reconnaissance. However, existing methods still suffer from preset fusion strategies that cannot be adjusted to different fusion demands, and from the loss of information during feature propagation, leading to poor generalization ability and limited fusion performance. Therefore, we propose an unsupervised end-to-end network with a learnable fusion strategy for infrared and visible image fusion in this paper. The presented network mainly consists of three parts: the feature extraction module, the fusion strategy module, and the image reconstruction module. First, in order to preserve more information during feature propagation, dense connections and residual connections are applied to the feature extraction module and the image reconstruction module, respectively. Second, a new convolutional neural network is designed to adaptively learn the fusion strategy, which enhances the generalization ability of our algorithm. Third, due to the lack of ground truth in fusion tasks, a loss function consisting of saliency loss and detail loss is exploited to guide the training direction and balance the retention of different types of information. Finally, the experimental results verify that the proposed algorithm delivers competitive performance compared with several state-of-the-art algorithms in terms of both subjective and objective evaluations. Our codes are available at https://github.com/MinjieWan/Unsupervised-end-to-end-infrared-and-visible-image-fusion-network-using-learnable-fusion-strategy. (c) 2022 Optica Publishing Group
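The abstract describes a composite objective of saliency loss plus detail loss for training without ground truth. The sketch below illustrates one plausible form of such an objective; the paper's exact formulation is not given in the abstract, so the gradient-based saliency weights, the intensity target, and the `lam` trade-off parameter are illustrative assumptions rather than the authors' actual loss.

```python
import numpy as np

def gradient(img):
    # Finite-difference gradient magnitude, a simple proxy for detail/texture.
    gx = np.abs(np.diff(img, axis=1, append=img[:, -1:]))
    gy = np.abs(np.diff(img, axis=0, append=img[-1:, :]))
    return gx + gy

def fusion_loss(fused, ir, vis, lam=1.0):
    """Hypothetical composite loss: saliency loss + lam * detail loss.

    Saliency loss: pull fused intensities toward a per-pixel mix of the
    sources, weighted by each source's relative saliency (gradient energy).
    Detail loss: match the fused gradient to the element-wise maximum of
    the source gradients, favoring the sharper texture of either input.
    """
    g_ir, g_vis = gradient(ir), gradient(vis)
    w_ir = g_ir / (g_ir + g_vis + 1e-8)        # per-pixel saliency weight for IR
    target = w_ir * ir + (1.0 - w_ir) * vis    # saliency-weighted intensity target
    saliency_loss = np.mean((fused - target) ** 2)
    detail_loss = np.mean(np.abs(gradient(fused) - np.maximum(g_ir, g_vis)))
    return saliency_loss + lam * detail_loss
```

In the paper itself, the fusion strategy module is learned rather than fixed, so a weight map like `w_ir` would be predicted by a convolutional network instead of computed analytically; the loss above only shows how two complementary terms can balance intensity retention against detail retention.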
Pages: 2257-2270
Page count: 14
Related papers
50 records in total
  • [1] IVJDN: AN END-TO-END NETWORK FOR JOINT INFRARED AND VISIBLE IMAGE FUSION AND DETECTION
    Zhang, Chenglong
    Ran, Qinglin
    Wei, Wei
    Ding, Chen
    Zhang, Lei
    [J]. IGARSS 2023 - 2023 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM, 2023, : 6534 - 6537
  • [2] VIFNet: An end-to-end visible-infrared fusion network for image dehazing
    Yu, Meng
    Cui, Te
    Lu, Haoyang
    Yue, Yufeng
    [J]. NEUROCOMPUTING, 2024, 599
  • [3] End-to-End Infrared and Visible Image Fusion Method Based on GhostNet
    Cheng C.
    Wu X.
    Xu T.
    [J]. Moshi Shibie yu Rengong Zhineng/Pattern Recognition and Artificial Intelligence, 2021, 34 (11): : 1028 - 1037
  • [4] FDNet: An end-to-end fusion decomposition network for infrared and visible images
    Di, Jing
    Ren, Li
    Liu, Jizhao
    Guo, Wenqing
    Zhange, Huaikun
    Liu, Qidong
    Lian, Jing
    [J]. PLOS ONE, 2023, 18 (09):
  • [5] An end-to-end multi-scale network based on autoencoder for infrared and visible image fusion
    Liu, Hongzhe
    Yan, Hua
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (13) : 20139 - 20156
  • [6] An end-to-end based on semantic region guidance for infrared and visible image fusion
    Han, Guijin
    Zhang, Xinyuan
    Huang, Ya
    [J]. SIGNAL IMAGE AND VIDEO PROCESSING, 2024, 18 (01) : 295 - 303
  • [7] Siam-AUnet: An end-to-end infrared and visible image fusion network based on gray histogram
    Yang, Xingkang
    Li, Yang
    Li, Dianlong
    Wang, Shaolong
    Yang, Zhe
    [J]. INFRARED PHYSICS & TECHNOLOGY, 2024, 141
  • [8] MEFuse: end-to-end infrared and visible image fusion method based on multibranch encoder
    Hong, Yulu
    Wu, Xiao-Jun
    Xu, Tianyang
    [J]. JOURNAL OF ELECTRONIC IMAGING, 2022, 31 (03)