Infrared and visible image fusion based on residual dense network and gradient loss

Cited by: 14
Authors
Li, Jiawei [1 ]
Liu, Jinyuan [2 ]
Zhou, Shihua [1 ]
Zhang, Qiang [1 ,3 ]
Kasabov, Nikola K. [4 ,5 ]
Affiliations
[1] Dalian Univ, Sch Software Engn, Key Lab Adv Design & Intelligent Comp, Minist Educ, Dalian, Peoples R China
[2] Dalian Univ Technol, Sch Mech Engn, Dalian 116024, Peoples R China
[3] Dalian Univ Technol, Sch Comp Sci & Technol, Dalian 116024, Peoples R China
[4] Auckland Univ Technol, Knowledge Engn & Discovery Res Inst, Auckland 1010, New Zealand
[5] Ulster Univ, Intelligent Syst Res Ctr, Londonderry BT52 1SA, North Ireland
Funding
National Natural Science Foundation of China;
关键词
Image fusion; Unsupervised learning; End-to-end model; Infrared image; Visible image; MULTI-FOCUS; TRANSFORM;
DOI
10.1016/j.infrared.2022.104486
Chinese Library Classification
TH7 [Instruments and Apparatus];
Discipline Classification Codes
0804; 080401; 081102;
Abstract
Deep learning has made great progress in the field of image fusion. Compared with traditional methods, image fusion approaches based on deep learning require no cumbersome matrix operations. In this paper, an end-to-end model for infrared and visible image fusion is proposed. This unsupervised learning network architecture does not employ a hand-crafted fusion strategy. In the feature-extraction stage, residual dense blocks are used to generate the fused image, preserving the information of the source images to the greatest extent. In the feature-reconstruction stage, shallow feature maps, residual dense information, and deep feature maps are merged to build the fused result. The gradient loss we propose for the network cooperates with special weight blocks extracted from the input images to express texture details in the fused images more clearly. In the training phase, we select 20 source image pairs with distinct characteristics from the TNO dataset and expand them by random cropping to serve as the training dataset of the network. Subjective qualitative and objective quantitative results show that the proposed model has advantages over state-of-the-art methods in infrared and visible image fusion. We also conduct ablation experiments on the RoadScene dataset to verify the effectiveness of the proposed network for infrared and visible image fusion.
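The abstract describes a gradient loss that encourages the fused image to reproduce the texture detail of the source images. The paper's exact formulation (and its weight blocks) is not reproduced here; a minimal NumPy sketch of one common version of this idea — matching the fused image's gradients to the pixel-wise stronger source gradient — might look like:

```python
import numpy as np

def image_gradient(img):
    # Forward-difference gradients; the far edge is zero-padded.
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]
    gy[:-1, :] = img[1:, :] - img[:-1, :]
    return gx, gy

def gradient_loss(fused, ir, vis):
    """L1 distance between the fused gradients and, per pixel, the
    source gradient with the larger magnitude (an assumed target;
    the paper's own weighting scheme may differ)."""
    fx, fy = image_gradient(fused)
    ix, iy = image_gradient(ir)
    vx, vy = image_gradient(vis)
    tx = np.where(np.abs(ix) > np.abs(vx), ix, vx)
    ty = np.where(np.abs(iy) > np.abs(vy), iy, vy)
    return np.mean(np.abs(fx - tx)) + np.mean(np.abs(fy - ty))
```

The loss is zero when the fused image already carries the dominant source gradients, so minimizing it during training pushes texture from whichever input is locally sharper into the fused result.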
Pages: 11
Related Papers (50 records)
• [41] MCnet: Multiscale visible image and infrared image fusion network. Sun, Le; Li, Yuhang; Zheng, Min; Zhong, Zhaoyi; Zhang, Yanchun. SIGNAL PROCESSING, 2023, 208.
• [42] Infrared and Visible Image Fusion Method Based on Information Enhancement and Mask Loss. Zhang, Xiaodong; Wang, Shuo; Gao, Shaoshu; Wang, Xinrui; Zhang, Long. Guangzi Xuebao/Acta Photonica Sinica, 2024, 53(09).
• [43] SEDRFuse: A Symmetric Encoder-Decoder With Residual Block Network for Infrared and Visible Image Fusion. Jian, Lihua; Yang, Xiaomin; Liu, Zheng; Jeon, Gwanggil; Gao, Mingliang; Chisholm, David. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2021, 70.
• [44] An Efficient Network Model for Visible and Infrared Image Fusion. Pan, Zhu; Ouyang, Wanqi. IEEE ACCESS, 2023, 11: 86413-86430.
• [45] Multigrained Attention Network for Infrared and Visible Image Fusion. Li, Jing; Huo, Hongtao; Li, Chang; Wang, Renhua; Sui, Chenhong; Liu, Zhao. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2021, 70.
• [46] CFNet: An infrared and visible image compression fusion network. Xing, Mengliang; Liu, Gang; Tang, Haojie; Qian, Yao; Zhang, Jun. PATTERN RECOGNITION, 2024, 156.
• [47] SimpliFusion: a simplified infrared and visible image fusion network. Liu, Yong; Li, Xingyuan; Liu, Yong; Zhong, Wei. VISUAL COMPUTER, 2024.
• [48] Infrared and Visible Image Fusion via Decoupling Network. Wang, Xue; Guan, Zheng; Yu, Shishuang; Cao, Jinde; Li, Ya. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2022, 71.
• [49] Image Fusion Method for Infrared and Visible Light Images based on SWT and Regional Gradient. Deng, Yi; Li, Chanfei; Zhang, Zili; Wang, Dan. 2017 IEEE 3RD INFORMATION TECHNOLOGY AND MECHATRONICS ENGINEERING CONFERENCE (ITOEC), 2017: 976-979.
• [50] StyleFuse: An unsupervised network based on style loss function for infrared and visible fusion. Cheng, Chen; Sun, Cheng; Sun, Yongqi; Zhu, Jiahui. SIGNAL PROCESSING-IMAGE COMMUNICATION, 2022, 106.