Infrared and visible image fusion based on residual dense network and gradient loss

Cited by: 14
Authors
Li, Jiawei [1 ]
Liu, Jinyuan [2 ]
Zhou, Shihua [1 ]
Zhang, Qiang [1 ,3 ]
Kasabov, Nikola K. [4 ,5 ]
Affiliations
[1] Dalian Univ, Sch Software Engn, Key Lab Adv Design & Intelligent Comp, Minist Educ, Dalian, Peoples R China
[2] Dalian Univ Technol, Sch Mech Engn, Dalian 116024, Peoples R China
[3] Dalian Univ Technol, Sch Comp Sci & Technol, Dalian 116024, Peoples R China
[4] Auckland Univ Technol, Knowledge Engn & Discovery Res Inst, Auckland 1010, New Zealand
[5] Ulster Univ, Intelligent Syst Res Ctr, Londonderry BT52 1SA, North Ireland
Funding
National Natural Science Foundation of China;
Keywords
Image fusion; Unsupervised learning; End-to-end model; Infrared image; Visible image; MULTI-FOCUS; TRANSFORM;
DOI
10.1016/j.infrared.2022.104486
Chinese Library Classification
TH7 [Instruments and Meters];
Subject Classification Codes
0804 ; 080401 ; 081102 ;
Abstract
Deep learning has made great progress in the field of image fusion. Compared with traditional methods, image fusion approaches based on deep learning require no cumbersome matrix operations. In this paper, an end-to-end model for infrared and visible image fusion is proposed. This unsupervised network architecture does not employ a handcrafted fusion strategy. In the feature-extraction stage, residual dense blocks are used to generate a fused image that preserves the information of the source images to the greatest extent. In the feature-reconstruction stage, shallow feature maps, residual dense information, and deep feature maps are merged to build the fused result. The gradient loss we propose for the network cooperates with weight blocks extracted from the input images to express texture details in the fused images more clearly. In the training phase, we select 20 source image pairs with obvious characteristics from the TNO dataset and expand them by random cropping to serve as the training dataset. Subjective qualitative and objective quantitative results show that the proposed model has advantages over state-of-the-art methods on the task of infrared and visible image fusion. We also conduct ablation experiments on the RoadScene dataset to verify the effectiveness of the proposed network.
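The two concrete components the abstract names are the residual dense block used for feature extraction and the gradient loss used to sharpen texture. As a rough illustration only, the following is a minimal PyTorch sketch of both; the channel counts, growth rate, Sobel-based gradient operator, and max-of-source-gradients target are assumptions made for illustration, not the authors' exact design (the paper pairs its gradient loss with weight blocks extracted from the inputs, which this sketch simplifies away).

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualDenseBlock(nn.Module):
    """Densely connected 3x3 convolutions with a local residual connection."""
    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        # Each layer sees the block input plus all previous layer outputs.
        self.convs = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth, 3, padding=1)
            for i in range(num_layers))
        # A 1x1 convolution fuses the concatenated features back to `channels`.
        self.fuse = nn.Conv2d(channels + num_layers * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(F.relu(conv(torch.cat(feats, dim=1))))
        return x + self.fuse(torch.cat(feats, dim=1))  # local residual

def sobel_gradient(img):
    """Gradient magnitude of a single-channel image via fixed Sobel kernels."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    return F.conv2d(img, kx, padding=1).abs() + F.conv2d(img, ky, padding=1).abs()

def gradient_loss(fused, ir, vis):
    """L1 distance between the fused image's gradients and the element-wise
    maximum of the source gradients (one plausible sharpness target)."""
    target = torch.maximum(sobel_gradient(ir), sobel_gradient(vis))
    return F.l1_loss(sobel_gradient(fused), target)

For grayscale inputs of shape (N, 1, H, W), gradient_loss(fused, ir, vis) would be added to an intensity or structure term during training; taking the element-wise maximum of the source gradients encourages the fused image to keep the stronger edge from either source.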
Pages: 11
Related Papers
50 records in total
  • [31] Infrared and visible image fusion based on gradient transfer and auto-encoder
    Li, Yan-Feng
    Liu, Ming-Yang
    Hu, Jia-Ming
    Sun, Hua-Dong
    Meng, Jie-Yu
    Wang, Ao-Ying
    Zhang, Han-Yue
    Yang, Hua-Min
    Han, Kai-Xu
Jilin Daxue Xuebao (Gongxueban)/Journal of Jilin University (Engineering and Technology Edition), 2024, 54 (06): 1777-1787
  • [32] Infrared and visible image fusion based on alternating gradient filter and improved PCNN
    Yang Y.
    Pei P.
    Dang J.
    Wang Y.
Guangxue Jingmi Gongcheng/Optics and Precision Engineering, 2022, 30 (09): 1123-1138
  • [33] Infrared and visible image fusion based on VPDE model and VGG network
    Luo, Donghua
    Liu, Gang
    Bavirisetti, Durga Prasad
    Cao, Yisheng
APPLIED INTELLIGENCE, 2023, 53 (21): 24739-24764
  • [34] RADFNet: An infrared and visible image fusion framework based on distributed network
    Feng, Siling
    Wu, Can
    Lin, Cong
    Huang, Mengxing
    FRONTIERS IN PLANT SCIENCE, 2023, 13
  • [36] Fully convolutional network-based infrared and visible image fusion
    Feng, Yufang
    Lu, Houqing
    Bai, Jingbo
    Cao, Lin
    Yin, Hong
MULTIMEDIA TOOLS AND APPLICATIONS, 2020, 79 (21-22): 15001-15014
  • [37] Infrared and Visible Image Fusion Based on Significant Matrix and Neural Network
    Shen Yu
    Chen Xiaopeng
    Yuan Yubin
    Wang Lin
    Zhang Hongguo
    LASER & OPTOELECTRONICS PROGRESS, 2020, 57 (20)
  • [39] Attention based dual UNET network for infrared and visible image fusion
    Wang, Xuejiao
    Hua, Zhen
    Li, Jinjiang
MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (25): 66959-66980
  • [40] MSFNet: MultiStage Fusion Network for infrared and visible image fusion
    Wang, Chenwu
    Wu, Junsheng
    Zhu, Zhixiang
    Chen, Hao
NEUROCOMPUTING, 2022, 507: 26-39