Infrared and visible image fusion based on residual dense network and gradient loss

Cited: 14
Authors
Li, Jiawei [1 ]
Liu, Jinyuan [2 ]
Zhou, Shihua [1 ]
Zhang, Qiang [1 ,3 ]
Kasabov, Nikola K. [4 ,5 ]
Affiliations
[1] Dalian Univ, Sch Software Engn, Key Lab Adv Design & Intelligent Comp, Minist Educ, Dalian, Peoples R China
[2] Dalian Univ Technol, Sch Mech Engn, Dalian 116024, Peoples R China
[3] Dalian Univ Technol, Sch Comp Sci & Technol, Dalian 116024, Peoples R China
[4] Auckland Univ Technol, Knowledge Engn & Discovery Res Inst, Auckland 1010, New Zealand
[5] Ulster Univ, Intelligent Syst Res Ctr, Londonderry BT52 1SA, Northern Ireland
Funding
National Natural Science Foundation of China;
Keywords
Image fusion; Unsupervised learning; End-to-end model; Infrared image; Visible image; MULTI-FOCUS; TRANSFORM;
DOI
10.1016/j.infrared.2022.104486
Chinese Library Classification
TH7 [Instruments and Instrumentation];
Discipline Classification Codes
0804 ; 080401 ; 081102 ;
Abstract
Deep learning has made great progress in the field of image fusion. Compared with traditional methods, image fusion approaches based on deep learning require no cumbersome matrix operations. In this paper, an end-to-end model for infrared and visible image fusion is proposed. This unsupervised network architecture does not employ a handcrafted fusion strategy. In the feature-extraction stage, residual dense blocks are used to generate a fused image that preserves the information of the source images to the greatest extent. In the feature-reconstruction stage, shallow feature maps, residual dense information, and deep feature maps are merged to build the fused result. The proposed gradient loss cooperates with weight blocks extracted from the input images so that texture details are expressed more clearly in the fused images. In the training phase, we select 20 source image pairs with obvious characteristics from the TNO dataset and expand them by random cropping to serve as the training set for the network. Subjective qualitative and objective quantitative results show that the proposed model outperforms state-of-the-art methods on infrared and visible image fusion tasks. We also conduct ablation experiments on the RoadScene dataset to verify the effectiveness of the proposed network for infrared and visible image fusion.
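The abstract does not give the exact form of the gradient loss. As a minimal sketch, one common formulation penalizes the distance between the fused image's gradient magnitude and the per-pixel maximum of the two source gradients; the central-difference gradient and the L1 penalty below are illustrative assumptions, not necessarily the authors' exact design:

```python
import numpy as np

def grad_mag(img):
    # Gradient magnitude via central differences (np.gradient
    # returns derivatives along each axis: rows first, then columns).
    gy, gx = np.gradient(img.astype(np.float64))
    return np.hypot(gx, gy)

def gradient_loss(fused, ir, vis):
    # Encourage the fused image's gradients to match the stronger
    # of the two source gradients at each pixel (L1 penalty),
    # which pushes texture detail from either source into the result.
    target = np.maximum(grad_mag(ir), grad_mag(vis))
    return float(np.mean(np.abs(grad_mag(fused) - target)))
```

For example, a fused image identical to both sources incurs zero loss, while a flat fused image is penalized by the mean source gradient magnitude; in the paper's end-to-end setting this term would be computed on network outputs and combined with the weight blocks described above.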
Pages: 11
Related papers
50 records in total
  • [21] Infrared and Visible Image Fusion Based on Gradient Transfer Optimization Model
    Yu, Ruixing
    Chen, Weiyu
    Zhou, Daming
    IEEE ACCESS, 2020, 8 : 50091 - 50106
  • [22] Infrared and visible image fusion based on double fluid pyramids and multi-scale gradient residual block
    Pang, Shan
    Huo, Hongtao
    Yang, Xin
    Li, Jing
    Liu, Xiaowen
    INFRARED PHYSICS & TECHNOLOGY, 2023, 131
  • [23] Infrared and visible image fusion based on global context network
    Li, Yonghong
    Shi, Yu
    Pu, Xingcheng
    Zhang, Suqiang
    JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (05)
  • [24] MGRCFusion: An infrared and visible image fusion network based on multi-scale group residual convolution
    Zhu, Pan
    Yin, Yufei
    Zhou, Xinglin
    OPTICS AND LASER TECHNOLOGY, 2025, 180
  • [25] Infrared and Visible Image Fusion Based on Co-gradient Edge-attention Gate Network
    Wang, Jie
    Li, Xuan
    Chen, Rongfu
    Zhang, Guomin
    Feng, Zhaoming
    Ding, Yifan
    2024 9TH INTERNATIONAL CONFERENCE ON CONTROL AND ROBOTICS ENGINEERING, ICCRE 2024, 2024, : 339 - 344
  • [26] Infrared and visible image fusion based on particle swarm optimization and dense block
    Zhang, Jing
    Tang, Bingjin
    Hu, Shuai
    FRONTIERS IN ENERGY RESEARCH, 2022, 10
  • [27] DTFusion: Infrared and Visible Image Fusion Based on Dense Residual PConv-ConvNeXt and Texture-Contrast Compensation
    Zhou, Xinzhi
    He, Min
    Zhou, Dongming
    Xu, Feifei
    Jeon, Seunggil
    SENSORS, 2024, 24 (01)
  • [28] SDRSwin: A Residual Swin Transformer Network with Saliency Detection for Infrared and Visible Image Fusion
    Li, Shengshi
    Wang, Guanjun
    Zhang, Hui
    Zou, Yonghua
    REMOTE SENSING, 2023, 15 (18)
  • [29] Maritime Infrared and Visible Image Fusion Based on Refined Features Fusion and Sobel Loss
    Gao, Zongjiang
    Zhu, Feixiang
    Chen, Haili
    Ma, Baoshan
    PHOTONICS, 2022, 9 (08)
  • [30] Infrared and visible image fusion algorithm based on split-attention residual networks
    Qian K.
    Li T.
    Li Z.
    Chen M.
    Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University, 2022, 40 (06): : 1404 - 1413