Infrared and visible image fusion based on residual dense network and gradient loss

Cited by: 14
Authors
Li, Jiawei [1 ]
Liu, Jinyuan [2 ]
Zhou, Shihua [1 ]
Zhang, Qiang [1 ,3 ]
Kasabov, Nikola K. [4 ,5 ]
Affiliations
[1] Dalian Univ, Sch Software Engn, Key Lab Adv Design & Intelligent Comp, Minist Educ, Dalian, Peoples R China
[2] Dalian Univ Technol, Sch Mech Engn, Dalian 116024, Peoples R China
[3] Dalian Univ Technol, Sch Comp Sci & Technol, Dalian 116024, Peoples R China
[4] Auckland Univ Technol, Knowledge Engn & Discovery Res Inst, Auckland 1010, New Zealand
[5] Ulster Univ, Intelligent Syst Res Ctr, Londonderry BT52 1SA, North Ireland
Funding
National Natural Science Foundation of China;
Keywords
Image fusion; Unsupervised learning; End-to-end model; Infrared image; Visible image; MULTI-FOCUS; TRANSFORM;
DOI
10.1016/j.infrared.2022.104486
CLC number
TH7 [Instruments and meters];
Subject classification codes
0804 ; 080401 ; 081102 ;
Abstract
Deep learning has made great progress in the field of image fusion. Compared with traditional methods, deep-learning-based image fusion requires no cumbersome matrix operations. In this paper, an end-to-end model for infrared and visible image fusion is proposed. This unsupervised learning network architecture does not employ a hand-crafted fusion strategy. In the feature extraction stage, residual dense blocks are used to generate a fused image that preserves the information of the source images to the greatest extent. In the feature reconstruction stage, shallow feature maps, residual dense information, and deep feature maps are merged to build the fused result. The gradient loss we propose for the network cooperates well with special weight blocks extracted from the input images to express texture details in the fused images more clearly. In the training phase, we select 20 source image pairs with obvious characteristics from the TNO dataset and expand them by random cropping to serve as the training dataset of the network. Subjective qualitative and objective quantitative results show that the proposed model has advantages over state-of-the-art methods in the task of infrared and visible image fusion. We also conduct ablation experiments on the RoadScene dataset to verify the effectiveness of the proposed network for infrared and visible image fusion.
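The abstract's exact formulation of the gradient loss is not given here, but a common pattern in infrared/visible fusion is to push the fused image's gradients toward the strongest gradient present in either source, so edges from both modalities survive. The sketch below is a minimal NumPy illustration of that idea, assuming forward-difference gradients and an element-wise-maximum target; the function names and the L1 penalty are illustrative, not the paper's actual definition.

```python
import numpy as np

def gradient_magnitude(img):
    """Forward-difference approximation of the spatial gradient magnitude."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]   # horizontal differences
    gy[:-1, :] = img[1:, :] - img[:-1, :]   # vertical differences
    return np.abs(gx) + np.abs(gy)

def gradient_loss(fused, ir, vis):
    """Hypothetical gradient loss: penalize deviation of the fused image's
    gradients from the element-wise maximum of the source gradients, so the
    sharper texture from either input is preserved."""
    target = np.maximum(gradient_magnitude(ir), gradient_magnitude(vis))
    return float(np.mean(np.abs(gradient_magnitude(fused) - target)))
```

If the fused image reproduces the source with the stronger edges everywhere, this loss is zero; a fused image that smooths those edges away is penalized in proportion to the lost gradient energy.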
Pages: 11
Related papers
(50 in total; first 10 listed)
  • [1] Infrared and Visible Image Fusion Based on Dual Channel Residual Dense Network
    Feng Xin
    Yang Jieming
    Zhang Hongde
    Qiu Guohang
    [J]. ACTA PHOTONICA SINICA, 2023, 52 (11)
  • [2] RXDNFuse: A aggregated residual dense network for infrared and visible image fusion
    Long, Yongzhi
    Jia, Haitao
    Zhong, Yida
    Jiang, Yadong
    Jia, Yuming
    [J]. INFORMATION FUSION, 2021, 69 : 128 - 141
  • [3] Infrared and Visible Image Fusion Based on Residual Dense Block and Auto-Encoder Network
    Wang, Jianzhong
    Xu, Haonan
    Wang, Hongfeng
    Yu, Zibo
    [J]. Beijing Ligong Daxue Xuebao/Transaction of Beijing Institute of Technology, 2021, 41 (10): : 1077 - 1083
  • [4] Infrared and visible image fusion with improved residual dense generative adversarial network
    Min, Li
    Cao, Si-Jian
    Zhao, Huai-Ci
    Liu, Peng-Fei
    Tai, Bing-Chang
    [J]. Kongzhi yu Juece/Control and Decision, 2023, 38 (03): : 721 - 728
  • [5] SCGRFuse: An infrared and visible image fusion network based on spatial/channel attention mechanism and gradient aggregation residual dense blocks
    Wang, Yong
    Pu, Jianfei
    Miao, Duoqian
    Zhang, L.
    Zhang, Lulu
    Du, Xin
    [J]. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 132
  • [6] Infrared and visible image fusion based on dilated residual attention network
    Mustafa, Hafiz Tayyab
    Yang, Jie
    Mustafa, Hamza
    Zareapoor, Masoumeh
    [J]. OPTIK, 2020, 224
  • [7] LMDFusion: A lightweight infrared and visible image fusion network for substation equipment based on mask and residual dense connection
    Hao, Chi
    Delin, Luo
    Song, Wang
    [J]. INFRARED PHYSICS & TECHNOLOGY, 2024, 138
  • [8] FusionGRAM: An Infrared and Visible Image Fusion Framework Based on Gradient Residual and Attention Mechanism
    Wang, Jinxin
    Xi, Xiaoli
    Li, Dongmei
    Li, Fang
    [J]. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [9] RAN: Infrared and Visible Image Fusion Network Based on Residual Attention Decomposition
    Yu, Jia
    Lu, Gehao
    Zhang, Jie
    [J]. ELECTRONICS, 2024, 13 (14)
  • [10] Infrared and Visible Image Fusion with a Generative Adversarial Network and a Residual Network
    Xu, Dongdong
    Wang, Yongcheng
    Xu, Shuyan
    Zhu, Kaiguang
    Zhang, Ning
    Zhang, Xin
    [J]. APPLIED SCIENCES-BASEL, 2020, 10 (02):