Infrared and Visible Image Fusion Using a Deep Unsupervised Framework With Perceptual Loss

Cited by: 11
Authors
Xu, Dongdong [1 ]
Wang, Yongcheng [1 ]
Zhang, Xin [1 ,2 ]
Zhang, Ning [1 ,2 ]
Yu, Sibo [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Changchun Inst Opt Fine Mech & Phys, Changchun 130033, Peoples R China
[2] Univ Chinese Acad Sci, Coll Mat Sci & Optoelect Technol, Beijing 100049, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Image fusion; Feature extraction; Training; Convolution; Kernel; Data mining; Deep learning; Infrared and visible images; deep learning; unsupervised image fusion; densely connected convolutional network; perceptual loss; MULTISCALE-DECOMPOSITION; PERFORMANCE;
DOI
10.1109/ACCESS.2020.3037770
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
The fusion of infrared and visible images can exploit the indication characteristics and the textural details of the source images to realize all-weather detection. Deep learning (DL) based fusion solutions can reduce computational cost and complexity compared with traditional methods, since there is no need to design complex feature extraction methods and fusion rules. However, there are no standard reference images, and publicly available infrared and visible image pairs are scarce. Most supervised DL-based solutions therefore have to be pre-trained on other large labeled datasets and may not perform well at test time, while the few existing unsupervised fusion methods can hardly obtain ideal images with a good visual impression. In this paper, an infrared and visible image fusion method based on an unsupervised convolutional neural network is proposed. In the network structure, a densely connected convolutional network (DenseNet) is used as the sub-network for feature extraction and reconstruction, so that more information from the source images can be retained in the fused images. For the loss function, a perceptual loss is introduced and combined with the structural similarity loss to constrain the updating of the weight parameters during backpropagation. The designed perceptual loss effectively improves the visual information fidelity (VIF) of the fused image. Experimental results show that this method can obtain fused images with prominent targets and clear details. Compared with seven other traditional and deep learning methods, its fusion results are better overall in both objective evaluation and visual observation.
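To make the loss design described above concrete, the following is a minimal PyTorch sketch of a combined structural-similarity and perceptual loss, assuming a frozen VGG-16 feature extractor, a simplified uniform-window SSIM, and an illustrative weighting. The names FusionLoss, perceptual_weight, and feature_layer, the chosen VGG layer, and the way the two source images are balanced are hypothetical assumptions for illustration, not the exact formulation used in the paper.

```python
# Sketch of an SSIM + perceptual loss for unsupervised IR/visible fusion.
# Assumptions (not from the paper): VGG-16 conv2_2 features, equal SSIM terms
# for the two source images, weight 0.1 on the perceptual term, and no
# ImageNet normalization before the VGG network.
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16


def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2, window=11):
    """Simplified single-scale SSIM using uniform local windows."""
    pad = window // 2
    mu_x = F.avg_pool2d(x, window, 1, pad)
    mu_y = F.avg_pool2d(y, window, 1, pad)
    sigma_x = F.avg_pool2d(x * x, window, 1, pad) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, window, 1, pad) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, window, 1, pad) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return (num / den).mean()


class FusionLoss(nn.Module):
    """L = (1 - SSIM(fused, ir)) + (1 - SSIM(fused, vis)) + w * perceptual."""

    def __init__(self, perceptual_weight=0.1, feature_layer=8):
        super().__init__()
        # Frozen VGG-16 feature extractor up to an assumed intermediate layer.
        self.vgg = vgg16(weights="IMAGENET1K_V1").features[:feature_layer].eval()
        for p in self.vgg.parameters():
            p.requires_grad = False
        self.perceptual_weight = perceptual_weight

    def _features(self, img):
        # Replicate single-channel inputs to 3 channels for the VGG input.
        return self.vgg(img.repeat(1, 3, 1, 1) if img.shape[1] == 1 else img)

    def forward(self, fused, ir, vis):
        ssim_term = (1 - ssim(fused, ir)) + (1 - ssim(fused, vis))
        f_fused = self._features(fused)
        perceptual = F.mse_loss(f_fused, self._features(ir)) + \
                     F.mse_loss(f_fused, self._features(vis))
        return ssim_term + self.perceptual_weight * perceptual


# Usage (assumed single-channel tensors in [0, 1]):
#   loss_fn = FusionLoss()
#   loss = loss_fn(fused, ir, vis); loss.backward()
```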
Pages: 206445-206458
Number of pages: 14
Related Papers
50 records in total
  • [41] Infrared and Visible Image Fusion Techniques Based on Deep Learning: A Review
    Sun, Changqi
    Zhang, Cong
    Xiong, Naixue
    ELECTRONICS, 2020, 9 (12) : 1 - 24
  • [42] Infrared and visible image fusion with deep wavelet-dense network
    Chen, Yanling
    Cheng, Lianglun
    Wu, Heng
    Chen, Ziyang
    Li, Feng
    OPTICA APPLICATA, 2023, 53 (01) : 49 - 64
  • [43] Infrared and visible image fusion using total variation model
    Ma, Yong
    Chen, Jun
    Chen, Chen
    Fan, Fan
    Ma, Jiayi
    NEUROCOMPUTING, 2016, 202 : 12 - 19
  • [44] Adjustable Visible and Infrared Image Fusion
    Wu, Boxiong
    Nie, Jiangtao
    Wei, Wei
    Zhang, Lei
    Zhang, Yanning
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (12) : 13463 - 13477
  • [45] Fusion of Visible and Infrared Image Using Adaptive Tetrolet Transform
    Liu, Kaifeng
    Yuan, Baohong
    Zhang, Dexiang
    Zhang, Jingjing
    PROCEEDINGS OF THE 2015 4TH INTERNATIONAL CONFERENCE ON COMPUTER, MECHATRONICS, CONTROL AND ELECTRONIC ENGINEERING (ICCMCEE 2015), 2015, 37 : 814 - 818
  • [46] RESTORABLE VISIBLE AND INFRARED IMAGE FUSION
    Kang, Jihun
    Horita, Daichi
    Tsubota, Koki
    Aizawa, Kiyoharu
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 1560 - 1564
  • [47] Infrared and visible image fusion using structure-transferring fusion method
    Kong, Xiangyu
    Liu, Lei
    Qian, Yunsheng
    Wang, Yan
    INFRARED PHYSICS & TECHNOLOGY, 2019, 98 : 161 - 173
  • [48] Infrared and visible image fusion based on residual dense network and gradient loss
    Li, Jiawei
    Liu, Jinyuan
    Zhou, Shihua
    Zhang, Qiang
    Kasabov, Nikola K.
    INFRARED PHYSICS & TECHNOLOGY, 2023, 128
  • [49] A multi-scale information integration framework for infrared and visible image fusion
    Yang, Guang
    Li, Jie
    Lei, Hanxiao
    Gao, Xinbo
    NEUROCOMPUTING, 2024, 600
  • [50] Infrared and Visible Image Fusion Method Based on Information Enhancement and Mask Loss
    Zhang, Xiaodong
    Wang, Shuo
    Gao, Shaoshu
    Wang, Xinrui
    Zhang, Long
    Guangzi Xuebao/Acta Photonica Sinica, 2024, 53 (09)