VIF-Net: An Unsupervised Framework for Infrared and Visible Image Fusion

Cited by: 152
Authors
Hou, Ruichao [1 ]
Zhou, Dongming [2 ]
Nie, Rencan [2 ]
Liu, Dong [2 ]
Xiong, Lei [3 ]
Guo, Yanbu [2 ]
Yu, Chuanbo [4 ]
Affiliations
[1] Nanjing Univ, Dept Comp Sci & Technol, Nanjing 210023, Peoples R China
[2] Yunnan Univ, Sch Informat, Kunming 650504, Yunnan, Peoples R China
[3] Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen 518055, Peoples R China
[4] Tianjin Univ, Sch Elect & Informat Engn, Tianjin 300072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Unsupervised learning; image fusion; convolutional neural networks; infrared images; visible images; CONTOURLET TRANSFORM; PERFORMANCE;
DOI
10.1109/TCI.2020.2965304
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronics and Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
Visible images provide abundant texture details and environmental information, while infrared images offer night-time visibility and suppress highly dynamic regions; fusing these complementary features from different sensors into a single informative image is therefore a meaningful task. In this article, we propose an unsupervised end-to-end learning framework for infrared and visible image fusion. We first construct a large benchmark training dataset from visible and infrared frames, which addresses the shortage of training data. Because labeled data are unavailable, we design an unsupervised learning process driven by a robust mixed loss function that combines a modified structural similarity (M-SSIM) metric with total variation (TV), allowing the network to adaptively fuse thermal radiation and texture details while suppressing noise interference. In addition, the method is end to end, which avoids hand-crafted fusion rules and reduces computational cost. Extensive experimental results demonstrate that the proposed architecture outperforms state-of-the-art methods in both subjective and objective evaluations.
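The abstract does not give the exact form of the mixed loss, so the following is only a minimal PyTorch-style sketch of such an objective under stated assumptions: a uniform-window SSIM term toward each source image standing in for M-SSIM (the paper's adaptive, window-wise weighting is not reproduced here), a TV penalty on the fused-minus-visible residual, and an illustrative weight `lam`. The function names `ssim`, `tv_loss`, and `mixed_loss` are hypothetical, not the authors' API.

```python
# Sketch of an M-SSIM + TV style fusion loss (assumptions: uniform-window SSIM,
# plain symmetric SSIM terms instead of the paper's adaptive M-SSIM weighting,
# TV computed on the fused-minus-visible residual, illustrative weight lam).
import torch
import torch.nn.functional as F

def ssim(x, y, window=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """Mean SSIM between two single-channel images in [0, 1], uniform window."""
    pad = window // 2
    mu_x = F.avg_pool2d(x, window, stride=1, padding=pad)
    mu_y = F.avg_pool2d(y, window, stride=1, padding=pad)
    sigma_x = F.avg_pool2d(x * x, window, stride=1, padding=pad) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, window, stride=1, padding=pad) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, window, stride=1, padding=pad) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return (num / den).mean()

def tv_loss(fused, visible):
    """Total variation of the residual, encouraging noise/artifact suppression."""
    r = fused - visible
    dh = (r[:, :, 1:, :] - r[:, :, :-1, :]).abs().mean()
    dw = (r[:, :, :, 1:] - r[:, :, :, :-1]).abs().mean()
    return dh + dw

def mixed_loss(fused, infrared, visible, lam=0.1):
    """Unsupervised objective: structural similarity to both sources plus TV."""
    l_ssim = 2.0 - ssim(fused, infrared) - ssim(fused, visible)
    return l_ssim + lam * tv_loss(fused, visible)
```

All tensors are assumed to be of shape (N, 1, H, W) with intensities normalized to [0, 1]; in the paper's formulation the SSIM contribution is weighted adaptively per window (the "M" in M-SSIM), for which the plain sum above is only a placeholder.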
Pages: 640-651
Number of pages: 12
Related Papers (50 records)
  • [1] Infrared and Visible Image Fusion Using a Deep Unsupervised Framework With Perceptual Loss
    Xu, Dongdong
    Wang, Yongcheng
    Zhang, Xin
    Zhang, Ning
    Yu, Sibo
    IEEE ACCESS, 2020, 8 : 206445 - 206458
  • [2] VIF-Net: Interface completion in full waveform inversion using fusion networks
    Deng, Zixuan
    Xu, Qiong
    Min, Fan
    Xiang, Yanping
    Computers and Geosciences, 2025, 196
  • [3] Unsupervised densely attention network for infrared and visible image fusion
    Li, Yang
    Wang, Jixiao
    Miao, Zhuang
    Wang, Jiabao
    MULTIMEDIA TOOLS AND APPLICATIONS, 2020, 79 (45-46) : 34685 - 34696
  • [5] Unsupervised Infrared Image and Visible Image Fusion Algorithm Based on Deep Learning
    Chen Guoyang
    Wu Xiaojun
    Xu Tianyang
    LASER & OPTOELECTRONICS PROGRESS, 2022, 59 (04)
  • [6] Unsupervised Infrared and Visible Image Fusion with Pixel Self-attention
    Cui, Saijia
    Zhou, Zhiqiang
    Li, Linhao
    Fei, Erfang
    PROCEEDINGS OF THE 33RD CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2021), 2021, : 437 - 441
  • [7] A multi-weight fusion framework for infrared and visible image fusion
    Zhou, Yiqiao
    He, Kangjian
    Xu, Dan
    Shi, Hongzhen
    Zhang, Hao
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (27) : 68931 - 68957
  • [8] Infrared and Visible Image Fusion using a Deep Learning Framework
    Li, Hui
    Wu, Xiao-Jun
    Kittler, Josef
    2018 24TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2018, : 2705 - 2710
  • [10] CLF-Net: Contrastive Learning for Infrared and Visible Image Fusion Network
    Zhu, Zhengjie
    Yang, Xiaogang
    Lu, Ruitao
    Shen, Tong
    Xie, Xueli
    Zhang, Tao
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2022, 71