SFCFusion: Spatial-Frequency Collaborative Infrared and Visible Image Fusion

Cited by: 6
Authors
Chen, Hanrui [1 ,2 ,3 ]
Deng, Lei [1 ,2 ,3 ]
Chen, Zhixiang [4 ]
Liu, Chenhua [1 ,2 ,3 ]
Zhu, Lianqing [1 ,2 ,3 ]
Dong, Mingli [1 ,2 ,3 ]
Lu, Xitian [1 ,2 ,3 ]
Guo, Chentong [1 ,2 ,3 ]
Affiliations
[1] Beijing Informat Sci & Technol Univ, Minist Educ Optoelect Measurement Technol & Instru, Key Lab, Beijing 100192, Peoples R China
[2] Beijing Informat Sci & Technol Univ, Beijing Lab Opt Fiber Sensing & Syst, Beijing 100192, Peoples R China
[3] Guangzhou Nansha Intelligent Photon Sensing Res Inst, Guangzhou 511462, Guangdong, Peoples R China
[4] Univ Sheffield, Dept Comp Sci, Sheffield S1 4DP, England
Keywords
Deep learning; image fusion; multiscale transformation (MST); spatial-frequency; visible-infrared image; NONSUBSAMPLED CONTOURLET TRANSFORM; WAVELET; NETWORK;
DOI
10.1109/TIM.2024.3370752
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Subject Classification Codes
0808; 0809
Abstract
Infrared images provide prominent targets based on radiation differences, making them suitable for use in both day and night conditions. Visible images, on the other hand, offer texture details with high spatial resolution. Infrared and visible image fusion therefore promises to achieve the best of both. Conventional frequency- or spatial-domain multiscale transformation (MST) methods are good at preserving image details, while deep-learning-based methods have become increasingly popular in image fusion because they can preserve high-level semantic features. To tackle the challenge of extracting and fusing cross-modality and cross-domain information, we propose a spatial-frequency collaborative fusion (SFCFusion) framework that effectively fuses spatial and frequency information in the feature space. In the frequency domain, the source images are decomposed into base and detail layers with existing frequency decomposition methods. In the spatial domain, a kernel-based saliency generation module is designed to preserve region-level structural information. A deep-learning-based encoder extracts features from the source images, the decomposed images, and the saliency maps. In the shared feature space, cross-modality fusion is achieved through our proposed adaptive fusion scheme. We have conducted experiments comparing SFCFusion with both conventional and deep-learning approaches on the TNO, LLVIP, and M3FD datasets. The qualitative and quantitative evaluation results demonstrate the effectiveness of SFCFusion, and we further demonstrate its superiority in a downstream detection task. Our code will be available at https://github.com/ChenHanrui430/SFCFusion.
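The abstract outlines a pipeline of frequency decomposition into base and detail layers, kernel-based saliency generation, a shared deep encoder, and adaptive fusion in feature space. The following PyTorch sketch illustrates one plausible instantiation of these steps under stated assumptions; the box-filter decomposition, the local-contrast saliency kernel, the tiny encoder, and the saliency-weighted fusion rule are all illustrative choices and are not the authors' released SFCFusion implementation (see the linked repository for that).

```python
# Minimal, illustrative sketch of a spatial-frequency collaborative fusion
# pipeline as described in the abstract. All module choices (filter sizes,
# encoder depth, fusion rule) are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


def frequency_decompose(img: torch.Tensor, ksize: int = 31):
    """Split an image into a low-frequency base layer and a high-frequency
    detail layer using a box filter (a simple stand-in for an MST method)."""
    pad = ksize // 2
    kernel = torch.ones(1, 1, ksize, ksize, device=img.device) / (ksize * ksize)
    base = F.conv2d(img, kernel, padding=pad)   # low-frequency base layer
    detail = img - base                         # high-frequency detail layer
    return base, detail


def kernel_saliency(img: torch.Tensor, ksize: int = 9) -> torch.Tensor:
    """Kernel-based saliency: local contrast measured as the absolute
    difference between each pixel and its neighborhood mean."""
    pad = ksize // 2
    kernel = torch.ones(1, 1, ksize, ksize, device=img.device) / (ksize * ksize)
    local_mean = F.conv2d(img, kernel, padding=pad)
    return torch.abs(img - local_mean)


class TinyEncoder(nn.Module):
    """Placeholder convolutional encoder shared across both modalities."""
    def __init__(self, ch: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


def adaptive_fuse(feat_ir, feat_vis, sal_ir, sal_vis, eps: float = 1e-6):
    """Saliency-weighted soft fusion of encoder features: one plausible
    form of an 'adaptive fusion scheme' in the shared feature space."""
    w_ir = sal_ir / (sal_ir + sal_vis + eps)
    return w_ir * feat_ir + (1.0 - w_ir) * feat_vis


if __name__ == "__main__":
    ir = torch.rand(1, 1, 128, 128)    # stand-in infrared image
    vis = torch.rand(1, 1, 128, 128)   # stand-in visible image

    base_ir, detail_ir = frequency_decompose(ir)
    base_vis, detail_vis = frequency_decompose(vis)
    sal_ir, sal_vis = kernel_saliency(ir), kernel_saliency(vis)

    # Encode the sources and their frequency components with the shared
    # encoder, fuse each stream with saliency-derived weights, then merge.
    encoder = TinyEncoder()
    fused_src = adaptive_fuse(encoder(ir), encoder(vis), sal_ir, sal_vis)
    fused_base = adaptive_fuse(encoder(base_ir), encoder(base_vis), sal_ir, sal_vis)
    fused_detail = adaptive_fuse(encoder(detail_ir), encoder(detail_vis), sal_ir, sal_vis)
    fused_feat = fused_src + fused_base + fused_detail  # naive collaborative merge
    print(fused_feat.shape)  # torch.Size([1, 16, 128, 128])
```

In the actual framework, a learned decoder would reconstruct the fused image from the fused features; the additive merge above is only a placeholder for that step.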
Pages: 1-15
Number of pages: 15
Related Papers
50 records in total
  • [41] Infrared and visible image fusion for face recognition
    Singh, S
    Gyaourova, A
    Bebis, G
    Pavlidis, I
    BIOMETRIC TECHNOLOGY FOR HUMAN IDENTIFICATION, 2004, 5404 : 585 - 596
  • [42] Infrared and Visible Image Fusion in Realistic Streetscape
    Huang, Yudong
    Xu, Wei
    Tan, Hanlin
    Long, Xin
    Ben, Zongcheng
    PROCEEDINGS OF ICRCA 2018: 2018 THE 3RD INTERNATIONAL CONFERENCE ON ROBOTICS, CONTROL AND AUTOMATION / ICRMV 2018: 2018 THE 3RD INTERNATIONAL CONFERENCE ON ROBOTICS AND MACHINE VISION, 2018, : 173 - 179
  • [43] Reflectance estimation for infrared and visible image fusion
    Gu, Yan
    Yang, Feng
    Zhao, Weijun
    Guo, Yiliang
    Min, Chaobo
    KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS, 2021, 15 (08): : 2749 - 2763
  • [44] Semantic Guided Infrared and Visible Image Fusion
    Wu, Wei
    Zhang, Dazhi
    Hou, Jilei
    Wang, Yu
    Lu, Tao
    Zhou, Huabing
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES, 2021, E104A (12) : 1733 - 1738
  • [45] VIFB: A Visible and Infrared Image Fusion Benchmark
    Zhang, Xingchen
    Ye, Ping
    Xiao, Gang
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2020), 2020, : 468 - 478
  • [46] DCFusion: A Dual-Frequency Cross-Enhanced Fusion Network for Infrared and Visible Image Fusion
    Wu, Dan
    Han, Mina
    Yang, Yang
    Zhao, Shan
    Rao, Yujing
    Li, Hao
    Lin, Xing
    Zhou, Chengjiang
    Bai, Haicheng
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [47] Denoiser Learning for Infrared and Visible Image Fusion
    Liu, Jinyang
    Li, Shutao
    Tan, Lishan
    Dian, Renwei
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024,
  • [48] Interocular differences in contrast and spatial frequency: effects on stereopsis and fusion
    Schor, C
    Heckmann, T
    VISION RESEARCH, 1989, 29 (07) : 837
  • [49] Infrared and visible image fusion via mixed-frequency hierarchical guided learning
    Zhang, Pengjun
    Jin, Wei
    Gong, Zhaohui
    Zhang, Zejian
    Wu, Zhiwei
    INFRARED PHYSICS & TECHNOLOGY, 2023, 135
  • [50] MSFNet: MultiStage Fusion Network for infrared and visible image fusion
    Wang, Chenwu
    Wu, Junsheng
    Zhu, Zhixiang
    Chen, Hao
    NEUROCOMPUTING, 2022, 507 : 26 - 39