Infrared and visible image fusion via parallel scene and texture learning

Cited by: 26
Authors
Xu, Meilong [1 ]
Tang, Linfeng [1 ]
Zhang, Hao [1 ]
Ma, Jiayi [1 ]
Affiliations
[1] Wuhan Univ, Elect Informat Sch, Wuhan 430072, Peoples R China
Keywords
Image fusion; Infrared; Scene and texture learning; Recurrent neural network;
DOI
10.1016/j.patcog.2022.108929
Chinese Library Classification (CLC) Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Image fusion plays a pivotal role in numerous high-level computer vision tasks. Existing deep learning-based image fusion methods usually perform feature extraction in an implicit manner, so some characteristics of the source images, e.g., contrast and structural information, cannot be fully extracted and integrated into the fused images. In this work, we propose an infrared and visible image fusion method via parallel scene and texture learning. Our key objective is to deploy two branches of deep neural networks, namely the content branch and the detail branch, to synchronously extract different characteristics from the source images and then reconstruct the fused image. The content branch focuses primarily on coarse-grained information and is deployed to estimate the global content of the source images. The detail branch focuses on fine-grained information; in this branch we design an omnidirectional spatially variant recurrent neural network to model the internal structure of the source images more accurately and extract texture-related features in an explicit manner. Extensive experiments show that our approach achieves significant improvements over the state of the art in qualitative and quantitative evaluations while requiring comparatively less running time. Meanwhile, we also demonstrate the superiority of our fused results in the object detection task. Our code is available at: https://github.com/Melon-Xu/PSTLFusion. (c) 2022 Elsevier Ltd. All rights reserved.
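The two-branch design described in the abstract can be illustrated with a minimal PyTorch sketch. Everything below is an illustrative assumption rather than the authors' implementation (the official code is at https://github.com/Melon-Xu/PSTLFusion): the layer widths, the dilated convolutions used in the content branch, and the simplified per-column gated scan standing in for the omnidirectional spatially variant recurrent network are all hypothetical.

```python
# Minimal sketch of parallel scene (content) and texture (detail) learning.
# All layer sizes and the simplified recurrent scan are illustrative
# assumptions, not the authors' implementation (see
# https://github.com/Melon-Xu/PSTLFusion for the official code).
import torch
import torch.nn as nn


class ContentBranch(nn.Module):
    """Coarse-grained branch: estimates the global scene content."""
    def __init__(self, in_ch=2, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            # dilation enlarges the receptive field for global content
            nn.Conv2d(ch, ch, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


class DirectionalScan(nn.Module):
    """Simplified spatially variant recurrence along one horizontal direction:
    a per-pixel gate blends the running state with the current column. This is
    a stand-in for the paper's omni-directional spatially variant RNN."""
    def __init__(self, ch=32):
        super().__init__()
        self.gate = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x, reverse=False):
        g = torch.sigmoid(self.gate(x))        # spatially variant weights
        cols = range(x.shape[-1] - 1, -1, -1) if reverse else range(x.shape[-1])
        h = torch.zeros_like(x[..., 0])
        out = torch.empty_like(x)
        for j in cols:                         # scan column by column
            h = g[..., j] * h + (1 - g[..., j]) * x[..., j]
            out[..., j] = h
        return out


class DetailBranch(nn.Module):
    """Fine-grained branch: scans left-to-right, right-to-left, and (via a
    transpose of H and W) top-to-bottom and bottom-to-top."""
    def __init__(self, in_ch=2, ch=32):
        super().__init__()
        self.embed = nn.Conv2d(in_ch, ch, 3, padding=1)
        self.scan = DirectionalScan(ch)
        self.fuse = nn.Conv2d(4 * ch, ch, 1)

    def forward(self, x):
        f = self.embed(x)
        ft = f.transpose(-1, -2)
        feats = [
            self.scan(f),
            self.scan(f, reverse=True),
            self.scan(ft).transpose(-1, -2),
            self.scan(ft, reverse=True).transpose(-1, -2),
        ]
        return self.fuse(torch.cat(feats, dim=1))


class PSTLSketch(nn.Module):
    """Parallel content/detail feature extraction followed by reconstruction."""
    def __init__(self, ch=32):
        super().__init__()
        self.content = ContentBranch(ch=ch)
        self.detail = DetailBranch(ch=ch)
        self.reconstruct = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),  # fused image in [0, 1]
        )

    def forward(self, ir, vis):
        x = torch.cat([ir, vis], dim=1)        # stack single-channel inputs
        feats = torch.cat([self.content(x), self.detail(x)], dim=1)
        return self.reconstruct(feats)


if __name__ == "__main__":
    ir, vis = torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128)
    print(PSTLSketch()(ir, vis).shape)         # torch.Size([1, 1, 128, 128])
```

Stacking the infrared and visible images along the channel dimension and fusing the two branches' features with a small reconstruction head keeps the sketch compact; the actual fusion strategy and loss design should be taken from the official repository.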
Pages: 14
Related Papers
50 items in total
  • [41] Infrared and visible image fusion via octave Gaussian pyramid framework
    Lei Yan
    Qun Hao
    Jie Cao
    Rizvi Saad
    Kun Li
    Zhengang Yan
    Zhimin Wu
    Scientific Reports, 11
  • [42] TFIV: Multigrained Token Fusion for Infrared and Visible Image via Transformer
    Li, Jing
    Yang, Bin
    Bai, Lu
    Dou, Hao
    Li, Chang
    Ma, Lingfei
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [43] DATFuse: Infrared and Visible Image Fusion via Dual Attention Transformer
    Tang, Wei
    He, Fazhi
    Liu, Yu
    Duan, Yansong
    Si, Tongzhen
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (07) : 3159 - 3172
  • [44] SIEFusion: Infrared and Visible Image Fusion via Semantic Information Enhancement
    Lv, Guohua
    Song, Wenkuo
    Wei, Zhonghe
    Cheng, Jinyong
    Dong, Aimei
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT III, 2024, 14427 : 176 - 187
  • [45] Infrared and Visible Image Fusion via Attention-Based Adaptive Feature Fusion
    Wang, Lei
    Hu, Ziming
    Kong, Quan
    Qi, Qian
    Liao, Qing
    ENTROPY, 2023, 25 (03)
  • [46] Infrared and Visible Image Fusion via Multiscale Receptive Field Amplification Fusion Network
    Ji, Chuanming
    Zhou, Wujie
    Lei, Jingsheng
    Ye, Lv
    IEEE SIGNAL PROCESSING LETTERS, 2023, 30 : 493 - 497
  • [47] Multiscale feature learning and attention mechanism for infrared and visible image fusion
    Gao, Li
    Luo, Delin
    Wang, Song
    SCIENCE CHINA-TECHNOLOGICAL SCIENCES, 2024, 67 (02) : 408 - 422
  • [48] DIVIDUAL: A Disentangled Visible And Infrared Image Fusion Contrastive Learning Method
    Yang, Shaoqi
    He, Dan
    Journal of Applied Science and Engineering, 2025, 28 (05): 955 - 968
  • [49] Multiscale feature learning and attention mechanism for infrared and visible image fusion
    Gao, Li
    Luo, Delin
    Wang, Song
    Science China Technological Sciences, 2024, 67 (02) : 408 - 422
  • [50] Multiscale feature learning and attention mechanism for infrared and visible image fusion
    Gao, Li
    Luo, Delin
    Wang, Song
    Science China Technological Sciences, 2024, 67 : 408 - 422