DaCFN: divide-and-conquer fusion network for RGB-T object detection

Cited by: 0
Authors
Bofan Wang
Haitao Zhao
Yi Zhuang
Affiliation
[1] East China University of Science and Technology, School of Information Science and Engineering
Keywords
Channel fusion module; RGB-thermal information; Two-stream structure; Object detection
DOI
Not available
Abstract
Thermal images can help visual images improve object detection performance under low illumination. However, the complementary fusion of visual and thermal features is challenging. In RGB-T object detection, the two-stream network structure has been widely used, in which addition and concatenation operations merge the feature maps. However, addition compacts the two-stream features with inevitable distortion, while direct concatenation may introduce redundancy into the features. In this paper, we show that the addition operation is more suitable for common features shared by RGB and thermal, while the concatenation operation is more suitable for specific features unique to RGB or thermal. We therefore adopt a divide-and-conquer strategy and propose an RGB-T detector named the Divide-and-Conquer Fusion Network (DaCFN), which divides RGB and thermal features into common and specific ones and applies a customized fusion operation to each category. Specifically, we design the Partial Coupling Net Block (PCNB), in which common features are extracted by coupled parameters and specific features by independent ones. The Selective Common Addition (SCA) and the Independent Specific Concatenation (ISC) are then designed to fuse the common and specific features, respectively. Experiments on the FLIR and KAIST datasets demonstrate that our approach achieves high accuracy and high speed compared with other state-of-the-art RGB-T detectors.
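For readers who want the fusion idea at a glance, below is a minimal PyTorch-style sketch of the divide-and-conquer principle stated in the abstract: features produced by coupled (shared) parameters are treated as common and merged by addition, while features produced by independent parameters are treated as specific and merged by concatenation. The module name, channel sizes, the single shared convolution, and the final recombination step are illustrative assumptions, not the paper's actual PCNB/SCA/ISC implementation.

import torch
import torch.nn as nn

class DivideAndConquerFusion(nn.Module):
    """Illustrative sketch of divide-and-conquer RGB-T fusion (not the authors' code)."""

    def __init__(self, channels: int = 256):
        super().__init__()
        # Coupled (shared) convolution: the same weights process both modalities,
        # so its outputs play the role of the "common" features.
        self.shared_conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # Independent convolutions: modality-specific ("specific") features.
        self.rgb_conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.thermal_conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # 1x1 convolution to reduce the concatenated specific features back to `channels`.
        self.reduce = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, rgb: torch.Tensor, thermal: torch.Tensor) -> torch.Tensor:
        # Common features (coupled parameters) are fused by addition.
        common = self.shared_conv(rgb) + self.shared_conv(thermal)
        # Specific features (independent parameters) are fused by concatenation.
        specific = torch.cat([self.rgb_conv(rgb), self.thermal_conv(thermal)], dim=1)
        # Recombining the two fused results this way is an assumption of this sketch.
        return common + self.reduce(specific)

if __name__ == "__main__":
    fusion = DivideAndConquerFusion(channels=256)
    rgb_feat = torch.randn(1, 256, 64, 64)
    thermal_feat = torch.randn(1, 256, 64, 64)
    print(fusion(rgb_feat, thermal_feat).shape)  # torch.Size([1, 256, 64, 64])

Under these assumptions both fusion paths preserve spatial resolution and the output channel count matches the input, so such a block could slot into a standard two-stream detection backbone.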
Published in: International Journal of Machine Learning and Cybernetics, 2023, 14(07)
Pages: 2407-2420
Number of pages: 13