MIPANet: optimizing RGB-D semantic segmentation through multi-modal interaction and pooling attention

Cited by: 0
Authors
Zhang, Shuai [1 ]
Xie, Minghong [1 ]
Affiliations
[1] Kunming Univ Sci & Technol, Fac Informat Engn & Automat, Kunming, Peoples R China
Source
FRONTIERS IN PHYSICS, 2024, Vol. 12
Keywords
RGB-D semantic segmentation; attention mechanism; feature fusion; multi-modal interaction; feature enhancement; INFORMATION; FUSION
DOI
10.3389/fphy.2024.1411559
CLC Classification
O4 [Physics]
Subject Classification
0702
Abstract
The semantic segmentation of RGB-D images requires understanding both objects' appearances and their spatial relationships within a scene, which demands careful consideration of multiple factors. In indoor scenes, the presence of diverse and cluttered objects, together with illumination variations and the influence of adjacent objects, can easily lead to pixel misclassification and thus degrade segmentation results. In response to these challenges, we propose a Multi-modal Interaction and Pooling Attention Network (MIPANet), designed to exploit the interactive synergy between the RGB and depth modalities, making fuller use of their complementary information and improving segmentation accuracy. Specifically, we incorporate a Multi-modal Interaction Module (MIM) into the deepest layers of the network; this module fuses RGB and depth information so that the two modalities mutually enhance and correct each other. Moreover, we introduce a Pooling Attention Module (PAM) at several stages of the encoder to strengthen the features extracted by the network. The outputs of the PAMs at different stages are selectively integrated into the decoder through a refinement module to improve semantic segmentation performance. Experimental results show that, by remedying the insufficient information interaction between modalities in RGB-D semantic segmentation, MIPANet outperforms existing methods on two indoor scene datasets, NYU-Depth V2 and SUN-RGBD. The source code is available at https://github.com/2295104718/MIPANet.
Pages: 13
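
The abstract describes two architectural components: a Pooling Attention Module (PAM) that re-weights encoder features, and a Multi-modal Interaction Module (MIM) through which the RGB and depth streams enhance and correct one another at the deepest encoder stage. The PyTorch sketch below is a minimal illustration of that idea, assuming squeeze-and-excitation-style pooled channel attention and symmetric cross-modal gating; the class names, reduction ratio, and fusion scheme are assumptions made here for illustration, not the authors' implementation (see the linked repository for the actual code).

```python
import torch
import torch.nn as nn


class PoolingAttention(nn.Module):
    """Illustrative pooling attention: global-average-pooled channel
    weights reweight the input features. The paper's PAM may differ."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.weights = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # squeeze H x W down to 1 x 1
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),  # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.weights(x)  # reweight encoder features


class MultiModalInteraction(nn.Module):
    """Hypothetical stand-in for the MIM: each modality is enhanced
    by attention-gated features computed from the other, then the two
    corrected streams are fused by a 1x1 convolution."""

    def __init__(self, channels: int):
        super().__init__()
        self.rgb_gate = PoolingAttention(channels)
        self.depth_gate = PoolingAttention(channels)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        rgb_enh = rgb + self.depth_gate(depth)    # depth corrects RGB
        depth_enh = depth + self.rgb_gate(rgb)    # RGB corrects depth
        return self.fuse(torch.cat([rgb_enh, depth_enh], dim=1))


if __name__ == "__main__":
    # Deepest-layer fusion, e.g. 512 channels at 1/32 input resolution.
    mim = MultiModalInteraction(512)
    rgb_feat = torch.randn(1, 512, 15, 20)
    depth_feat = torch.randn(1, 512, 15, 20)
    print(mim(rgb_feat, depth_feat).shape)  # torch.Size([1, 512, 15, 20])
```

Per the abstract, the fused MIM output would feed the decoder, while PAM-refined features from earlier encoder stages are merged in through the refinement module; the sketch omits the decoder and refinement path for brevity.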