Cross-Modal Transformer for RGB-D semantic segmentation of production workshop objects

Cited by: 5
Authors
Ru, Qingjun [1 ]
Chen, Guangzhu [1 ]
Zuo, Tingyu [1 ]
Liao, Xiaojuan [1 ]
Affiliations
[1] Chengdu Univ Technol, Coll Comp Sci & Cyber Secur, Chengdu, Peoples R China
Keywords
Cross-Modal; Production workshop object; RGB-D; Semantic segmentation; Transformer;
DOI
10.1016/j.patcog.2023.109862
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Scene understanding in a production workshop is an important technology for improving its intelligence level, and semantic segmentation of production workshop objects is an effective method for realizing scene understanding. Given the variety of information in a production workshop, making full use of the complementary information of RGB images and depth images can effectively improve the semantic segmentation accuracy of production workshop objects. To address the multi-scale and real-time challenges of segmenting production workshop objects, this paper proposes the Cross-Modal Transformer (CMFormer), a Transformer-based cross-modal semantic segmentation model. Its key feature-correction and feature-fusion parts are composed of the Multi-Scale Channel Attention Correction (MS-CAC) module and the Global Feature Aggregation (GFA) module. By improving the Multi-Head Self-Attention (MHSA) in the Transformer, we design Cross-Modal Multi-Head Self-Attention (CM-MHSA) to build long-range interaction between the RGB image and the depth image, and further design the MS-CAC and GFA modules on the basis of CM-MHSA to achieve cross-modal information interaction in the channel and spatial dimensions. The MS-CAC module enriches the multi-scale features of each channel and achieves more accurate channel attention correction between the two modalities; the GFA module lets the RGB and depth features interact in the spatial dimension while fusing global and local features. In experiments on the NYU Depth v2 dataset, CMFormer reaches 68.00% MPA (Mean Pixel Accuracy) and 55.75% mIoU (Mean Intersection over Union), achieving state-of-the-art results. In experiments on the Scene Objects for Production workshop (SOP) dataset, CMFormer achieves 96.74% MPA, 92.98% mIoU, and 43 FPS (Frames Per Second), showing high precision and good real-time performance. Code is available at: https://github.com/FutureIAI/CMFormer
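The CM-MHSA mechanism described above builds long-range interaction by letting one modality's queries attend to the other modality's keys and values. The following is a minimal illustrative sketch of one such cross-modal attention direction (RGB attending to depth) in NumPy; it is not the authors' implementation, and for simplicity the learned Q/K/V projections of a real multi-head attention layer are replaced by identity slices per head:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(rgb, depth, num_heads=4):
    """One direction of cross-modal multi-head attention (sketch).

    rgb, depth: (tokens, dim) feature maps, flattened over spatial positions.
    Queries come from the RGB modality; keys and values come from the depth
    modality, so each RGB token aggregates long-range depth context.
    A learned layer would project Q/K/V per head; here each head simply
    uses a slice of the feature dimension for illustration.
    """
    n, d = rgb.shape
    dh = d // num_heads
    out = np.empty_like(rgb)
    for h in range(num_heads):
        q = rgb[:, h * dh:(h + 1) * dh]    # queries from RGB
        k = depth[:, h * dh:(h + 1) * dh]  # keys from depth
        v = depth[:, h * dh:(h + 1) * dh]  # values from depth
        # (n, n) attention weights: every RGB token attends to every depth token.
        attn = softmax(q @ k.T / np.sqrt(dh))
        out[:, h * dh:(h + 1) * dh] = attn @ v
    return out
```

In the paper's symmetric design the same interaction also runs in the opposite direction (depth queries attending to RGB), and the MS-CAC and GFA modules are built on top of this exchange in the channel and spatial dimensions respectively.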
Pages: 13
Related papers
50 records in total
  • [31] Cross-Modal Fusion and Progressive Decoding Network for RGB-D Salient Object Detection
    Hu, Xihang
    Sun, Fuming
    Sun, Jing
    Wang, Fasheng
    Li, Haojie
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, 132 (08) : 3067 - 3085
  • [32] SYRER: Synergistic Relational Reasoning for RGB-D Cross-Modal Re-Identification
    Liu, Hao
    Wu, Jingjing
    Li, Feng
    Jiang, Jianguo
    Hong, Richang
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 5600 - 5614
  • [33] Depth Enhanced Cross-Modal Cascaded Network for RGB-D Salient Object Detection
    Zhao, Zhengyun
    Huang, Ziqing
    Chai, Xiuli
    Wang, Jun
    NEURAL PROCESSING LETTERS, 2023, 55 (01) : 361 - 384
  • [34] A cross-modal edge-guided salient object detection for RGB-D image
    Liu, Zhengyi
    Wang, Kaixun
    Dong, Hao
    Wang, Yuan
    NEUROCOMPUTING, 2021, 454 : 168 - 177
  • [36] CMDCF: an effective cross-modal dense cooperative fusion network for RGB-D SOD
    Jia, X.
    Zhao, W.
    Wang, Y.
    DongYe, C.
    Peng, Y.
    NEURAL COMPUTING AND APPLICATIONS, 2024, 36 (23) : 14361 - 14378
  • [37] Segmentation of Objects in RGB-D Scenes by Clustering Surfaces
    Yalic, Hamdi Yalin
    Can, Ahmet Burak
    2018 26TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2018,
  • [38] Semantic Guidance Fusion Network for Cross-Modal Semantic Segmentation
    Zhang, Pan
    Chen, Ming
    Gao, Meng
    SENSORS, 2024, 24 (08)
  • [39] Cross-modal semantic transfer for point cloud semantic segmentation
    Cao, Zhen
    Mi, Xiaoxin
    Qiu, Bo
    Cao, Zhipeng
    Long, Chen
    Yan, Xinrui
    Zheng, Chao
    Dong, Zhen
    Yang, Bisheng
    ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2025, 221 : 265 - 279
  • [40] 2.5D CONVOLUTION FOR RGB-D SEMANTIC SEGMENTATION
    Xing, Yajie
    Wang, Jingbo
    Chen, Xiaokang
    Zeng, Gang
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 1410 - 1414