A novel 3D shape recognition method based on double-channel attention residual network

Cited by: 6
Authors
Ma, Ziping [1 ]
Zhou, Jie [2 ]
Ma, Jinlin [2 ]
Li, Tingting [2 ]
Affiliations
[1] North Minzu Univ, Coll Math & Informat Sci, Yinchuan 750021, Ningxia, Peoples R China
[2] North Minzu Univ, Coll Comp Sci & Engn, Yinchuan 750021, Ningxia, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
3D shape recognition; Residual; Multi-head self-attention; Weighted loss function; CONVOLUTIONAL NEURAL-NETWORK; POINT CLOUD; RETRIEVAL; CLASSIFICATION;
DOI
10.1007/s11042-022-12041-9
Chinese Library Classification
TP [Automation technology; computer technology];
Discipline Code
0812;
Abstract
Learning 3D features with deep networks has achieved strong performance to date. However, data imbalance and low-resolution voxels still limit the performance of 3D shape recognition. To address these issues, we propose the double-channel attention residual network (double-RVCNN), a novel deep network with a residual structure based on a multi-head self-attention mechanism. The double-channel structure feeds two input streams, voxels and 3D Radon feature matrices, to fully exploit both local and global features. The multi-head self-attention mechanism integrates the relatively important content of the input data across multiple heads, which enriches the network's information-processing ability and stabilizes training. The residual structure, trained with a weighted loss function combining cross-entropy loss and center loss, largely avoids information loss. Experimental results show mean average precision (MAP) values of 83.31% and 74.04% and classification accuracies of 90.53% and 85.09% on the ModelNet10 and ModelNet40 datasets respectively, demonstrating that our method achieves higher 3D shape recognition accuracy than the compared methods on the test datasets.
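The weighted loss described in the abstract combines cross-entropy loss with center loss. A minimal sketch of that combination is given below; the weighting factor `lam`, the function names, and all tensor shapes are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the true class;
    # probs: (N, C) softmax outputs, labels: (N,) class indices.
    n = probs.shape[0]
    return -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))

def center_loss(features, labels, centers):
    # Half the mean squared distance between each feature vector
    # and the learned center of its class.
    diffs = features - centers[labels]
    return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))

def weighted_loss(probs, features, labels, centers, lam=0.1):
    # L = L_ce + lam * L_center; lam is an assumed hyperparameter.
    return cross_entropy(probs, labels) + lam * center_loss(features, labels, centers)
```

In practice the class centers are updated during training alongside the network weights so that features of the same class cluster together, which is what the center-loss term encourages.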
Pages: 32519-32548
Page count: 30
Related Papers
50 records
  • [1] A novel 3D shape recognition method based on double-channel attention residual network
    Ziping Ma
    Jie Zhou
    Jinlin Ma
    Tingting Li
    Multimedia Tools and Applications, 2022, 81 : 32519 - 32548
  • [2] 3D design method of double-channel impeller of sewage pump
    Zhang, Jing
    Qi, Xueyi
    Ji, Hong
    Yang, Guolai
    Hou, Yihua
    Nongye Jixie Xuebao/Transactions of the Chinese Society of Agricultural Machinery, 2008, 39 (10): : 65 - 70
  • [3] A stroke image recognition model based on 3D residual network and attention mechanism
    Hou, Yingan
    Su, Junguang
    Liang, Jun
    Chen, Xiwen
    Liu, Qin
    Deng, Liang
    Liao, Jiyuan
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2022, 43 (04) : 5205 - 5214
  • [4] Separable 3D residual attention network for human action recognition
    Zhang, Zufan
    Peng, Yue
    Gan, Chenquan
    Abate, Andrea Francesco
    Zhu, Lianxiang
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (04) : 5435 - 5453
  • [5] Separable 3D residual attention network for human action recognition
    Zufan Zhang
    Yue Peng
    Chenquan Gan
    Andrea Francesco Abate
    Lianxiang Zhu
    Multimedia Tools and Applications, 2023, 82 : 5435 - 5453
  • [6] RJAN: Region-based joint attention network for 3D shape recognition
    Zhao, Yue
    Nie, Weizhi
    Nie, Jie
    Zhang, Yuyi
    Wang, Bo
    CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY, 2024,
  • [7] RJAN: Region-based joint attention network for 3D shape recognition
    Hangzhou Institute of Technology, Xidian University, Hangzhou, China
    CAAI Trans. Intell. Technol., 2468,
  • [8] SVHAN: Sequential View Based Hierarchical Attention Network for 3D Shape Recognition
    Zhao, Yue
    Nie, Weizhi
    Liu, An-An
    Gao, Zan
    Su, Yuting
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 2130 - 2138
  • [9] DAN: Deep-Attention Network for 3D Shape Recognition
    Nie, Weizhi
    Zhao, Yue
    Song, Dan
    Gao, Yue
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 4371 - 4383
  • [10] A Novel Attention Residual Network Expression Recognition Method
    Qi, Hui
    Zhang, Xipeng
    Shi, Ying
    Qi, Xiaobo
    IEEE ACCESS, 2024, 12 : 24609 - 24620