Hierarchical Graph Attention Based Multi-View Convolutional Neural Network for 3D Object Recognition

Cited by: 6
|
Authors
Zeng, Hui [1 ,2 ]
Zhao, Tianmeng [1 ]
Cheng, Ruting [1 ]
Wang, Fuzhou [1 ]
Liu, Jiwei [1 ,2 ]
Affiliations
[1] Univ Sci & Technol Beijing, Sch Automat & Elect Engn, Beijing Engn Res Ctr Ind Spectrum Imaging, Beijing 100083, Peoples R China
[2] Univ Sci & Technol Beijing, Shunde Grad Sch, Foshan 528399, Peoples R China
Source
IEEE ACCESS | 2021, Vol. 9, Issue 09
Funding
National Natural Science Foundation of China;
Keywords
Three-dimensional displays; Object recognition; Two dimensional displays; Neural networks; Feature extraction; Solid modeling; Convolutional neural networks; 3D object recognition; multi-view convolutional neural network; graph attention network; feature aggregation; CLASSIFICATION;
DOI
10.1109/ACCESS.2021.3059853
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
For multi-view convolutional neural network based 3D object recognition, how the information from multiple views is fused is a key factor affecting recognition performance. Most traditional methods use a max-pooling operation to obtain the final 3D object feature, which ignores the correlations between different views. To make full use of the effective information in multiple views, this paper introduces a hierarchical graph attention based multi-view convolutional neural network for 3D object recognition. First, a view selection module is proposed to reduce redundant view information by selecting the projective views that carry more effective information. Then, a correlation-weighted feature aggregation module is proposed to better fuse the multiple view features. Finally, a hierarchical feature aggregation network structure is designed to further exploit the correlation information among the views. Extensive experimental results validate the effectiveness of the proposed method.
Pages: 33323-33335
Number of pages: 13
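
The abstract describes replacing plain max-pooling with attention-based, correlation-weighted aggregation of per-view features. As a rough illustration only, the following PyTorch sketch shows one way such an aggregation could look: each view feature is scored against a pooled global context and the resulting softmax weights replace max-pooling when fusing views. The module name, feature dimension, and scoring function are assumptions for illustration, not the authors' actual hierarchical graph attention implementation.

```python
# Illustrative sketch (assumption): attention-weighted fusion of per-view CNN
# features for multi-view 3D object recognition. Not the paper's exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionViewFusion(nn.Module):
    """Fuse V per-view feature vectors into one shape descriptor."""

    def __init__(self, feat_dim: int = 512, num_classes: int = 40):
        super().__init__()
        # Scores each view feature against the max-pooled global context.
        self.attn = nn.Linear(2 * feat_dim, 1)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, view_feats: torch.Tensor) -> torch.Tensor:
        # view_feats: (B, V, D) features from a shared per-view CNN backbone.
        global_feat = view_feats.max(dim=1, keepdim=True).values      # (B, 1, D)
        context = global_feat.expand_as(view_feats)                   # (B, V, D)
        scores = self.attn(torch.cat([view_feats, context], dim=-1))  # (B, V, 1)
        weights = F.softmax(scores, dim=1)                            # per-view weights
        fused = (weights * view_feats).sum(dim=1)                     # (B, D)
        return self.classifier(fused)


if __name__ == "__main__":
    # Example: batch of 2 objects, 12 rendered views each, 512-d view features.
    model = AttentionViewFusion(feat_dim=512, num_classes=40)
    dummy_views = torch.randn(2, 12, 512)
    logits = model(dummy_views)
    print(logits.shape)  # torch.Size([2, 40])
```

In this sketch the attention weights play the role of the correlation weights mentioned in the abstract; the paper additionally applies view selection and a hierarchical (multi-level) aggregation, which are omitted here.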