Regress Before Construct: Regress Autoencoder for Point Cloud Self-supervised Learning

Cited by: 1
Authors
Liu, Yang [1 ]
Chen, Chen [2 ]
Wang, Can [3 ,4 ]
King, Xulin [5 ]
Liu, Mengyuan [6 ]
Affiliations
[1] Sichuan Univ, Coll Comp Sci, Chengdu, Peoples R China
[2] Univ Cent Florida, Ctr Res Comp Vis, Orlando, FL USA
[3] Univ Kiel, Dept Comp Sci, Lab Multimedia Informat Proc, Kiel, Germany
[4] Hangzhou Linxrobot Co, Hangzhou, Peoples R China
[5] Hangzhou GOTHEN Technol Co Ltd, Hangzhou, Peoples R China
[6] Peking Univ, Shenzhen Grad Sch, Key Lab Machine Percept, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
point clouds; masked point modeling; self-supervised learning; pre-training;
DOI
10.1145/3581783.3612106
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification code
081104; 0812; 0835; 1405;
Abstract
Masked Autoencoders (MAE) have demonstrated promising performance in self-supervised learning for both 2D and 3D computer vision. Nevertheless, existing MAE-based methods still have certain drawbacks. First, the functional decoupling between the encoder and decoder is incomplete, which limits the encoder's representation learning ability. Second, downstream tasks use only the encoder and thus fail to fully leverage the knowledge acquired through the encoder-decoder architecture in the pretext task. In this paper, we propose Point Regress AutoEncoder (Point-RAE), a new scheme of regressive autoencoders for point cloud self-supervised learning. The proposed method decouples the functions of the encoder and decoder by introducing a mask regressor, which predicts the masked patch representations from the visible patch representations encoded by the encoder; the decoder then reconstructs the target from the predicted masked patch representations. By doing so, we minimize the impact of decoder updates on the representation space of the encoder. Moreover, we introduce an alignment constraint to ensure that the representations of masked patches, predicted from the encoded representations of visible patches, are aligned with the masked patch representations computed by the encoder. To make full use of the knowledge learned in the pre-training stage, we design a new fine-tuning mode for the proposed Point-RAE. Extensive experiments demonstrate that our approach is efficient during pre-training and generalizes well to various downstream tasks. Specifically, our pre-trained models achieve 90.28% accuracy on the hardest split of ScanObjectNN and 94.1% accuracy on ModelNet40, surpassing all other self-supervised learning methods. Our code and pre-trained models are publicly available at: https://github.com/liuyyy111/Point-RAE.
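To make the scheme described in the abstract concrete, below is a minimal PyTorch-style sketch of the encoder / mask-regressor / decoder pipeline and the alignment constraint. It is not the authors' released implementation (see the linked repository for that); the module sizes, the shared-encoder target with stopped gradients, and the plain MSE losses standing in for the alignment constraint and the point reconstruction loss are illustrative assumptions.

# Minimal sketch of the Point-RAE pre-training idea (assumptions noted in comments).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointRAESketch(nn.Module):
    def __init__(self, dim=384, depth_enc=4, depth_reg=2, depth_dec=2,
                 num_heads=6, points_per_patch=32):
        super().__init__()
        layer = lambda: nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True)
        # Encoder sees only visible patch embeddings.
        self.encoder = nn.TransformerEncoder(layer(), depth_enc)
        # Mask regressor predicts masked patch representations from visible ones.
        self.regressor = nn.TransformerEncoder(layer(), depth_reg)
        # Decoder reconstructs masked patch points from predicted representations only,
        # so decoder updates do not reshape the encoder's representation space.
        self.decoder = nn.TransformerEncoder(layer(), depth_dec)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.rec_head = nn.Linear(dim, points_per_patch * 3)

    def forward(self, vis_tokens, vis_pos, mask_tokens_gt, mask_pos, mask_points):
        # vis_tokens:     (B, Nv, C) embeddings of visible patches
        # vis_pos:        (B, Nv, C) positional embeddings of visible patches
        # mask_tokens_gt: (B, Nm, C) embeddings of masked patches (targets only)
        # mask_pos:       (B, Nm, C) positional embeddings of masked patches
        # mask_points:    (B, Nm, points_per_patch, 3) ground-truth masked points
        B, Nm, C = mask_tokens_gt.shape
        # 1) Encode visible patches.
        z_vis = self.encoder(vis_tokens + vis_pos)
        # 2) Regress masked patch representations from the visible ones.
        queries = self.mask_token.expand(B, Nm, C) + mask_pos
        z_all = self.regressor(torch.cat([z_vis, queries], dim=1))
        z_mask_pred = z_all[:, -Nm:]
        # 3) Alignment constraint: predicted masked representations should match the
        #    encoder's own representations of the masked patches (gradients stopped
        #    on the target branch in this sketch).
        with torch.no_grad():
            z_mask_target = self.encoder(mask_tokens_gt + mask_pos)
        loss_align = F.mse_loss(z_mask_pred, z_mask_target)
        # 4) Decode only from the predicted masked representations.
        rec = self.rec_head(self.decoder(z_mask_pred + mask_pos))
        rec = rec.reshape(B, Nm, -1, 3)
        # Plain L2 stands in for a Chamfer-style point reconstruction loss here.
        loss_rec = F.mse_loss(rec, mask_points)
        return loss_rec + loss_align

# Usage with random stand-in data:
# model = PointRAESketch()
# B, Nv, Nm, C = 2, 48, 16, 384
# loss = model(torch.randn(B, Nv, C), torch.randn(B, Nv, C),
#              torch.randn(B, Nm, C), torch.randn(B, Nm, C),
#              torch.randn(B, Nm, 32, 3))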
Pages: 1738-1749
Page count: 12