Regress Before Construct: Regress Autoencoder for Point Cloud Self-supervised Learning

Cited by: 1
Authors
Liu, Yang [1 ]
Chen, Chen [2 ]
Wang, Can [3 ,4 ]
King, Xulin [5 ]
Liu, Mengyuan [6 ]
Affiliations
[1] Sichuan Univ, Coll Comp Sci, Chengdu, Peoples R China
[2] Univ Cent Florida, Ctr Res Comp Vis, Orlando, FL USA
[3] Univ Kiel, Dept Comp Sci, Lab Multimedia Informat Proc, Kiel, Germany
[4] Hangzhou Linxrobot Co, Hangzhou, Peoples R China
[5] Hangzhou GOTHEN Technol Co Ltd, Hangzhou, Peoples R China
[6] Peking Univ, Shenzhen Grad Sch, Key Lab Machine Percept, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
point clouds; masked point modeling; self-supervised learning; pre-training;
DOI
10.1145/3581783.3612106
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Masked Autoencoders (MAE) have demonstrated promising performance in self-supervised learning for both 2D and 3D computer vision. Nevertheless, existing MAE-based methods still have certain drawbacks. First, the functional decoupling between the encoder and decoder is incomplete, which limits the encoder's representation learning ability. Second, downstream tasks use only the encoder, failing to fully leverage the knowledge acquired through the encoder-decoder architecture in the pretext task. In this paper, we propose Point Regress AutoEncoder (Point-RAE), a new regressive autoencoder scheme for point cloud self-supervised learning. The proposed method decouples the functions of the decoder and the encoder by introducing a mask regressor, which predicts the masked patch representations from the visible patch representations produced by the encoder; the decoder then reconstructs the target from the predicted masked patch representations. By doing so, we minimize the impact of decoder updates on the encoder's representation space. Moreover, we introduce an alignment constraint to ensure that the masked patch representations predicted from the encoded visible patches are aligned with the masked patch representations computed by the encoder. To make full use of the knowledge learned in the pre-training stage, we design a new fine-tuning mode for the proposed Point-RAE. Extensive experiments demonstrate that our approach is efficient during pre-training and generalizes well to various downstream tasks. Specifically, our pre-trained models achieve 90.28% accuracy on the ScanObjectNN hardest split and 94.1% accuracy on ModelNet40, surpassing all other self-supervised learning methods. Our code and pre-trained models are publicly available at: https://github.com/liuyyy111/Point-RAE.
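The abstract's pipeline (encode visible patches, regress masked-patch representations, decode from the predictions, align predictions with encoder targets) can be sketched numerically. This is a minimal illustration, not the authors' implementation: all dimensions are made up, random linear maps stand in for the Transformer encoder, mask regressor, and decoder, and a pooled summary replaces cross-attention.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only): 64 point patches, 3-D patch features,
# 32-D token embeddings, 60% mask ratio as in typical masked point modeling.
num_patches, patch_dim, embed_dim = 64, 3, 32
mask_ratio = 0.6

patches = rng.normal(size=(num_patches, patch_dim))

# Random linear maps stand in for encoder, mask regressor, and decoder.
W_enc = rng.normal(size=(patch_dim, embed_dim)) / np.sqrt(patch_dim)
W_reg = rng.normal(size=(embed_dim, embed_dim)) / np.sqrt(embed_dim)
W_dec = rng.normal(size=(embed_dim, patch_dim)) / np.sqrt(embed_dim)

# Randomly split patches into masked and visible sets.
num_masked = int(mask_ratio * num_patches)
perm = rng.permutation(num_patches)
masked_idx, visible_idx = perm[:num_masked], perm[num_masked:]

# 1. The encoder sees only the visible patches.
z_visible = patches[visible_idx] @ W_enc

# 2. The mask regressor predicts masked-patch representations from a pooled
#    summary of the visible ones (a crude stand-in for attention).
z_masked_pred = np.tile(z_visible.mean(axis=0) @ W_reg, (num_masked, 1))

# 3. The decoder reconstructs masked patches from predicted representations
#    only, so decoder updates do not reshape the encoder's output space.
recon = z_masked_pred @ W_dec
recon_loss = np.mean((recon - patches[masked_idx]) ** 2)

# 4. Alignment constraint: predicted masked representations should match the
#    encoder's own embeddings of the masked patches (targets held fixed).
z_masked_target = patches[masked_idx] @ W_enc
align_loss = np.mean((z_masked_pred - z_masked_target) ** 2)

total_loss = recon_loss + align_loss
```

In the actual method both losses are minimized jointly during pre-training; here the point is only the data flow, i.e. that the decoder never touches the visible tokens directly.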
Pages: 1738 - 1749 (12 pages)