NeRF-MAE: Masked AutoEncoders for Self-supervised 3D Representation Learning for Neural Radiance Fields

Cited by: 0
Authors
Irshad, Muhammad Zubair [1 ,2 ]
Zakharov, Sergey [1 ]
Guizilini, Vitor [1 ]
Gaidon, Adrien [1 ]
Kira, Zsolt [2 ]
Ambrus, Rares [1 ]
Affiliations
[1] Toyota Res Inst, Los Altos, CA 94022 USA
[2] Georgia Tech, Atlanta, GA 30332 USA
Keywords
NeRF; Masked AutoEncoders; Vision Transformers; Self-Supervised Learning; Representation Learning; 3D Object Detection; Semantic Labelling;
DOI
10.1007/978-3-031-73223-2_24
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Neural fields excel in computer vision and robotics due to their ability to understand the 3D visual world, such as inferring semantics, geometry, and dynamics. Given the capabilities of neural fields in densely representing a 3D scene from 2D images, we ask the question: Can we scale their self-supervised pretraining, specifically using masked autoencoders, to generate effective 3D representations from posed RGB images? Owing to the astounding success of extending transformers to novel data modalities, we employ standard 3D Vision Transformers to suit the unique formulation of NeRFs. We leverage NeRF's volumetric grid as a dense input to the transformer, contrasting it with other 3D representations such as point clouds, where the information density can be uneven and the representation is irregular. Due to the difficulty of applying masked autoencoders to an implicit representation such as NeRF, we opt for extracting an explicit representation that canonicalizes scenes across domains by employing the camera trajectory for sampling. Our goal is made possible by masking random patches from NeRF's radiance and density grid and employing a standard 3D Swin Transformer to reconstruct the masked patches. In doing so, the model can learn the semantic and spatial structure of complete scenes. We pretrain this representation at scale on our proposed curated posed-RGB data, totaling over 1.8 million images. Once pretrained, the encoder is used for effective 3D transfer learning. Our novel self-supervised pretraining for NeRFs, NeRF-MAE, scales remarkably well and improves performance on various challenging 3D tasks. Utilizing unlabeled posed 2D data for pretraining, NeRF-MAE significantly outperforms self-supervised 3D pretraining and NeRF scene understanding baselines on the Front3D and ScanNet datasets, with an absolute performance improvement of over 20% AP50 and 8% AP25 for 3D object detection. Project Page: nerf-mae.github.io
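The masking step the abstract describes can be illustrated with a minimal sketch: NeRF's radiance (RGB) and density are sampled into a dense 4-channel voxel grid, the grid is split into cubic patches, and a random subset is zeroed out for the 3D Swin Transformer to reconstruct. The grid size, patch size, and 75% masking ratio below are illustrative assumptions, not the paper's exact values.

```python
import numpy as np

def mask_radiance_grid(grid, patch=8, ratio=0.75, seed=0):
    """grid: (C, D, H, W) array of radiance + density samples.
    Returns a copy with random cubic patches zeroed, plus the patch mask."""
    c, d, h, w = grid.shape
    assert d % patch == 0 and h % patch == 0 and w % patch == 0
    pd, ph, pw = d // patch, h // patch, w // patch
    n_patches = pd * ph * pw
    rng = np.random.default_rng(seed)
    # Choose which patches to mask (hypothetical 75% ratio).
    masked_ids = rng.choice(n_patches, size=int(n_patches * ratio), replace=False)
    mask = np.zeros(n_patches, dtype=bool)
    mask[masked_ids] = True
    out = grid.copy()
    for i in np.flatnonzero(mask):
        # Recover (z, y, x) patch coordinates from the flat patch index.
        z, rem = divmod(i, ph * pw)
        y, x = divmod(rem, pw)
        out[:, z*patch:(z+1)*patch, y*patch:(y+1)*patch, x*patch:(x+1)*patch] = 0.0
    return out, mask

# Toy 4-channel (RGB + density) grid of side 32 -> 4x4x4 = 64 patches.
grid = np.random.rand(4, 32, 32, 32)
masked, mask = mask_radiance_grid(grid)
print(mask.sum(), mask.size)  # 48 of 64 patches masked at a 75% ratio
```

The reconstruction target would then be the original values inside the masked patches, analogous to pixel reconstruction in 2D masked autoencoders.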
Pages: 434-453
Page count: 20
Related Papers
50 records in total
  • [21] Domain Invariant Masked Autoencoders for Self-supervised Learning from Multi-domains
    Yang, Haiyang
    Tang, Shixiang
    Chen, Meilin
    Wang, Yizhou
    Zhu, Feng
    Bai, Lei
    Zhao, Rui
    Ouyang, Wanli
    COMPUTER VISION, ECCV 2022, PT XXXI, 2022, 13691 : 151 - 168
  • [22] GMAEEG: A Self-Supervised Graph Masked Autoencoder for EEG Representation Learning
    Fu, Zanhao
    Zhu, Huaiyu
    Zhao, Yisheng
    Huan, Ruohong
    Zhang, Yi
    Chen, Shuohui
    Pan, Yun
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2024, 28 (11) : 6486 - 6497
  • [23] Self-Supervised 3D Behavior Representation Learning Based on Homotopic Hyperbolic Embedding
    Chen, Jinghong
    Jin, Zhihao
    Wang, Qicong
    Meng, Hongying
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 6061 - 6074
  • [25] CL-NeRF: Continual Learning of Neural Radiance Fields for Evolving Scene Representation
    Wu, Xiuzhe
    Dai, Peng
    Deng, Weipeng
    Chen, Handi
    Wu, Yang
    Cao, Yan-Pei
    Shan, Ying
    Qi, Xiaojuan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [27] Spatio-temporal Self-Supervised Representation Learning for 3D Point Clouds
    Huang, Siyuan
Xie, Yichen
    Zhu, Song-Chun
    Zhu, Yixin
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 6515 - 6525
  • [28] Mutual information guided 3D ResNet for self-supervised video representation learning
    Xue, Fei
    Ji, Hongbing
    Zhang, Wenbo
    IET IMAGE PROCESSING, 2020, 14 (13) : 3066 - 3075
  • [29] Trusted 3D self-supervised representation learning with cross-modal settings
    Han, Xu
    Cheng, Haozhe
    Shi, Pengcheng
    Zhu, Jihua
    MACHINE VISION AND APPLICATIONS, 2024, 35 (04)
  • [30] Self-supervised 3D Skeleton Action Representation Learning with Motion Consistency and Continuity
    Su, Yukun
    Lin, Guosheng
    Wu, Qingyao
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 13308 - 13318