NeRF-MAE: Masked AutoEncoders for Self-supervised 3D Representation Learning for Neural Radiance Fields

Cited by: 0
Authors
Irshad, Muhammad Zubair [1 ,2 ]
Zakharov, Sergey [1 ]
Guizilini, Vitor [1 ]
Gaidon, Adrien [1 ]
Kira, Zsolt [2 ]
Ambrus, Rares [1 ]
Affiliations
[1] Toyota Res Inst, Los Altos, CA 94022 USA
[2] Georgia Tech, Atlanta, GA 30332 USA
Keywords
NeRF; Masked AutoEncoders; Vision Transformers; Self-Supervised Learning; Representation Learning; 3D Object Detection; Semantic Labelling
DOI
10.1007/978-3-031-73223-2_24
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Neural fields excel in computer vision and robotics due to their ability to understand the 3D visual world, such as inferring semantics, geometry, and dynamics. Given the capabilities of neural fields in densely representing a 3D scene from 2D images, we ask the question: Can we scale their self-supervised pretraining, specifically using masked autoencoders, to generate effective 3D representations from posed RGB images? Owing to the astounding success of extending transformers to novel data modalities, we employ standard 3D Vision Transformers to suit the unique formulation of NeRFs. We leverage NeRF's volumetric grid as a dense input to the transformer, contrasting it with other 3D representations such as point clouds, where the information density can be uneven and the representation is irregular. Due to the difficulty of applying masked autoencoders to an implicit representation such as NeRF, we opt for extracting an explicit representation that canonicalizes scenes across domains by employing the camera trajectory for sampling. Our goal is made possible by masking random patches from NeRF's radiance and density grid and employing a standard 3D Swin Transformer to reconstruct the masked patches. In doing so, the model can learn the semantic and spatial structure of complete scenes. We pretrain this representation at scale on our proposed curated posed-RGB data, totaling over 1.8 million images. Once pretrained, the encoder is used for effective 3D transfer learning. Our novel self-supervised pretraining for NeRFs, NeRF-MAE, scales remarkably well and improves performance on various challenging 3D tasks. Utilizing unlabeled posed 2D data for pretraining, NeRF-MAE significantly outperforms self-supervised 3D pretraining and NeRF scene understanding baselines on the Front3D and ScanNet datasets, with an absolute performance improvement of over 20% AP50 and 8% AP25 for 3D object detection. Project Page: nerf-mae.github.io
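The core pretraining step described in the abstract, masking random patches of NeRF's radiance and density grid before reconstruction, can be sketched as follows. This is a minimal illustration, not the authors' released code: the 4-channel RGB+density grid layout, the cubic patch size, the 75% mask ratio, and the zero-fill masking are all assumptions made for the example.

```python
import numpy as np

def mask_radiance_density_grid(grid, patch=4, mask_ratio=0.75, seed=0):
    """Randomly mask cubic patches of a radiance-and-density voxel grid.

    grid: (C, D, H, W) array, e.g. C=4 for RGB color plus density (sigma).
    Returns the masked grid and a boolean per-patch mask (True = masked),
    so a reconstruction loss can be applied on masked patches only.
    """
    rng = np.random.default_rng(seed)
    C, D, H, W = grid.shape
    pd, ph, pw = D // patch, H // patch, W // patch
    n_patches = pd * ph * pw
    n_masked = int(round(mask_ratio * n_patches))

    # Choose which patches to hide from the encoder.
    flat = np.zeros(n_patches, dtype=bool)
    flat[rng.choice(n_patches, size=n_masked, replace=False)] = True
    mask = flat.reshape(pd, ph, pw)

    # Expand the per-patch mask to per-voxel and zero out masked voxels.
    vox = np.repeat(np.repeat(np.repeat(mask, patch, 0), patch, 1), patch, 2)
    masked = grid.copy()
    masked[:, vox] = 0.0
    return masked, mask
```

In the paper's setting, the masked grid would be fed to the 3D Swin Transformer encoder, and the reconstruction loss would be computed only over the voxels where the returned mask is True, following the standard masked-autoencoder recipe.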
Pages: 434-453
Page count: 20