NeRF-MAE: Masked AutoEncoders for Self-supervised 3D Representation Learning for Neural Radiance Fields

Times Cited: 0
Authors
Irshad, Muhammad Zubair [1 ,2 ]
Zakharov, Sergey [1 ]
Guizilini, Vitor [1 ]
Gaidon, Adrien [1 ]
Kira, Zsolt [2 ]
Ambrus, Rares [1 ]
Affiliations
[1] Toyota Res Inst, Los Altos, CA 94022 USA
[2] Georgia Tech, Atlanta, GA 30332 USA
Keywords
NeRF; Masked AutoEncoders; Vision Transformers; Self-Supervised Learning; Representation Learning; 3D Object Detection; Semantic Labelling;
DOI
10.1007/978-3-031-73223-2_24
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Neural fields excel in computer vision and robotics due to their ability to understand the 3D visual world, such as by inferring semantics, geometry, and dynamics. Given the capabilities of neural fields in densely representing a 3D scene from 2D images, we ask: can we scale their self-supervised pretraining, specifically using masked autoencoders, to generate effective 3D representations from posed RGB images? Owing to the astounding success of extending transformers to novel data modalities, we employ standard 3D Vision Transformers suited to the unique formulation of NeRFs. We leverage NeRF's volumetric grid as a dense input to the transformer, in contrast with other 3D representations such as point clouds, where the information density can be uneven and the representation irregular. Because masked autoencoders are difficult to apply to an implicit representation such as NeRF, we opt to extract an explicit representation that canonicalizes scenes across domains by employing the camera trajectory for sampling. We achieve this by masking random patches of NeRF's radiance and density grid and employing a standard 3D Swin Transformer to reconstruct the masked patches. In doing so, the model learns the semantic and spatial structure of complete scenes. We pretrain this representation at scale on our proposed curated posed-RGB data, totaling over 1.8 million images. Once pretrained, the encoder is used for effective 3D transfer learning. Our novel self-supervised pretraining for NeRFs, NeRF-MAE, scales remarkably well and improves performance on a variety of challenging 3D tasks. Using unlabeled posed 2D data for pretraining, NeRF-MAE significantly outperforms self-supervised 3D pretraining and NeRF scene-understanding baselines on the Front3D and ScanNet datasets, with absolute performance improvements of over 20% AP50 and 8% AP25 for 3D object detection. Project Page: nerf-mae.github.io
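The core pretext task described in the abstract, hiding random 3D patches of NeRF's radiance-and-density grid so a 3D Swin Transformer can be trained to reconstruct them, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 4-channel (RGB + density) grid layout, the patch size, the mask ratio, and the zero-fill placeholder for masked patches are all assumptions made here for clarity.

```python
import numpy as np

def mask_3d_patches(grid, patch=4, mask_ratio=0.75, seed=0):
    """Randomly mask non-overlapping 3D patches of a radiance+density grid.

    grid: (C, D, H, W) array, e.g. C=4 for RGB colour + density (sigma).
    Returns the masked grid, a boolean mask over patches, and the original
    contents of the masked patches (the reconstruction targets).
    """
    c, d, h, w = grid.shape
    assert d % patch == 0 and h % patch == 0 and w % patch == 0
    nd, nh, nw = d // patch, h // patch, w // patch
    n_patches = nd * nh * nw

    # Choose which patches to hide, as in a masked autoencoder.
    rng = np.random.default_rng(seed)
    masked_idx = rng.choice(n_patches, size=int(n_patches * mask_ratio),
                            replace=False)
    mask = np.zeros(n_patches, dtype=bool)
    mask[masked_idx] = True

    out = grid.copy()
    targets = []
    for idx in masked_idx:
        i, j, k = np.unravel_index(idx, (nd, nh, nw))
        sl = (slice(None),
              slice(i * patch, (i + 1) * patch),
              slice(j * patch, (j + 1) * patch),
              slice(k * patch, (k + 1) * patch))
        targets.append(out[sl].copy())
        out[sl] = 0.0  # zero fill stands in for a learned mask token
    return out, mask, np.stack(targets)
```

In a full pipeline, the masked grid would be fed to the 3D Swin encoder and a reconstruction loss computed between the decoder output and `targets` over the masked locations only.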
Pages: 434-453
Page Count: 20