NeRF-MAE: Masked AutoEncoders for Self-supervised 3D Representation Learning for Neural Radiance Fields

Cited by: 0
Authors
Irshad, Muhammad Zubair [1 ,2 ]
Zakharov, Sergey [1 ]
Guizilini, Vitor [1 ]
Gaidon, Adrien [1 ]
Kira, Zsolt [2 ]
Ambrus, Rares [1 ]
Affiliations
[1] Toyota Res Inst, Los Altos, CA 94022 USA
[2] Georgia Tech, Atlanta, GA 30332 USA
Keywords
NeRF; Masked AutoEncoders; Vision Transformers; Self-Supervised Learning; Representation Learning; 3D Object Detection; Semantic Labelling;
DOI
10.1007/978-3-031-73223-2_24
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Neural fields excel in computer vision and robotics due to their ability to understand the 3D visual world, such as inferring semantics, geometry, and dynamics. Given the capabilities of neural fields in densely representing a 3D scene from 2D images, we ask the question: can we scale their self-supervised pretraining, specifically using masked autoencoders, to generate effective 3D representations from posed RGB images? Owing to the astounding success of extending transformers to novel data modalities, we employ standard 3D Vision Transformers to suit the unique formulation of NeRFs. We leverage NeRF's volumetric grid as a dense input to the transformer, contrasting it with other 3D representations such as point clouds, where the information density can be uneven and the representation is irregular. Because applying masked autoencoders directly to an implicit representation such as NeRF is difficult, we opt for extracting an explicit representation that canonicalizes scenes across domains by employing the camera trajectory for sampling. Our goal is made possible by masking random patches from NeRF's radiance and density grid and employing a standard 3D Swin Transformer to reconstruct the masked patches. In doing so, the model learns the semantic and spatial structure of complete scenes. We pretrain this representation at scale on our proposed curated posed-RGB data, totaling over 1.8 million images. Once pretrained, the encoder is used for effective 3D transfer learning. Our novel self-supervised pretraining for NeRFs, NeRF-MAE, scales remarkably well and improves performance on various challenging 3D tasks. Using unlabeled posed 2D data for pretraining, NeRF-MAE significantly outperforms self-supervised 3D pretraining and NeRF scene understanding baselines on the Front3D and ScanNet datasets, with absolute performance improvements of over 20% AP50 and 8% AP25 for 3D object detection. Project Page: nerf-mae.github.io
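The abstract describes the core pretraining objective: mask random 3D patches of the radiance-and-density grid extracted from a NeRF, reconstruct them, and compute the loss on the masked region. Below is a minimal, illustrative PyTorch sketch of that masked-grid objective; the grid shape, patch size, mask ratio, and the small convolutional autoencoder standing in for the paper's 3D Swin Transformer are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of masked 3D-grid pretraining (not the NeRF-MAE codebase).
import torch
import torch.nn as nn


def mask_3d_patches(grid, patch=4, mask_ratio=0.75):
    """Zero out a random subset of non-overlapping 3D patches.

    grid: [B, C, D, H, W] radiance-and-density volume (e.g. C=4: RGB + sigma).
    Returns the masked grid and a boolean voxel mask [B, D, H, W] (True = masked).
    """
    B, C, D, H, W = grid.shape
    pd, ph, pw = D // patch, H // patch, W // patch
    patch_mask = torch.rand(B, pd, ph, pw, device=grid.device) < mask_ratio
    # Expand the per-patch mask to voxel resolution and zero masked voxels.
    voxel_mask = (
        patch_mask.repeat_interleave(patch, dim=1)
        .repeat_interleave(patch, dim=2)
        .repeat_interleave(patch, dim=3)
    )
    masked = grid * (~voxel_mask).unsqueeze(1).float()
    return masked, voxel_mask


class TinyGridAutoencoder(nn.Module):
    """Small 3D conv stand-in for the paper's 3D Swin Transformer encoder/decoder."""

    def __init__(self, channels=4, width=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(channels, width, 3, padding=1), nn.GELU(),
            nn.Conv3d(width, width, 3, padding=1), nn.GELU(),
        )
        self.decoder = nn.Conv3d(width, channels, 3, padding=1)

    def forward(self, x):
        return self.decoder(self.encoder(x))


def pretrain_step(model, grid, optimizer):
    """One masked-reconstruction step; the loss covers masked voxels only."""
    masked, voxel_mask = mask_3d_patches(grid)
    recon = model(masked)
    per_voxel = ((recon - grid) ** 2).mean(dim=1)          # [B, D, H, W]
    m = voxel_mask.float()
    loss = (per_voxel * m).sum() / m.sum().clamp(min=1.0)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = TinyGridAutoencoder()
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    grid = torch.rand(2, 4, 32, 32, 32)   # dummy RGB + density grids
    print(pretrain_step(model, grid, opt))
```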
Pages: 434-453
Page count: 20
Related Papers
50 records in total
  • [31] Hybrid Supervised and Self-Supervised Learning for 3D Printing Optimization: A Masked Supervised Bootstrap Your Own Latent Approach
    Nguyen, Phuong Dong
    Dao, Manh Binh
    Nguyen, Thanh Q.
    3D PRINTING AND ADDITIVE MANUFACTURING, 2025,
  • [32] Neural Radiance Fields (NeRF) for 3D Reconstruction of Monocular Endoscopic Video in Sinus Surgery
    Ruthberg, Jeremy S.
    Bly, Randall
    Gunderson, Nicole
    Chen, Pengcheng
    Alighezi, Mahdi
    Seibel, Eric J.
    Abuzeid, Waleed M.
    OTOLARYNGOLOGY-HEAD AND NECK SURGERY, 2025,
  • [33] Masked Video Distillation: Rethinking Masked Feature Modeling for Self-supervised Video Representation Learning
    Wang, Rui
    Chen, Dongdong
    Wu, Zuxuan
    Chen, Yinpeng
    Dai, Xiyang
    Liu, Mengchen
    Yuan, Lu
    Jiang, Yu-Gang
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 6312 - 6322
  • [34] Fully Self-Supervised Out-of-Domain Few-Shot Learning with Masked Autoencoders
    Walsh, Reece
    Osman, Islam
    Abdelaziz, Omar
    Shehata, Mohamed S.
    JOURNAL OF IMAGING, 2024, 10 (01)
  • [35] Self-Supervised 3D Representation Learning of Dressed Humans From Social Media Videos
    Jafarian, Yasamin
    Park, Hyun Soo
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (07) : 8969 - 8983
  • [36] SSRL: Self-Supervised Spatial-Temporal Representation Learning for 3D Action Recognition
    Jin, Zhihao
    Wang, Yifan
    Wang, Qicong
    Shen, Yehu
    Meng, Hongying
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (01) : 274 - 285
  • [37] Self-Supervised Learning of Skeleton-Aware Morphological Representation for 3D Neuron Segments
    Zhu, Daiyi
    Chen, Qihua
    Chen, Xuejin
    2024 INTERNATIONAL CONFERENCE IN 3D VISION, 3DV 2024, 2024, : 1436 - 1445
  • [38] Towards Latent Masked Image Modeling for Self-supervised Visual Representation Learning
    Wei, Yibing
    Gupta, Abhinav
    Morgado, Pedro
    COMPUTER VISION - ECCV 2024, PT XXXIX, 2025, 15097 : 1 - 17
  • [39] Cross-View Masked Model for Self-Supervised Graph Representation Learning
    Duan, H.
    Yu, B.
    Xie, C.
    IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE, 2024, 5 (11): 1 - 13
  • [40] Masked self-supervised ECG representation learning via multiview information bottleneck
    Yang, Shunxiang
    Lian, Cheng
    Zeng, Zhigang
    Xu, Bingrong
    Su, Yixin
    Xue, Chenyang
    NEURAL COMPUTING & APPLICATIONS, 2024, 36 (14): 7625 - 7637