Reconstructing Objects in-the-wild for Realistic Sensor Simulation

Cited by: 2
Authors
Yang, Ze [1,2]
Manivasagam, Sivabalan [1,2]
Chen, Yun [1,2]
Wang, Jingkang [1,2]
Hu, Rui [1]
Urtasun, Raquel [1,2]
Affiliations
[1] Waabi, Toronto, ON, Canada
[2] Univ Toronto, Toronto, ON, Canada
DOI
10.1109/ICRA48891.2023.10160535
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Reconstructing objects from real world data and rendering them at novel views is critical to bringing realism, diversity and scale to simulation for robotics training and testing. In this work, we present NeuSim, a novel approach that estimates accurate geometry and realistic appearance from sparse in-the-wild data captured at distance and at limited viewpoints. Towards this goal, we represent the object surface as a neural signed distance function and leverage both LiDAR and camera sensor data to reconstruct smooth and accurate geometry and normals. We model the object appearance with a robust physics-inspired reflectance representation effective for in-the-wild data. Our experiments show that NeuSim has strong view synthesis performance on challenging scenarios with sparse training views. Furthermore, we showcase composing NeuSim assets into a virtual world and generating realistic multi-sensor data for evaluating self-driving perception models. The supplementary material can be found at the project website: https://waabi.ai/research/neusim/
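
Since the abstract describes the core representation only at a high level, the short PyTorch sketch below illustrates one plausible reading of a neural signed distance function with an appearance feature head, where normals are taken as the gradient of the SDF. The architecture, layer widths, activations, and feature dimension are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class NeuralSDF(nn.Module):
    """Hypothetical sketch: maps 3D points to a signed distance and an appearance feature."""
    def __init__(self, hidden_dim: int = 256, feature_dim: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden_dim), nn.Softplus(beta=100),
            nn.Linear(hidden_dim, hidden_dim), nn.Softplus(beta=100),
            nn.Linear(hidden_dim, 1 + feature_dim),  # signed distance + appearance feature
        )

    def forward(self, points: torch.Tensor):
        out = self.mlp(points)
        return out[..., :1], out[..., 1:]  # (sdf, feature)

    def normals(self, points: torch.Tensor) -> torch.Tensor:
        # Normals are the normalized gradient of the SDF with respect to position.
        points = points.requires_grad_(True)
        sdf, _ = self.forward(points)
        (grad,) = torch.autograd.grad(sdf.sum(), points, create_graph=True)
        return torch.nn.functional.normalize(grad, dim=-1)

# Usage: query geometry at points sampled along camera or LiDAR rays.
model = NeuralSDF()
pts = torch.rand(1024, 3)   # hypothetical query points
sdf, feat = model(pts)      # signed distance and appearance feature
n = model.normals(pts)      # unit normals from the SDF gradient

In the paper's framing, such an appearance feature would feed the physics-inspired reflectance model; that head is omitted here for brevity.
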
Pages: 11661-11668
Number of pages: 8
Related Papers
50 records in total (10 shown below)
  • [1] Estimating Correspondences of Deformable Objects "In-the-wild"
    Zhou, Yuxiang
    Antonakos, Epameinondas
    Alabort-i-Medina, Joan
    Roussos, Anastasios
    Zafeiriou, Stefanos
    2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, : 5791 - 5801
  • [2] Generating Realistic Images from In-the-wild Sounds
    Lee, Taegyeong
    Kang, Jeonghun
    Kim, Hyeonyu
    Kim, Taehwan
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 7126 - 7136
  • [3] Recognizing Objects In-the-wild: Where Do We Stand?
    Loghmani, Mohammad Reza
    Caputo, Barbara
    Vincze, Markus
    2018 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2018, : 2170 - 2177
  • [4] SAIL: Simulation-Informed Active In-the-Wild Learning
    Short, Elaine Schaertl
    Allevato, Adam
    Thomaz, Andrea L.
    HRI '19: 2019 14TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, 2019, : 468 - 477
  • [5] Enough with 'In-The-Wild'
    Ssozi-Mugarura, Fiona
    Reitmaier, Thomas
    Venter, Anja
    Blake, Edwin
    PROCEEDINGS OF THE FIRST AFRICAN CONFERENCE FOR HUMAN COMPUTER INTERACTION (AFRICHI'16), 2016, : 182 - 186
  • [6] Behavior Prediction In-The-Wild
    Georgakis, Christos
    Panagakis, Yannis
    Pantic, Maja
    2017 SEVENTH INTERNATIONAL CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION WORKSHOPS AND DEMOS (ACIIW), 2017, : 18 - 25
  • [7] Quality Assessment of In-the-Wild Videos
    Li, Dingquan
    Jiang, Tingting
    Jiang, Ming
    PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, : 2351 - 2359
  • [8] The In-the-Wild Speech Medical Corpus
    Correia, Joana
    Teixeira, Francisco
    Botelho, Catarina
    Trancoso, Isabel
    Raj, Bhiksha
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 6973 - 6977
  • [9] Face alignment in-the-wild: A Survey
    Jin, Xin
    Tan, Xiaoyang
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2017, 162 : 1 - 22
  • [10] Wireless kinematic body sensor network for low-cost neurotechnology applications "in-the-wild"
    Gavriel, Constantinos
    Faisal, A. Aldo
    2013 6TH INTERNATIONAL IEEE/EMBS CONFERENCE ON NEURAL ENGINEERING (NER), 2013, : 1279 - 1282