Vision-Only Robot Navigation in a Neural Radiance World

Cited by: 85
Authors
Adamkiewicz, Michal [1 ]
Chen, Timothy [2 ]
Caccavale, Adam [3 ]
Gardner, Rachel [1 ]
Culbertson, Preston [3 ]
Bohg, Jeannette [1 ]
Schwager, Mac [2 ]
Affiliations
[1] Stanford Univ, Dept Comp Sci, Stanford, CA 94305 USA
[2] Stanford Univ, Dept Aeronaut & Astronaut, Stanford, CA 94305 USA
[3] Stanford Univ, Dept Mech Engn, Stanford, CA 94305 USA
Funding
National Science Foundation (USA)
Keywords
Collision avoidance; localization; motion and path planning; vision-based navigation; neural radiance fields; TRAJECTORY GENERATION; OPTIMIZATION; FIELDS;
DOI
10.1109/LRA.2022.3150497
Chinese Library Classification
TP24 [Robotics]
Discipline codes
080202; 1405
Abstract
Neural Radiance Fields (NeRFs) have recently emerged as a powerful paradigm for the representation of natural, complex 3D scenes. A NeRF represents continuous volumetric density and RGB values in a neural network, and generates photo-realistic images from unseen camera viewpoints through ray tracing. We propose an algorithm for navigating a robot through a 3D environment represented as a NeRF using only an onboard RGB camera for localization. We assume the NeRF for the scene has been pre-trained offline, and the robot's objective is to navigate through unoccupied space in the NeRF to reach a goal pose. We introduce a trajectory optimization algorithm that avoids collisions with high-density regions in the NeRF based on a discrete-time version of differential flatness that is amenable to constraining the robot's full pose and control inputs. We also introduce an optimization-based filtering method to estimate 6DoF pose and velocities for the robot in the NeRF given only an onboard RGB camera. We combine the trajectory planner with the pose filter in an online replanning loop to give a vision-based robot navigation pipeline. We present simulation results with a quadrotor robot navigating through a jungle gym environment, the inside of a church, and Stonehenge using only an RGB camera. We also demonstrate an omnidirectional ground robot navigating through the church, requiring it to reorient to fit through a narrow gap.
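The core planning idea in the abstract — treating the NeRF's volumetric density as a collision penalty and optimizing a trajectory to avoid high-density regions — can be illustrated with a minimal sketch. This is not the authors' implementation: the `nerf_density` field below is a stand-in Gaussian obstacle rather than a trained network, and the optimizer is plain numerical gradient descent on waypoints rather than the paper's differential-flatness-based method.

```python
import numpy as np

def nerf_density(p):
    # Stand-in for a trained NeRF's density output: a single
    # Gaussian "obstacle" centered slightly off the straight-line path.
    c = np.array([0.0, 0.1])
    return np.exp(-np.sum((p - c) ** 2, axis=-1) / 0.1)

def collision_cost(traj):
    # Sum of densities sampled at the waypoints: large when the
    # trajectory passes through occupied (high-density) space.
    return np.sum(nerf_density(traj))

def smoothness_cost(traj):
    # Penalize large steps between consecutive waypoints.
    return np.sum(np.diff(traj, axis=0) ** 2)

def optimize(start, goal, n=20, iters=400, lr=0.01, w=2.0):
    # Initialize as a straight line, then nudge the interior
    # waypoints down a central-difference gradient of the total cost.
    traj = np.linspace(start, goal, n)
    eps = 1e-4
    for _ in range(iters):
        grad = np.zeros_like(traj)
        for i in range(1, n - 1):          # endpoints stay fixed
            for d in range(traj.shape[1]):
                t = traj.copy()
                t[i, d] += eps
                c_plus = w * collision_cost(t) + smoothness_cost(t)
                t[i, d] -= 2 * eps
                c_minus = w * collision_cost(t) + smoothness_cost(t)
                grad[i, d] = (c_plus - c_minus) / (2 * eps)
        traj -= lr * grad
    return traj

start, goal = np.array([-1.0, 0.0]), np.array([1.0, 0.0])
traj = optimize(start, goal)
# The optimized path bows away from the obstacle; the midpoint
# waypoint moves off the straight line between start and goal.
print(traj[len(traj) // 2])
```

In the paper this density penalty is combined with the robot's dynamics via differential flatness, and the whole loop is closed by re-estimating the pose from onboard RGB images against the NeRF before each replan; the sketch above only captures the geometric collision-cost intuition.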
Pages: 4606–4613
Page count: 8