SnapNav: Learning Mapless Visual Navigation with Sparse Directional Guidance and Visual Reference

Times Cited: 0
Authors
Xie, Linhai [1 ,2 ]
Markham, Andrew [1 ]
Trigoni, Niki [1 ]
Affiliations
[1] Univ Oxford, Dept Comp Sci, Oxford OX1 3QD, England
[2] Natl Univ Def Technol, Dept Mechatron Engn & Automat, Changsha 410073, Peoples R China
Funding
Engineering and Physical Sciences Research Council (EPSRC), UK
DOI
10.1109/icra40945.2020.9197523
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Learning-based visual navigation remains a challenging problem in robotics, with two overarching issues: how to transfer the learnt policy to unseen scenarios, and how to deploy the system on real robots. In this paper, we propose a deep-neural-network-based visual navigation system, SnapNav. Unlike map-based navigation or Visual-Teach-and-Repeat (VT&R), SnapNav only receives a few snapshots of the environment combined with directional guidance to allow it to execute the navigation task. Additionally, SnapNav can be easily deployed on real robots due to a two-level hierarchy: a high-level commander that provides directional commands and a low-level controller that provides real-time control and obstacle avoidance. This also allows us to effectively use simulated and real data to train the different layers of the hierarchy, facilitating robust control. Extensive experimental results show that SnapNav achieves a markedly higher degree of navigation autonomy than baseline models, enabling sparse, mapless navigation in previously unseen environments.
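To fix ideas, the minimal sketch below illustrates the kind of two-level hierarchy the abstract describes: a high-level commander matches the live view against the provided snapshots and emits a sparse directional command, and a low-level controller turns that command plus a range reading into velocities with reactive obstacle avoidance. All class names, thresholds, and the hand-coded matching/avoidance rules are illustrative assumptions; in SnapNav both levels are learnt networks, not the rules shown here.

```python
# Illustrative sketch only; not the authors' implementation.
from dataclasses import dataclass
from typing import List
import numpy as np


@dataclass
class Snapshot:
    image_feature: np.ndarray   # visual reference embedding for a junction
    direction: str              # sparse directional guidance, e.g. "left"


class HighLevelCommander:
    """Matches the live view against stored snapshots; when the next
    snapshot is recognised, emits its associated directional command."""

    def __init__(self, snapshots: List[Snapshot], threshold: float = 0.8):
        self.snapshots = snapshots
        self.threshold = threshold
        self.next_idx = 0

    def command(self, live_feature: np.ndarray) -> str:
        if self.next_idx >= len(self.snapshots):
            return "stop"
        snap = self.snapshots[self.next_idx]
        # Cosine similarity between live view and the next reference snapshot.
        similarity = float(
            live_feature @ snap.image_feature
            / (np.linalg.norm(live_feature) * np.linalg.norm(snap.image_feature) + 1e-8)
        )
        if similarity > self.threshold:      # reference reached: issue its direction
            self.next_idx += 1
            return snap.direction
        return "forward"                     # otherwise keep following the corridor


class LowLevelController:
    """Turns a directional command plus a depth scan into velocities,
    with a simple reactive obstacle-avoidance term (illustrative only)."""

    def act(self, command: str, depth_scan: np.ndarray) -> tuple:
        turn = {"left": 0.6, "right": -0.6, "forward": 0.0, "stop": 0.0}[command]
        min_range = float(depth_scan.min())
        linear = 0.0 if command == "stop" else min(0.5, 0.5 * min_range)
        if min_range < 0.4:                  # veer away from the nearest obstacle
            turn += 0.8 if depth_scan.argmin() > len(depth_scan) // 2 else -0.8
        return linear, turn
```

A caller would simply loop: encode the current image, ask the commander for a command, then pass it with the latest range scan to the controller to obtain linear and angular velocities.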
Pages: 1682-1688
Page count: 7
Related Papers
50 items in total
  • [21] Path-Following Navigation Network Using Sparse Visual Memory
    Yoo, Hwiyeon
    Kim, Nuri
    Park, Jeongho
    Oh, Songhwai
    2020 20TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS (ICCAS), 2020, : 883 - 886
  • [22] Building Maps for Autonomous Navigation Using Sparse Visual SLAM Features
    Ling, Yonggen
    Shen, Shaojie
    2017 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2017, : 1374 - 1381
  • [23] Visual Navigation Using Sparse Optical Flow and Time-to-Transit
    Boretti, Chiara
    Bich, Philippe
    Zhang, Yanyu
    Baillieul, John
    2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2022, 2022, : 9397 - 9403
  • [24] Mapless autonomous navigation for UGV in cluttered off-road environment with the guidance of wayshowers using deep reinforcement learning
    Li, Zhijian
    Li, Xu
    Hu, Jinchao
    Liu, Xixiang
    APPLIED INTELLIGENCE, 2025, 55 (03)
  • [25] Visual tracking based on online sparse feature learning
    Wang, Zelun
    Wang, Jinjun
    Zhang, Shun
    Gong, Yihong
    IMAGE AND VISION COMPUTING, 2015, 38 : 24 - 32
  • [26] Sparse representation and learning in visual recognition: Theory and applications
    Cheng, Hong
    Liu, Zicheng
    Yang, Lu
    Chen, Xuewen
    SIGNAL PROCESSING, 2013, 93 (06) : 1408 - 1425
  • [27] Sparse Similarity Matrix Learning for Visual Object Retrieval
    Yan, Zhicheng
    Yu, Yizhou
    2013 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2013,
  • [28] DICTIONARY LEARNING FOR A SPARSE APPEARANCE MODEL IN VISUAL TRACKING
    Rousseau, Sylvain
    Chainais, Pierre
    Garnier, Christelle
    2015 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2015, : 4506 - 4510
  • [29] Visual saliency object detection using sparse learning
    Nasiripour, Reza
    Farsi, Hassan
    Mohamadzadeh, Sajad
    IET IMAGE PROCESSING, 2019, 13 (13) : 2436 - 2447
  • [30] Visual learning given sparse data of unknown complexity
    Xiang, T
    Gong, S
    TENTH IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION, VOLS 1 AND 2, PROCEEDINGS, 2005, : 701 - 708