Learning Effectively from Intervention for Visual-based Autonomous Driving

Cited by: 0
Authors
Deng, Yunfu [1 ,2 ]
Xu, Kun [1 ,2 ]
Hu, Yue [3 ,4 ]
Cui, Yunduan [1 ,2 ]
Xiang, Gengzhao [1 ,2 ]
Pan, Zhongming [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen 518055, Peoples R China
[2] Shenzhen Inst Artificial Intelligence & Robot Soc, SIAT Branch, Shenzhen 518055, Peoples R China
[3] Geely Res Inst, Zhejiang Geely Holding Grp, Ningbo 315336, Peoples R China
[4] Tsinghua Univ, Dept Automat, Beijing 100084, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation
Keywords
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405
Abstract
Imitation learning (IL) approaches such as behavioral cloning have been used successfully to learn simple visual navigation policies from large amounts of expert driving data. However, scaling IL to real driving scenarios remains challenging because these approaches rely heavily on expert demonstrations, which require labeling every state the learner visits and are therefore impractical to collect at scale. Moreover, the expert demonstrations cap the learner's performance. Inspired by human apprenticeship, this work proposes a method that accelerates learning for end-to-end vision-based autonomous urban driving. We employ a hierarchical structure for visual navigation: a high-level agent is trained on ground-truth environment data, and the trained policy is then executed to train a purely vision-based low-level agent. In addition to labeled demonstrations, the expert intervenes during the training of the low-level agent, providing efficient feedback that interactively accelerates the training process. Such interventions supply critical knowledge for handling complex, challenging scenarios. We evaluate the method on the original CARLA benchmark and the more demanding NoCrash benchmark. Compared to state-of-the-art methods, the proposed method achieves comparably good results while requiring less data and learning faster, effectively improving sample efficiency.
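The abstract's core idea, collecting expert labels only on the states where the expert actually intervenes rather than labeling every visited state, can be illustrated with a minimal sketch. Everything below (the toy expert, the one-dimensional dynamics, the deviation threshold) is an illustrative assumption, not the paper's actual algorithm or CARLA setup:

```python
# Sketch of intervention-driven data collection (HG-DAgger style), assuming a
# toy 1-D steering task: state is lateral offset, action is a steering value.

def toy_expert(state):
    # Hypothetical expert policy: steer back toward the lane center (0.0).
    return -0.5 * state

def rollout(policy, start_state=1.0, steps=5):
    # Yield (state, learner_action) pairs along one trajectory
    # under deliberately simple additive dynamics.
    state = start_state
    for _ in range(steps):
        action = policy(state)
        yield state, action
        state = state + action

def collect_interventions(policy, expert, threshold=0.3):
    # Label a state with the expert's action only when the expert would
    # intervene, i.e. when the learner deviates beyond the threshold.
    dataset = []
    for state, action in rollout(policy):
        expert_action = expert(state)
        if abs(action - expert_action) > threshold:
            dataset.append((state, expert_action))
    return dataset

# A deliberately bad initial learner that always steers hard right:
bad_policy = lambda s: 0.8
labels = collect_interventions(bad_policy, toy_expert)
```

Only the intervention states end up in `labels`, which is the sample-efficiency point the abstract makes: a competent learner triggers few interventions, so labeling effort concentrates on the hard cases.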
Pages: 4290-4295
Page count: 6