Learning Effectively from Intervention for Visual-based Autonomous Driving

Cited by: 0
Authors
Deng, Yunfu [1 ,2 ]
Xu, Kun [1 ,2 ]
Hu, Yue [3 ,4 ]
Cui, Yunduan [1 ,2 ]
Xiang, Gengzhao [1 ,2 ]
Pan, Zhongming [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen 518055, Peoples R China
[2] Shenzhen Inst Artificial Intelligence & Robot Soc, SIAT Branch, Shenzhen 518055, Peoples R China
[3] Geely Res Inst, Zhejiang Geely Holding Grp, Ningbo 315336, Peoples R China
[4] Tsinghua Univ, Dept Automat, Beijing 100084, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation
Keywords
(none listed)
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Imitation learning (IL) approaches such as behavioral cloning have successfully learned simple visual navigation policies from large amounts of expert driving data. However, scaling up to real driving scenarios remains challenging for IL approaches because they rely heavily on expert demonstrations, which require labeling every state the learner visits and are therefore impractical to collect at scale. Moreover, the expert demonstrations place an upper bound on the learner's performance. This work proposes a method, inspired by human apprenticeship, that accelerates learning for end-to-end vision-based autonomous urban driving. We employ a hierarchical structure for visual navigation in which a high-level agent is trained on ground-truth environment data, and the resulting policy is then executed to train a purely vision-based low-level agent. In addition to the labeled demonstrations, the expert intervenes during training of the low-level agent, providing efficient feedback that interactively accelerates the training process. Such interventions supply critical knowledge that can be learned effectively for handling complex, challenging scenarios. We evaluate the method on the original CARLA benchmark and the more difficult NoCrash benchmark. Compared with state-of-the-art methods, the proposed method achieves similarly good results while requiring less data and learning faster, effectively improving sample efficiency.
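The abstract sketches an intervention-driven imitation loop: the learner drives, the expert takes over in states where the learner fails, and those corrective takeovers become especially informative training data. Below is a minimal Python sketch of that general idea in the style of DAgger/HG-DAgger; the toy lane-keeping dynamics, the linear policy, and the intervention threshold and sample weight are illustrative assumptions, not the paper's implementation.

"""Minimal sketch of intervention-based imitation learning (DAgger-style).

This is NOT the authors' implementation: the toy 1-D lane-keeping
environment, the linear policy, and the intervention threshold are
all illustrative assumptions.
"""
import numpy as np

rng = np.random.default_rng(0)

def expert_action(state):
    # Hypothetical expert: steer proportionally back toward the lane center.
    return -0.5 * state

def step(state, action):
    # Toy dynamics: lateral offset drifts with the applied action plus noise.
    return state + action + rng.normal(scale=0.05)

class LinearPolicy:
    def __init__(self):
        self.w = 0.0

    def act(self, state):
        return self.w * state

    def fit(self, states, actions, weights):
        # Weighted least squares on the aggregated dataset.
        s, a, c = map(np.asarray, (states, actions, weights))
        self.w = np.sum(c * s * a) / (np.sum(c * s * s) + 1e-8)

policy = LinearPolicy()
states, actions, weights = [], [], []
INTERVENE_AT = 0.8          # expert takes over beyond this offset (assumed)
INTERVENTION_WEIGHT = 5.0   # intervention samples count more (assumed)

for episode in range(20):
    state = rng.normal(scale=0.3)
    for t in range(50):
        if abs(state) > INTERVENE_AT:
            # Expert intervenes: the corrective takeover is recorded with
            # extra weight, concentrating learning on failure states.
            action = expert_action(state)
            weight = INTERVENTION_WEIGHT
        else:
            # Learner drives; the expert still labels the visited state.
            action = policy.act(state)
            weight = 1.0
        states.append(state)
        actions.append(expert_action(state))  # expert label for this state
        weights.append(weight)
        state = step(state, action)
    policy.fit(states, actions, weights)  # retrain on aggregated data

print(f"learned gain w = {policy.w:.3f} (expert gain is -0.5)")

Up-weighting the intervention samples concentrates the fit on exactly the states where the learner's policy failed, which is one plausible mechanism behind the sample-efficiency gains the abstract claims.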
Pages: 4290-4295
Page count: 6