Evolutionary End-to-End Autonomous Driving Model With Continuous-Time Neural Networks

Cited: 0
Authors
Du, Jiatong [1 ]
Bai, Yulong [1 ]
Li, Ye [1 ]
Geng, Jiaheng [1 ]
Huang, Yanjun [1 ,2 ]
Chen, Hong [3 ]
Affiliations
[1] Tongji Univ, Sch Automot Studies, Shanghai 201804, Peoples R China
[2] Frontiers Sci Ctr Intelligent Autonomous Syst, Shanghai 200120, Peoples R China
[3] Tongji Univ, Clean Energy Automot Engn Ctr, Shanghai 201804, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Brain modeling; Biological neural networks; Data models; Cameras; Training; Task analysis; Mathematical models; Continuous-time neural networks; end-to-end autonomous driving (AD); evolutionary method; generative model;
DOI
10.1109/TMECH.2024.3402126
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
The end-to-end paradigm has attracted considerable attention in autonomous driving due to its promising performance. However, prevailing end-to-end approaches rely predominantly on one-shot training via imitation learning, yielding models that lack the capability to evolve and struggle with long-tail scenarios. Handling such scenarios requires an end-to-end model to combine generalizable environmental representations with robust control policies. This paper therefore proposes GPCT, an end-to-end autonomous driving model built from a Generative Perception network and a Continuous-Time brain neural network, trained with a Policy-Reward-Data-Aggregation (PRDA) mechanism. Specifically, the generative perception network extracts perceptual features from monocular camera inputs; a distribution is fitted over these features and sampled to obtain environmental dynamics information. The sequential dynamics information is then fed into the continuous-time brain neural network, which outputs control commands. The trained model is deployed in on-policy scenarios, where the PRDA mechanism collects data for further training and evolution. Data are first collected in the CARLA simulator for initial training, after which multiple PRDA rounds alternate data collection and retraining to drive model evolution. Performance improves by 63.85% after five evolution rounds. In transfer experiments, the proposed algorithm achieves a route completion rate close to 100% and maintains a driving score of around 60%, surpassing systems equipped with multiple cameras and LiDAR. Under heavy fog, the route completion rate remains at 85%, demonstrating the model's generalizability and robustness.
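To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of a GPCT-style forward pass: a generative perception encoder fits a Gaussian over a latent environmental-dynamics vector and samples from it, and a liquid-time-constant-style continuous-time cell integrates the sampled sequence into control commands. All class names, layer sizes, the single-step Euler integration, and the VAE-style sampling are illustrative assumptions, not the authors' implementation; the PRDA loop (on-policy CARLA rollouts, reward-based filtering, data aggregation, retraining) is not reproduced here.

```python
import torch
import torch.nn as nn


class GenerativePerception(nn.Module):
    """Encodes a monocular camera frame, fits a Gaussian over a latent
    'environmental dynamics' vector, and draws a sample from it
    (a VAE-style reparameterization is assumed here)."""

    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.mu = nn.Linear(64, latent_dim)
        self.log_var = nn.Linear(64, latent_dim)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        h = self.encoder(img)
        mu, log_var = self.mu(h), self.log_var(h)
        # Sample z ~ N(mu, sigma^2) via the reparameterization trick.
        return mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)


class ContinuousTimeCell(nn.Module):
    """Liquid-time-constant-style recurrent cell: the hidden state obeys
    dh/dt = -h / tau + tanh(W [x, h]), integrated with one Euler step."""

    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.f = nn.Linear(in_dim + hidden_dim, hidden_dim)
        self.log_tau = nn.Parameter(torch.zeros(hidden_dim))  # learnable time constants

    def forward(self, x: torch.Tensor, h: torch.Tensor, dt: float = 0.05) -> torch.Tensor:
        tau = torch.exp(self.log_tau)
        dh = -h / tau + torch.tanh(self.f(torch.cat([x, h], dim=-1)))
        return h + dt * dh  # explicit Euler update of the hidden state


class GPCTPolicy(nn.Module):
    """Camera sequence in, control command out (sizes are assumptions)."""

    def __init__(self, latent_dim: int = 64, hidden_dim: int = 32, n_controls: int = 2):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.perception = GenerativePerception(latent_dim)
        self.cell = ContinuousTimeCell(latent_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, n_controls)  # e.g. steering, throttle

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (T, B, 3, H, W) sequence of monocular camera images.
        h = frames.new_zeros(frames.shape[1], self.hidden_dim)
        for img in frames:               # feed sampled dynamics step by step
            z = self.perception(img)
            h = self.cell(z, h)
        return torch.tanh(self.head(h))  # bounded control outputs


if __name__ == "__main__":
    policy = GPCTPolicy()
    controls = policy(torch.randn(8, 2, 3, 96, 96))  # 8-frame clip, batch of 2
    print(controls.shape)                            # torch.Size([2, 2])
```

The learnable per-neuron time constants (`log_tau`) are the continuous-time ingredient: each hidden unit relaxes on its own timescale, which is what distinguishes such cells from a plain discrete-time RNN and is commonly credited with their robustness on noisy, temporally irregular inputs.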
Pages: 2983-2990
Page count: 8
Related Papers (50 records)
• [21] Hubschneider, Christian; Bauer, Andre; Doll, Jens; Weber, Michael; Klemm, Sebastian; Kuhnt, Florian; Zoellner, J. Marius. Integrating End-to-End Learned Steering into Probabilistic Autonomous Driving. 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), 2017.
• [22] Wu, Tianze; Liu, Weiyi; Jin, Yongwei. An End-to-End Solution to Autonomous Driving Based on Xilinx FPGA. 2019 International Conference on Field-Programmable Technology (ICFPT), 2019: 427-430.
• [23] Anzalone, Luca; Barra, Paola; Barra, Silvio; Castiglione, Aniello; Nappi, Michele. An End-to-End Curriculum Learning Approach for Autonomous Driving Scenarios. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(10): 19817-19826.
• [24] Lee, Myoung-jae; Ha, Young-guk. Autonomous Driving Control Using End-to-End Deep Learning. 2020 IEEE International Conference on Big Data and Smart Computing (BigComp), 2020: 470-473.
• [25] Cultrera, Luca; Seidenari, Lorenzo; Becattini, Federico; Pala, Pietro; Del Bimbo, Alberto. Explaining Autonomous Driving by Learning End-to-End Visual Attention. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2020: 1389-1398.
• [26] Wu, Tianhao; Luo, Ao; Huang, Rui; Cheng, Hong; Zhao, Yang. End-to-End Driving Model for Steering Control of Autonomous Vehicles with Future Spatiotemporal Features. 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019: 950-955.
• [27] Zhao, Xiangmo; Qi, Mingyuan; Liu, Zhanwen; Fan, Songhua; Li, Chao; Dong, Ming. End-to-End Autonomous Driving Decision Model Joined by Attention Mechanism and Spatiotemporal Features. IET Intelligent Transport Systems, 2021, 15(9): 1119-1130.
• [28] Ma, Jialiang; Li, Li; Xu, Chengzhong. AutoRS: Environment-Dependent Real-Time Scheduling for End-to-End Autonomous Driving. IEEE Transactions on Parallel and Distributed Systems, 2023, 34(12): 3238-3252.
• [29] Jia, Xiaosong; Wu, Penghao; Chen, Li; Xie, Jiangwei; He, Conghui; Yan, Junchi; Li, Hongyang. Think Twice before Driving: Towards Scalable Decoders for End-to-End Autonomous Driving. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023: 21983-21994.
• [30] Navarro, Pedro J.; Miller, Leanne; Rosique, Francisca; Fernandez-Isla, Carlos; Gila-Navarro, Alberto. End-to-End Deep Neural Network Architectures for Speed and Steering Wheel Angle Prediction in Autonomous Driving. Electronics, 2021, 10(11).