Comparing deep reinforcement learning architectures for autonomous racing

Cited by: 1
Authors
Evans, Benjamin David [1]
Jordaan, Hendrik Willem [1]
Engelbrecht, Herman Arnold [1]
Affiliations
[1] Department of Electrical & Electronic Engineering, Stellenbosch University, Banghoek Rd, Stellenbosch, South Africa
Source
Machine Learning with Applications
Keywords
Deep reinforcement learning; End-to-end driving; Autonomous racing; Trajectory planning
DOI
10.1016/j.mlwa.2023.100496
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In classical autonomous racing, a perception, planning, and control pipeline is employed to navigate vehicles around a track as quickly as possible. In contrast, neural network controllers have been used to replace either part of or the entire pipeline. This paper compares three deep learning architectures for F1Tenth autonomous racing: full planning, which replaces the global and local planner; trajectory tracking, which replaces the local planner; and end-to-end, which replaces the entire pipeline. The evaluation contrasts two reward signals, compares the DDPG, TD3 and SAC algorithms, and investigates how well the learned policies generalise to different test maps. Training the agents in simulation shows that the full planning agent has the most robust training and testing performance. The trajectory tracking agents achieve fast lap times on the training map but low completion rates on different test maps. Transferring the trained agents to a physical F1Tenth car reveals that the trajectory tracking and full planning agents transfer poorly, displaying rapid side-to-side swerving (slaloming). In contrast, the end-to-end agent, the worst performer in simulation, transfers the best to the physical vehicle and can complete the test track at a maximum speed of 5 m/s. These results show that planning methods outperform end-to-end approaches in simulation, but end-to-end approaches transfer better to physical robots.
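The three architectures differ mainly in what the learned policy consumes and produces. The following minimal Python sketch illustrates that distinction under stated assumptions: the observation contents (LiDAR beams, look-ahead points in the vehicle frame), gains, and speed values are chosen for demonstration and are not taken from the paper.

# Illustrative sketch only: observation contents, gains and speeds are assumptions
# for demonstration, not values from the paper.
import numpy as np

def end_to_end_policy(lidar_scan):
    # End-to-end: raw LiDAR beams in, [steering, speed] out (replaces the entire pipeline).
    half = len(lidar_scan) // 2
    steering = 0.4 * np.tanh(lidar_scan[half:].mean() - lidar_scan[:half].mean())
    return np.array([steering, 3.0])

def trajectory_tracking_policy(lidar_scan, reference_points):
    # Trajectory tracking: LiDAR plus upcoming points of a precomputed racing line in,
    # [steering, speed] out (replaces the local planner).
    target_x, target_y = reference_points[0]
    steering = 0.4 * np.tanh(np.arctan2(target_y, target_x))
    return np.array([steering, 4.0])

def full_planning_policy(lidar_scan, centre_line_points):
    # Full planning: LiDAR plus centre-line context in, [steering, speed] out
    # (replaces the global and local planner).
    target_x, target_y = centre_line_points[0]
    steering = 0.4 * np.tanh(np.arctan2(target_y, target_x))
    return np.array([steering, 3.5])

# Dummy call with a 20-beam scan and two look-ahead points in the vehicle frame.
scan = np.ones(20)
points = np.array([[1.0, 0.2], [2.0, 0.3]])
print(end_to_end_policy(scan))
print(trajectory_tracking_policy(scan, points))
print(full_planning_policy(scan, points))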
Pages: 14