High-Speed Autonomous Racing Using Trajectory-Aided Deep Reinforcement Learning

Cited by: 5
Authors
Evans, Benjamin David [1 ]
Engelbrecht, Herman Arnold [1 ]
Jordaan, Hendrik Willem [1 ]
Affiliations
[1] Stellenbosch Univ, Dept Elect & Elect Engn, ZA-7600 Stellenbosch, South Africa
Keywords
Deep learning methods; machine learning for robot control; reinforcement learning
DOI
10.1109/LRA.2023.3295252
Chinese Library Classification
TP24 [Robotics]
Subject Classification Codes
080202; 1405
Abstract
The classical method of autonomous racing uses real-time localisation to follow a precalculated optimal trajectory. In contrast, end-to-end deep reinforcement learning (DRL) can train agents to race using only raw LiDAR scans. While classical methods prioritise optimisation for high-performance racing, DRL approaches have focused on low-performance contexts with little consideration of the speed profile. This work addresses the problem of using end-to-end DRL agents for high-speed autonomous racing. We present trajectory-aided learning (TAL), which trains DRL agents for high-performance racing by incorporating the optimal trajectory (racing line) into the learning formulation. Our method is evaluated using the TD3 algorithm on four maps in the open-source F1Tenth simulator. The results demonstrate that our method achieves a significantly higher lap completion rate at high speeds compared to the baseline. This is because TAL trains the agent to select a feasible speed profile, slowing down in the corners while roughly tracking the optimal trajectory.
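The core idea summarised in the abstract — shaping the learning signal with the precomputed racing line — can be sketched as a reward term that penalises deviation of the agent's chosen action from the action implied by the optimal trajectory. This is a minimal illustrative sketch, not the paper's actual formulation; the function name, argument names, and default bounds are all assumptions for illustration.

```python
def tal_reward(agent_speed, agent_steer, traj_speed, traj_steer,
               max_speed=8.0, max_steer=0.4, base_reward=1.0):
    """Hypothetical trajectory-aided reward sketch.

    Penalises the normalised difference between the agent's action
    (speed, steering) and the action that would track the precomputed
    racing line at the nearest trajectory point. The reward peaks when
    the agent matches the racing-line action, encouraging it to slow
    for corners where the optimal speed profile is low.
    """
    speed_err = abs(agent_speed - traj_speed) / max_speed
    steer_err = abs(agent_steer - traj_steer) / max_steer
    return base_reward - speed_err - steer_err
```

Matching the racing-line action exactly yields the full base reward, while deviating in either speed or steering reduces it proportionally; this couples the agent's speed profile to the trajectory without requiring it to track the line exactly.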
Pages: 5353-5359
Page count: 7
Related Papers
50 records in total
  • [1] Safe reinforcement learning for high-speed autonomous racing
    Evans B.D.
    Jordaan H.W.
    Engelbrecht H.A.
    [J]. Cognitive Robotics, 2023, 3 : 107 - 126
  • [2] High-Speed Autonomous Drifting With Deep Reinforcement Learning
    Cai, Peide
    Mei, Xiaodong
    Tai, Lei
    Sun, Yuxiang
    Liu, Ming
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2020, 5 (02): : 1247 - 1254
  • [3] Two-Stage Safe Reinforcement Learning for High-Speed Autonomous Racing
    Niu, Jingyu
    Hu, Yu
    Jin, Beibei
    Han, Yinhe
    Li, Xiaowei
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2020, : 3934 - 3941
  • [4] High-Speed Racing Reinforcement Learning Network: Learning the Environment Using Scene Graphs
    Shi, Jingjing
    Li, Ruiqin
    Yu, Daguo
    [J]. IEEE ACCESS, 2024, 12 : 116771 - 116785
  • [5] High-Speed Collision Avoidance using Deep Reinforcement Learning and Domain Randomization for Autonomous Vehicles
    Kontes, Georgios D.
    Scherer, Daniel D.
    Nisslbeck, Tim
    Fischer, Janina
    Mutschler, Christopher
    [J]. 2020 IEEE 23RD INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2020,
  • [6] Autonomous Drone Racing with Deep Reinforcement Learning
    Song, Yunlong
    Steinweg, Mats
    Kaufmann, Elia
    Scaramuzza, Davide
    [J]. 2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021, : 1205 - 1212
  • [7] Autonomous Car Racing in Simulation Environment Using Deep Reinforcement Learning
    Guckiran, Kivanc
    Bolat, Bulent
    [J]. 2019 INNOVATIONS IN INTELLIGENT SYSTEMS AND APPLICATIONS CONFERENCE (ASYU), 2019, : 329 - 334
  • [8] Comparing deep reinforcement learning architectures for autonomous racing
    Evans, Benjamin David
    Jordaan, Hendrik Willem
    Engelbrecht, Herman Arnold
    [J]. MACHINE LEARNING WITH APPLICATIONS, 2023, 14
  • [9] RACECAR - The Dataset for High-Speed Autonomous Racing
    Kulkarni, Amar
    Chrosniak, John
    Ducote, Emory
    Sauerbeck, Florian
    Saba, Andrew
    Chirimar, Utkarsh
    Link, John
    Cellina, Marcello
    Behl, Madhur
    [J]. 2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2023, : 11458 - 11463
  • [10] Deep Reinforcement Learning for Interference Suppression in RIS-Aided High-Speed Railway Networks
    Xu, Jianpeng
    Ai, Bo
    Quek, Tony Q. S.
    Liu, Yupei
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS WORKSHOPS (ICC WORKSHOPS), 2022, : 337 - 342