Path planning via reinforcement learning with closed-loop motion control and field tests

Cited: 0
Authors
Feher, Arpad [1 ]
Domina, Adam [2 ]
Bardos, Adam [2 ]
Aradi, Szilard [1 ]
Becsi, Tamas [1 ]
Affiliations
[1] Budapest Univ Technol & Econ, Fac Transportat Engn & Vehicle Engn, Dept Control Transportat & Vehicle Syst, Muegyet Rkp 3, H-1111 Budapest, Hungary
[2] Budapest Univ Technol & Econ, Dept Automot Technol, Fac Transportat Engn & Vehicle Engn, Muegyetem Rkp 3, H-1111 Budapest, Hungary
Keywords
Vehicle dynamics; Advanced driver assistance systems; Machine learning; Reinforcement learning; Model predictive control; ACTIVE STEERING CONTROL; MODEL; SIMULATION; VEHICLES
DOI
10.1016/j.engappai.2024.109870
CLC Classification Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Performing evasive maneuvers with highly automated vehicles is a challenging task. The algorithm must fulfill safety constraints and complete the task while keeping the car in a controllable state. Furthermore, when all aspects of vehicle dynamics are considered, the path generation problem is numerically complex, so classical solutions can hardly meet real-time requirements. On the other hand, purely reinforcement-learning-based approaches could only handle this problem as a simple driving task and would not provide feasibility information over the whole task's horizon. Therefore, this paper presents a hierarchical method for obstacle avoidance of an automated vehicle to overcome this issue, where the geometric path generation is provided by a single-step continuous Reinforcement Learning agent, while a model-predictive controller handles lateral control to perform a double lane change maneuver. As the agent plays the optimization role in this architecture, it is trained in various scenarios to provide the necessary parameters for a geometric path generator in a one-step neural network output. During training, the controller that follows the track evaluates the feasibility of the generated path, and its performance metrics provide feedback to the agent so it can further improve its performance. The framework can train an agent for a given problem with various parameters. As a use case, a static obstacle avoidance maneuver is presented. The proposed framework was tested on an automotive proving ground with the geometric constraints of the ISO 3888-2 test. The results proved its real-time capability and its performance compared to human drivers' abilities.
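The abstract describes a one-step agent that outputs parameters for a geometric path generator, with a tracking controller's metrics serving as the feasibility feedback. The paper's exact parameterization and reward are not given here, so the following is only a minimal, hypothetical sketch: a sigmoid-shaped lane-change path parameterized by lateral offset and longitudinal length, plus a curvature-based lateral-acceleration check of the kind a training loop could use as a feasibility signal.

```python
import math

def lane_change_path(x, d_lat, l_long):
    """Sigmoid-shaped lateral displacement for a lane-change segment.
    d_lat: total lateral offset [m]; l_long: longitudinal length [m].
    (Hypothetical parameterization; the paper's geometry may differ.)"""
    s = 12.0 * (x / l_long - 0.5)   # scaled longitudinal coordinate
    return d_lat / (1.0 + math.exp(-s))

def max_lateral_accel(d_lat, l_long, v, n=200):
    """Approximate peak lateral acceleration a_y = v^2 * kappa along the path,
    using finite-difference curvature kappa = |y''| / (1 + y'^2)^(3/2)."""
    dx = l_long / n
    ys = [lane_change_path(i * dx, d_lat, l_long) for i in range(n + 1)]
    peak = 0.0
    for i in range(1, n):
        d1 = (ys[i + 1] - ys[i - 1]) / (2.0 * dx)          # first derivative
        d2 = (ys[i + 1] - 2.0 * ys[i] + ys[i - 1]) / dx**2  # second derivative
        kappa = abs(d2) / (1.0 + d1**2) ** 1.5
        peak = max(peak, v**2 * kappa)
    return peak

def feasible(d_lat, l_long, v, a_y_max=8.0):
    """One-step feasibility check: could the vehicle stay within a lateral
    acceleration budget (a_y_max, an assumed limit) on this candidate path?"""
    return max_lateral_accel(d_lat, l_long, v) <= a_y_max
```

In this sketch, a long lane change (e.g. 3.5 m offset over 60 m at 20 m/s) passes the check, while squeezing the same offset into 20 m fails it, illustrating how path-parameter choices map to a scalar feasibility signal an agent could be trained against.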
Pages: 13