Towards robust car-following based on deep reinforcement learning

Cited by: 2
Authors
Hart, Fabian [1]
Okhrin, Ostap [1,2]
Treiber, Martin [1]
Affiliations
[1] Tech Univ Dresden, D-01062 Dresden, Germany
[2] Ctr Scalable Data Analyt & Artificial Intelligence, Leipzig, Germany
Keywords
Reinforcement learning; Car-following model; Generalization capabilities; String stability; Validation; Safety; Model
DOI
10.1016/j.trc.2024.104486
Chinese Library Classification
U [Transportation]
Discipline Classification Code
08; 0823
Abstract
One of the biggest challenges in the development of learning-driven automated driving technologies remains the handling of uncommon, rare events that may not have been encountered in training. Especially when training a model with real driving data, unusual situations, such as emergency brakings, may be underrepresented, resulting in a model that lacks robustness in rare events. This study focuses on car-following based on reinforcement learning and demonstrates that existing approaches, trained with real driving data, fail to handle safety-critical situations. Since collecting data representing all kinds of possible car-following events, including safety-critical situations, is challenging, we propose a training environment that harnesses stochastic processes to generate diverse and challenging scenarios. Our experiments show that training with real data can lead to models that collide in safety-critical situations, whereas the proposed model exhibits excellent performance and remains accident-free, comfortable, and string-stable even in extreme scenarios, such as full braking by the leading vehicle. Its robustness is demonstrated by simulating car-following scenarios for various reward-function parametrizations and a diverse range of artificial and real leader data that were not included in training and were qualitatively different from the learning data. We further show that conventional reward designs can encourage aggressive behavior when approaching other vehicles. Additionally, we compared the proposed model with classical car-following models and found it to achieve equal or superior results.
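The abstract does not specify how the stochastic training environment generates leader behavior; a minimal illustrative sketch of the general idea (all function names and parameter values below are hypothetical, not taken from the paper) is a mean-reverting random speed profile for the leading vehicle, interrupted by rare full-braking events so that safety-critical situations are represented during training:

```python
import numpy as np

def generate_leader_speeds(T=200.0, dt=0.1, v_mean=15.0, theta=0.05,
                           sigma=1.0, p_brake=0.001, b_max=8.0, seed=0):
    """Hypothetical leader-speed generator: an Ornstein-Uhlenbeck-like
    process fluctuating around v_mean (m/s), with rare full-braking
    events (deceleration b_max m/s^2 down to standstill)."""
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    v = np.empty(n)
    v[0] = v_mean
    braking = 0  # remaining time steps of an active full-braking event
    for k in range(1, n):
        if braking == 0 and rng.random() < p_brake:
            # trigger a full-braking event long enough to reach a stop
            braking = int(v[k - 1] / (b_max * dt)) + 1
        if braking > 0:
            v[k] = max(v[k - 1] - b_max * dt, 0.0)
            braking -= 1
        else:
            # mean-reverting stochastic update, clipped at zero speed
            dv = theta * (v_mean - v[k - 1]) * dt \
                 + sigma * np.sqrt(dt) * rng.normal()
            v[k] = max(v[k - 1] + dv, 0.0)
    return v
```

Varying the seed and parameters yields an unbounded supply of qualitatively diverse leader trajectories, including emergency stops that are typically underrepresented in recorded driving data.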
Pages: 19
Related papers
50 records in total
  • [1] Driver Car-Following Model Based on Deep Reinforcement Learning
    Guo, Jinghua
    Li, Wenchang
    Luo, Yugong
    Chen, Tao
    Li, Keqiang
    [J]. Qiche Gongcheng/Automotive Engineering, 2021, 43(4): 571-579
  • [2] A Car-following Control Algorithm Based on Deep Reinforcement Learning
    Zhu, Bing
    Jiang, Yuan-De
    Zhao, Jian
    Chen, Hong
    Deng, Wei-Wen
    [J]. Zhongguo Gonglu Xuebao/China Journal of Highway and Transport, 2019, 32(6): 53-60
  • [3] Deep Reinforcement Learning Car-Following Control Based on Multivehicle Motion Prediction
    Wang, Tao
    Qu, Dayi
    Wang, Kedong
    Dai, Shouchen
    [J]. ELECTRONICS, 2024, 13(6)
  • [4] Proactive Car-Following Using Deep-Reinforcement Learning
    Yen, Yi-Tung
    Chou, Jyun-Jhe
    Shih, Chi-Sheng
    Chen, Chih-Wei
    Tsung, Pei-Kuei
    [J]. 2020 IEEE 23RD INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2020
  • [5] Dynamic Car-following Model Calibration with Deep Reinforcement Learning
    Naing, Htet
    Cai, Wentong
    Wu, Tiantian
    Yu, Liang
    [J]. 2022 IEEE 25TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2022: 959-966
  • [6] Improved deep reinforcement learning for car-following decision-making
    Yang, Xiaoxue
    Zou, Yajie
    Zhang, Hao
    Qu, Xiaobo
    Chen, Lei
    [J]. PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS, 2023, 624
  • [7] Modelling personalised car-following behaviour: a memory-based deep reinforcement learning approach
    Liao, Yaping
    Yu, Guizhen
    Chen, Peng
    Zhou, Bin
    Li, Han
    [J]. TRANSPORTMETRICA A-TRANSPORT SCIENCE, 2024, 20(1): 36-36
  • [8] Car-Following Behavior Modeling With Maximum Entropy Deep Inverse Reinforcement Learning
    Nan, Jiangfeng
    Deng, Weiwen
    Zhang, Ruzheng
    Zhao, Rui
    Wang, Ying
    Ding, Juan
    [J]. IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2024, 9(2): 3998-4010
  • [9] Human-like autonomous car-following model with deep reinforcement learning
    Zhu, Meixin
    Wang, Xuesong
    Wang, Yinhai
    [J]. TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES, 2018, 97: 348-368
  • [10] Deep Reinforcement Learning Car-Following Model Considering Longitudinal and Lateral Control
    Qin, Pinpin
    Tan, Hongyun
    Li, Hao
    Wen, Xuguang
    [J]. SUSTAINABILITY, 2022, 14(24)