RRT-based maximum entropy inverse reinforcement learning for robust and efficient driving behavior prediction

Cited by: 1
Authors:
Hosoma, Shinpei [1 ]
Sugasaki, Masato [1 ]
Arie, Hiroaki [2 ]
Shimosaka, Masamichi [1 ]
Affiliations:
[1] Tokyo Inst Technol, Dept Comp Sci, Tokyo, Japan
[2] DENSO Corp, Tokyo, Japan
DOI:
10.1109/IV51971.2022.9827039
CLC number: TP [Automation Technology, Computer Technology]
Discipline code: 0812
Abstract:
Advanced driver assistance systems have gained popularity as a safety technology that helps people avoid traffic accidents. To improve system reliability, driving behavior prediction has been extensively researched. Inverse reinforcement learning (IRL) is a prominent approach because it can learn complicated behaviors directly from expert demonstrations. Because driving data tend to contain multiple near-optimal behaviors reflecting individual drivers' preferences, i.e., the sub-optimality issue, maximum entropy IRL has attracted attention for its ability to account for sub-optimality. While accurate modeling and prediction can be expected, standard maximum entropy IRL must compute the partition function, which is computationally expensive; it is therefore not straightforward to apply the model in a high-dimensional space for detailed car modeling. Existing research attempts to reduce these costs by approximating maximum entropy IRL; however, an accurate approximation requires combining efficient path planning with proper parameter updating, and existing methods have not achieved both. In this study, we leverage a rapidly-exploring random tree (RRT) motion planner and propose a novel importance sampling scheme that computes an accurate approximation from the generated trees. This yields a stable and fast IRL model in a large, high-dimensional space. Experimental results in artificial environments show that our approach improves stability and is faster than existing IRL methods.
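The abstract's core computational idea, replacing the intractable partition-function expectation in maximum entropy IRL with a self-normalized importance-sampling estimate over planner-generated trajectories, can be sketched in a few lines. This is a generic illustration under assumed simplifications, not the paper's implementation: the linear reward `theta @ phi`, the function name `maxent_irl_step`, and the uniform proposal in the usage below are all assumptions, whereas the paper's actual sampler is built from RRT trees.

```python
import numpy as np

def maxent_irl_step(theta, demo_feats, sample_feats, proposal_logp, lr=0.1):
    """One importance-sampled gradient step of MaxEnt IRL (illustrative sketch).

    theta:         (d,)   reward weights for a linear reward r(tau) = theta @ phi(tau)
    demo_feats:    (N, d) trajectory features of expert demonstrations
    sample_feats:  (M, d) features of trajectories drawn from a proposal
                          distribution (e.g. a sampling-based motion planner)
    proposal_logp: (M,)   log-density of each sample under that proposal
    """
    # Unnormalized log importance weights: reward minus proposal log-density.
    logw = sample_feats @ theta - proposal_logp
    logw -= logw.max()            # shift for numerical stability before exp
    w = np.exp(logw)
    w /= w.sum()                  # self-normalized importance weights
    # MaxEnt IRL gradient: expert feature mean minus the importance-weighted
    # model feature expectation (the term that needs the partition function).
    grad = demo_feats.mean(axis=0) - w @ sample_feats
    return theta + lr * grad

# Toy usage with a uniform proposal (constant log-density): the learned
# weights should move toward features the "expert" trajectories prefer.
rng = np.random.default_rng(0)
demo = rng.normal(1.0, 0.2, size=(20, 2))      # expert features near 1.0
samples = rng.normal(0.0, 1.0, size=(200, 2))  # proposal samples around 0.0
theta = np.zeros(2)
for _ in range(50):
    theta = maxent_irl_step(theta, demo, samples, np.zeros(200))
```

The self-normalized weights make the estimate usable even when the proposal density is only known up to a constant, which is the situation with tree-based planners.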
Pages: 1353-1359 (7 pages)