Estimation of personal driving style via deep inverse reinforcement learning

Cited by: 2
Authors
Kishikawa, Daiko [1 ]
Arai, Sachiyo [1 ]
Institution
[1] Chiba Univ, Inage Ku, 1-33 Yayoi Cho, Chiba, Chiba 2638522, Japan
Keywords
Autonomous driving; Driving style; Deep inverse reinforcement learning
DOI
10.1007/s10015-021-00682-2
Chinese Library Classification (CLC)
TP24 [Robotics]
Discipline classification codes
080202; 1405
Abstract
When applying autonomous driving technology to human-crewed vehicles, it is essential to account for personal driving style, ensuring not only safety but also the driver's preferences. Reinforcement learning (RL) has attracted much attention in the field of autonomous driving; however, it requires a finely tuned reward function. For tasks in which a reward function is difficult to design, such as reproducing a personal driving style, inverse reinforcement learning (IRL) is a suitable alternative. Although IRL is commonly applied to estimating human and animal intentions, most previous methods incur high computational costs because they run RL in an inner loop. Logistic-regression-based IRL (LogReg-IRL) avoids this inner-loop RL, since it does not require RL for reward estimation; moreover, it can compute the driver's value function as well as the reward function. This paper therefore proposes a method that estimates a driver's latent driving preferences (called driving style) using the rewards and values obtained by applying LogReg-IRL. Several experimental results show that the proposed method reproduces the original trajectory and quantifies the driver's implicit preferences.
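To illustrate the general idea behind logistic-regression-based IRL as described in the abstract, the sketch below discriminates "expert" driver transitions from baseline transitions with a logistic regression whose logit is parameterized as r_theta(s) + gamma*V_w(s') - V_w(s), so fitting the classifier simultaneously recovers linear reward and value weights. The features (speed, absolute acceleration), the synthetic transition distributions, and all hyperparameters are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.95  # discount factor (assumed value, for illustration)

# Toy state features phi(s): [speed, |acceleration|] of a driver.
def phi(s):
    return np.array([s[0], abs(s[1])])

# Synthetic (s, s') transitions: the "expert" prefers steady high speed
# with small |acceleration|; the baseline policy wanders randomly.
def sample(n, expert):
    S, S2 = [], []
    for _ in range(n):
        if expert:
            s = np.array([rng.normal(1.0, 0.1), rng.normal(0.0, 0.05)])
        else:
            s = rng.normal(0.0, 1.0, size=2)
        s2 = s + rng.normal(0.0, 0.05, size=2)  # small state drift
        S.append(phi(s)); S2.append(phi(s2))
    return np.array(S), np.array(S2)

Se, Se2 = sample(500, expert=True)
Sb, Sb2 = sample(500, expert=False)

X  = np.vstack([Se, Sb])    # features of s
X2 = np.vstack([Se2, Sb2])  # features of the successor state s'
y  = np.concatenate([np.ones(500), np.zeros(500)])  # 1 = expert transition

# Logit modeled as r_theta(s) + gamma * V_w(s') - V_w(s) + c,
# with linear reward r_theta(s) = theta . phi(s) and value V_w(s) = w . phi(s).
theta = np.zeros(2); w = np.zeros(2); c = 0.0
lr = 0.1
for _ in range(2000):
    logit = X @ theta + gamma * (X2 @ w) - X @ w + c
    p = 1.0 / (1.0 + np.exp(-logit))   # P(expert | transition)
    g = p - y                          # gradient of log-loss w.r.t. the logit
    theta -= lr * (X.T @ g) / len(y)
    w     -= lr * ((gamma * X2 - X).T @ g) / len(y)
    c     -= lr * g.mean()

# Signs/magnitudes of the recovered reward weights hint at the driver's
# latent preferences (here: positive on speed, negative on |acceleration|).
print("reward weights:", theta)
```

Under these toy assumptions, no RL inner loop is needed: a single logistic-regression fit yields both the reward and value parameters, which is the computational advantage the abstract attributes to LogReg-IRL.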
Pages: 338-346 (9 pages)