Learning the Car-following Behavior of Drivers Using Maximum Entropy Deep Inverse Reinforcement Learning

Cited by: 20
Authors:
Zhou, Yang [1 ,2 ]
Fu, Rui [1 ]
Wang, Chang [1 ]
Affiliations:
[1] Changan Univ, Sch Automobile, Middle Sect Nan Erhuan Rd, Xian 710064, Peoples R China
[2] Xian Aeronaut Univ, Sch Vehicle Engn, 259,Xi Erhuan Rd, Xian 710077, Peoples R China
Funding:
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords:
AUTONOMOUS VEHICLES; MODEL; ALGORITHM;
DOI: 10.1155/2020/4752651
Chinese Library Classification (CLC): TU [Architecture Science];
Discipline code: 0813;
Abstract:
The present study proposes a framework for learning the car-following behavior of drivers based on maximum entropy deep inverse reinforcement learning. The framework learns the reward function, represented by a fully connected neural network, from driving data comprising the speed of the driver's vehicle, the distance to the leading vehicle, and the relative speed. Data from two field tests with 42 drivers were used. After the participants were clustered into aggressive and conservative groups, the car-following data were used to train the proposed model, a fully connected neural network model, and a recurrent neural network model. Under fivefold cross-validation, the proposed model achieved the lowest root mean squared percentage error and modified Hausdorff distance among the compared models, exhibiting superior ability to reproduce drivers' car-following behavior. Moreover, the proposed model captured the characteristics of the different driving styles in car-following scenarios: the learned rewards and strategies were consistent with the demonstrations of the two groups. Inverse reinforcement learning can thus serve as a new tool for explaining and modeling driving behavior, providing a reference for the development of human-like autonomous driving models.
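The core of maximum entropy deep IRL as described in the abstract — a neural-network reward trained so that the state visitations induced by the soft-optimal policy match those of the demonstrations — can be illustrated with a deliberately tiny sketch. Everything below (the 9-bin gap discretization, the feature choice, the demonstration distribution peaked at one gap bin, the one-hidden-layer network, and all hyperparameters) is a hypothetical toy setup for illustration, not the paper's data, architecture, or field-test pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy car-following MDP (assumption: gap to leader discretized into 9 bins) ---
S, A, GAMMA, HORIZON = 9, 3, 0.9, 20
moves = np.array([-1, 0, 1])                                   # close gap / hold / open gap
nxt = np.clip(np.arange(S)[:, None] + moves[None, :], 0, S - 1)  # deterministic next state, (S, A)

# State features: normalized gap and its square (hypothetical feature choice)
g = np.linspace(0, 1, S)
X = np.stack([g, g ** 2], axis=1)

# Demonstration visitation: "drivers" hover around gap bin 4 (synthetic stand-in for data)
mu_demo = np.full(S, 0.0125)
mu_demo[3:6] = [0.1, 0.8, 0.1]
mu_demo /= mu_demo.sum()

# One-hidden-layer reward network r_theta(s), trained by manual backprop
H = 16
W1, b1 = rng.normal(0, 0.5, (2, H)), np.zeros(H)
W2, b2 = rng.normal(0, 0.5, (H, 1)), np.zeros(1)

def forward():
    h = np.tanh(X @ W1 + b1)
    return h, (h @ W2 + b2).ravel()               # hidden activations, reward per state

def soft_policy(r):
    """Soft (maximum-entropy) value iteration, then a softmax policy over Q."""
    V = np.zeros(S)
    for _ in range(200):
        Q = r[:, None] + GAMMA * V[nxt]
        m = Q.max(axis=1, keepdims=True)
        V_new = (m + np.log(np.exp(Q - m).sum(axis=1, keepdims=True))).ravel()
        if np.abs(V_new - V).max() < 1e-6:
            V = V_new
            break
        V = V_new
    Q = r[:, None] + GAMMA * V[nxt]
    e = np.exp(Q - Q.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def expected_visitation(pi):
    """Forward pass: expected state visitation frequencies under pi."""
    d = np.full(S, 1.0 / S)                        # uniform initial state distribution
    total = d.copy()
    for _ in range(HORIZON - 1):
        d_next = np.zeros(S)
        for a in range(A):
            np.add.at(d_next, nxt[:, a], d * pi[:, a])
        d = d_next
        total += d
    return total / total.sum()

lr = 0.1
for it in range(300):
    h, r = forward()
    mu = expected_visitation(soft_policy(r))
    dr = (mu_demo - mu)[:, None]                   # MaxEnt IRL gradient w.r.t. r(s)
    # Backpropagate dr through the reward network and ascend the log-likelihood
    dW2, db2 = h.T @ dr, dr.sum(axis=0)
    dpre = (dr @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ dpre, dpre.sum(axis=0)
    W1 += lr * dW1; b1 += lr * db1; W2 += lr * dW2; b2 += lr * db2

_, r_learned = forward()
print("learned reward peaks at gap bin", int(np.argmax(r_learned)))
```

The key line is the gradient `mu_demo - mu`: in maximum entropy IRL, the derivative of the demonstration log-likelihood with respect to the reward at each state is the demonstrated visitation frequency minus the model's expected visitation, which is then backpropagated through the network exactly like a supervised loss. In the paper the state space is continuous (speed, distance, relative speed) and the network is deeper, but the training loop has this same structure.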
Pages: 13