Modeling Car-Following Behaviors and Driving Styles with Generative Adversarial Imitation Learning

Times Cited: 17
Authors
Zhou, Yang [1 ,2 ]
Fu, Rui [1 ]
Wang, Chang [1 ]
Zhang, Ruibin [1 ]
Affiliations
[1] Changan Univ, Sch Automobile, Xian 710064, Peoples R China
[2] Xian Aeronaut Univ, Sch Vehicle Engn, Xian 710077, Peoples R China
Funding
National Key Research and Development Program of China; National Natural Science Foundation of China;
Keywords
human-like car-following model; driving styles; generative adversarial imitation learning; gated recurrent units;
DOI
10.3390/s20185034
Chinese Library Classification (CLC)
O65 [Analytical Chemistry];
Discipline Classification Codes
070302; 081704;
Abstract
Building a human-like car-following model that accurately simulates drivers' car-following behaviors benefits the development of driving assistance systems and autonomous driving. Recent studies have shown the advantages of applying reinforcement learning methods to car-following modeling; however, a persistent difficulty is that the reward function must be specified manually. This paper proposes a novel car-following model based on generative adversarial imitation learning. The proposed model learns its driving strategy from drivers' demonstrations without a hand-crafted reward. Gated recurrent units were incorporated into the actor-critic network so that the model can exploit historical information. Drivers' car-following data collected by a test vehicle equipped with a millimeter-wave radar and a controller area network acquisition card were used. The participants were divided into two driving styles by K-means clustering, with time-headway and time-headway when braking as the input features. Under five-fold cross-validation, the proposed model reproduces drivers' car-following trajectories and driving styles more accurately than the intelligent driver model and a recurrent neural network-based model, achieving the lowest average spacing error (19.40%) and speed validation error (5.57%), as well as the lowest Kullback-Leibler divergences for the two indicators used for driving-style clustering.
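As a concrete illustration of the driving-style clustering step summarized in the abstract, the sketch below groups drivers into two styles with K-means on the two headway features the paper names (time-headway and time-headway when braking). This is not the authors' implementation; the feature values, sample sizes, and variable names are hypothetical, and scikit-learn is assumed purely for illustration.

# Minimal sketch of the driving-style clustering step (hypothetical data, not the authors' code).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical per-driver features: [mean time-headway (s), mean time-headway when braking (s)].
aggressive   = rng.normal(loc=[1.2, 1.0], scale=0.15, size=(20, 2))
conservative = rng.normal(loc=[2.2, 1.9], scale=0.20, size=(20, 2))
features = np.vstack([aggressive, conservative])

# Standardize so both headway features contribute equally to the Euclidean distance,
# then split the drivers into two style clusters (k = 2, as in the paper's setup).
scaled = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)

for style in (0, 1):
    mean_thw, mean_thw_brake = features[labels == style].mean(axis=0)
    print(f"style {style}: mean THW = {mean_thw:.2f} s, "
          f"mean THW when braking = {mean_thw_brake:.2f} s")

Standardizing before clustering is a common design choice when the two features have different scales; the resulting style labels are what a downstream comparison (e.g., the Kullback-Leibler divergence evaluation mentioned above) would be conditioned on.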
Pages: 1-20
Number of Pages: 19