Probabilistic multi-modal expected trajectory prediction based on LSTM for autonomous driving

Cited by: 6
Authors
Gao, Zhenhai [1 ]
Bao, Mingxi [1 ]
Gao, Fei [1 ,2 ]
Tang, Minghong [1 ]
Affiliations
[1] Jilin Univ, Sch Vehicle Engn, State Key Lab Automot Simulat & Control, Changchun, Peoples R China
[2] Jilin Univ, Sch Vehicle Engn, State Key Lab Automot Simulat & Control, 5988 Renmin Rd, Changchun 130022, Peoples R China
Funding
National Natural Science Foundation of China;
关键词
Trajectory prediction; behavioral intent recognition; LSTM; interactive behavior; model;
DOI
10.1177/09544070231167906
Chinese Library Classification
TH [Machinery and Instrument Industry];
Discipline classification code
0802;
Abstract
Autonomous vehicles (AVs) need to adequately predict the trajectory space of surrounding vehicles (SVs) in order to make reasonable decisions and improve driving safety. In this paper, we build a driving behavior intention recognition module and a traffic vehicle expected trajectory prediction module using deep learning. On the one hand, the intention recognition module estimates the probabilities of lane keeping, left lane changing, right lane changing, left acceleration lane changing, and right acceleration lane changing for the predicted vehicle (PV). On the other hand, the expected trajectory prediction module adopts an encoder-decoder architecture, in which the encoder encodes the historical environment information of the surrounding agents into a context vector, and the decoder, together with a mixture density network (MDN), combines the context vector with the recognized driving behavior intention to predict the probability distribution of future trajectories. Our model thus produces the multiple behaviors and trajectories that the PV may exhibit over the next 6 s. The proposed model is trained, validated, and tested on the HighD dataset. The experimental results show that the intention recognition module achieves high accuracy by fully considering interactive information. At the same time, the multi-modal probability distribution generated by the expected trajectory prediction model is more consistent with the real trajectories: it significantly improves trajectory prediction accuracy compared with other approaches and has clear advantages in long-term trajectory prediction.
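The paper's own implementation is not reproduced here. As a minimal NumPy sketch of the probabilistic output stage the abstract describes, assuming the standard Gaussian mixture-density formulation (the function names `mdn_expected_trajectory` and `mdn_nll`, the isotropic-variance simplification, and the shapes are illustrative assumptions, not the authors' code), the decoder's per-timestep MDN output can parameterize a 2-D position mixture whose component weights reflect the recognized intentions:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mdn_expected_trajectory(logits, means):
    """Expected (x, y) position per timestep of a Gaussian mixture.

    logits: (T, K)    unnormalized mixture weights per timestep
    means:  (T, K, 2) per-component (x, y) means
    Returns (T, 2): the weight-averaged expected positions.
    """
    w = softmax(logits)                       # (T, K) mixture weights
    return (w[..., None] * means).sum(axis=1)

def mdn_nll(logits, means, sigmas, target):
    """Negative log-likelihood of a ground-truth trajectory under an
    isotropic 2-D Gaussian mixture, summed over T timesteps.

    sigmas: (T, K) per-component std; target: (T, 2) true positions.
    """
    w = softmax(logits)
    d2 = ((target[:, None, :] - means) ** 2).sum(axis=-1)            # (T, K)
    log_comp = -d2 / (2 * sigmas**2) - 2 * np.log(sigmas) - np.log(2 * np.pi)
    log_mix = np.log((w * np.exp(log_comp)).sum(axis=-1) + 1e-12)
    return -log_mix.sum()
```

In training, minimizing `mdn_nll` pulls probability mass toward the observed trajectory while leaving the other mixture components free to cover alternative maneuvers, which is what makes the predicted distribution multi-modal.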
Pages: 2817-2828
Page count: 12