Multimodal based attention-pyramid for predicting pedestrian trajectory

Cited by: 1
Authors
Yan, Xue [1 ]
Yang, Jinfu [1 ,2 ]
Liu, Yubin [1 ]
Song, Lin [1 ]
Affiliations
[1] Beijing Univ Technol, Fac Informat Technol, Beijing, Peoples R China
[2] Beijing Key Lab Computat Intelligence & Intelligent Syst, Beijing, Peoples R China
Funding
Beijing Natural Science Foundation; National Natural Science Foundation of China
Keywords
trajectory prediction; attention mechanism; recurrent neural network; multimodal fusion;
DOI
10.1117/1.JEI.31.5.053008
Chinese Library Classification
TM [electrical technology]; TN [electronic technology, communication technology]
Discipline codes
0808; 0809
Abstract
Pedestrian trajectory prediction aims to forecast a pedestrian's future trajectory from the historical one. Multimodal information in the historical trajectory, especially visual information and position coordinates, aids perception and localization, yet most current algorithms ignore its significance. We cast pedestrian trajectory prediction as a multimodal problem in which the historical trajectory is divided into image information and coordinate information. Specifically, we apply a fully connected long short-term memory (FC-LSTM) network and a convolutional LSTM (ConvLSTM) to process the position coordinates and visual information, respectively, and then fuse the two streams with a multimodal fusion module. An attention-pyramid social interaction module is then built on the fused information to adaptively reason about the complex spatial and social relations between the target pedestrian and its neighbors. The proposed approach is validated on several experimental tasks, on which it achieves better accuracy than competing methods. (c) 2022 SPIE and IS&T
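
The abstract outlines a two-stream encoder: an FC-LSTM over position coordinates, a ConvLSTM over visual information, and a fusion module joining the two. Below is a minimal PyTorch sketch of that pipeline. The module names, tensor sizes, the concatenation-then-linear fusion, and the ConvLSTM cell itself are illustrative assumptions, not the authors' implementation, and the attention-pyramid social interaction module is omitted.

```python
# Minimal sketch, assuming a concat-then-linear fusion of the two streams;
# not the paper's actual architecture or hyperparameters.
import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    """Single ConvLSTM cell: all four gates computed by one convolution."""

    def __init__(self, in_ch, hid_ch, kernel=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel, padding=kernel // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


class TwoStreamEncoder(nn.Module):
    """FC-LSTM over coordinates, ConvLSTM over visual crops, then fusion."""

    def __init__(self, hid=64, img_ch=3, img_size=32):
        super().__init__()
        self.coord_lstm = nn.LSTM(input_size=2, hidden_size=hid, batch_first=True)
        self.conv_lstm = ConvLSTMCell(img_ch, hid)
        self.pool = nn.AdaptiveAvgPool2d(1)  # collapse spatial map to a vector
        self.fuse = nn.Linear(2 * hid, hid)  # assumed fusion: concat + linear
        self.hid, self.img_size = hid, img_size

    def forward(self, coords, crops):
        # coords: (B, T, 2) positions; crops: (B, T, C, H, W) visual patches
        B, T = coords.shape[:2]
        _, (h_coord, _) = self.coord_lstm(coords)  # final hidden state (1, B, hid)
        h = coords.new_zeros(B, self.hid, self.img_size, self.img_size)
        c = torch.zeros_like(h)
        for t in range(T):  # unroll the ConvLSTM over the history
            h, c = self.conv_lstm(crops[:, t], (h, c))
        h_vis = self.pool(h).flatten(1)  # (B, hid)
        return self.fuse(torch.cat([h_coord[-1], h_vis], dim=1))


# Usage: 4 pedestrians, 8 observed steps, 32x32 RGB crops per step.
encoder = TwoStreamEncoder()
fused = encoder(torch.randn(4, 8, 2), torch.randn(4, 8, 3, 32, 32))
print(fused.shape)  # torch.Size([4, 64])
```

The fused vector would then feed an interaction module and a decoder that emits the future trajectory; the paper's attention pyramid performs that interaction step over neighbors.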
Pages: 15