Multiple Trajectory Prediction with Deep Temporal and Spatial Convolutional Neural Networks

Cited by: 22
Authors:
Strohbeck, Jan [1 ]
Belagiannis, Vasileios [1 ]
Mueller, Johannes [1 ]
Schreiber, Marcel [1 ]
Herrmann, Martin [1 ]
Wolf, Daniel [1 ]
Buchholz, Michael [1 ]
Affiliations:
[1] Ulm University, Institute of Measurement, Control and Microtechnology, D-89081 Ulm, Germany
Funding: EU Horizon 2020
DOI: 10.1109/IROS45743.2020.9341327
CLC Number: TP [Automation Technology; Computer Technology]
Discipline Code: 0812
Abstract:
Automated vehicles need to not only perceive their environment, but also predict the possible future behavior of all detected traffic participants in order to safely navigate in complex scenarios and avoid critical situations, ranging from merging on highways to crossing urban intersections. Due to the availability of datasets with large numbers of recorded trajectories of traffic participants, deep learning based approaches can be used to model the behavior of road users. This paper proposes a convolutional network that operates on rasterized actor-centric images which encode the static and dynamic actor environment. We predict multiple possible future trajectories for each traffic actor, which include position, velocity, acceleration, orientation, yaw rate and position uncertainty estimates. To make better use of the past movement of the actor, we propose to employ temporal convolutional networks (TCNs) and rely on uncertainties estimated from the previous object tracking stage. We evaluate our approach on the public "Argoverse Motion Forecasting" dataset, on which it won first prize at the Argoverse Motion Forecasting Challenge, as presented at the NeurIPS 2019 workshop on "Machine Learning for Autonomous Driving".
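The abstract only sketches the architecture at a high level. As a rough illustration of the TCN idea it mentions (not the authors' actual model: the layer widths, prediction horizon, number of modes, and input encoding below are assumptions, and the rasterized-image CNN branch is omitted), here is a minimal PyTorch sketch of a causal temporal convolutional encoder over past actor states, followed by a head that outputs several candidate trajectories with per-step position uncertainties and a probability per mode:

```python
# Minimal sketch of a TCN-based multi-trajectory predictor. Assumptions:
# layer widths, horizon, and number of modes are illustrative and are NOT
# taken from Strohbeck et al.; the map/rasterization branch is omitted.
import torch
import torch.nn as nn

class TCNBlock(nn.Module):
    """Causal dilated 1-D convolution block with a residual connection."""
    def __init__(self, channels: int, dilation: int, kernel_size: int = 3):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # left-pad to stay causal
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = nn.functional.pad(x, (self.pad, 0))   # pad only the past side
        return self.relu(self.conv(y)) + x        # residual connection

class MultiTrajectoryTCN(nn.Module):
    """Encodes past actor states with a TCN and predicts `num_modes`
    future trajectories, each with (x, y) means, log-variances, and a
    mode probability."""
    def __init__(self, state_dim: int = 6, hidden: int = 64,
                 horizon: int = 30, num_modes: int = 3):
        super().__init__()
        self.horizon, self.num_modes = horizon, num_modes
        self.inp = nn.Conv1d(state_dim, hidden, kernel_size=1)
        self.tcn = nn.Sequential(*[TCNBlock(hidden, d) for d in (1, 2, 4)])
        # Per mode and time step: x, y, log_var_x, log_var_y
        self.traj_head = nn.Linear(hidden, num_modes * horizon * 4)
        self.prob_head = nn.Linear(hidden, num_modes)

    def forward(self, past: torch.Tensor):
        # past: (batch, time, state_dim), e.g. position, velocity, yaw
        h = self.tcn(self.inp(past.transpose(1, 2)))  # (batch, hidden, time)
        h = h[:, :, -1]                               # last-step summary
        trajs = self.traj_head(h).view(-1, self.num_modes, self.horizon, 4)
        probs = self.prob_head(h).softmax(dim=-1)     # mode probabilities
        return trajs, probs

# Usage: 2 s of past states at 10 Hz for a batch of 8 actors.
model = MultiTrajectoryTCN()
trajs, probs = model(torch.randn(8, 20, 6))
print(trajs.shape, probs.shape)  # (8, 3, 30, 4) and (8, 3)
```

The dilated, causally padded convolutions give the encoder a receptive field over the whole observed history without recurrence, which is the property that motivates using TCNs in place of RNNs here; predicting log-variances alongside positions is one common way to realize the position uncertainty estimates the abstract describes.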
Pages: 1992-1998 (7 pages)
Related Papers (50 in total; first 10 listed):
• [1] Li, Yonggang; Ge, Rui; Ji, Yi; Gong, Shengrong; Liu, Chunping. Trajectory-Pooled Spatial-Temporal Architecture of Deep Convolutional Neural Networks for Video Event Detection. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2019, 29(09): 2683-2692.
• [2] Zamboni, Simone; Kefato, Zekarias Tilahun; Girdzijauskas, Sarunas; Noren, Christoffer; Dal Col, Laura. Pedestrian trajectory prediction with convolutional neural networks. PATTERN RECOGNITION, 2022, 121.
• [3] Chen, Wangxing; Sang, Haifeng; Wang, Jinyu; Zhao, Zishan. DSTCNN: Deformable spatial-temporal convolutional neural network for pedestrian trajectory prediction. INFORMATION SCIENCES, 2024, 666.
• [4] Deng, Shaojiang; Jia, Shuyuan; Chen, Jing. Exploring spatial-temporal relations via deep convolutional neural networks for traffic flow prediction with incomplete data. APPLIED SOFT COMPUTING, 2019, 78: 712-721.
• [5] Chen, A. C. H.; Yang, Y.-T. Spatial–temporal feature extraction based on convolutional neural networks for travel time prediction. APPLIED RESEARCH, 2023, 2(03).
• [6] Yu, Wentao; Li, Jing; Liu, Qinhuo; Zhao, Jing; Dong, Yadong; Wang, Cong; Lin, Shangrong; Zhu, Xinran; Zhang, Hu. Spatial-Temporal Prediction of Vegetation Index With Deep Recurrent Neural Networks. IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2022, 19.
• [7] Mukherjee, Saptarshi; Wang, Sen; Wallace, Andrew. Interacting Vehicle Trajectory Prediction with Convolutional Recurrent Neural Networks. 2020 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2020: 4336-4342.
• [8] Li, Hongbo; Ren, Yilong; Li, Kaixuan; Chao, Wenjie. Trajectory Prediction with Attention-Based Spatial-Temporal Graph Convolutional Networks for Autonomous Driving. APPLIED SCIENCES-BASEL, 2023, 13(23).
• [9] Liu, Tonglai; Luo, Ronghai; Xu, Longqin; Feng, Dachun; Cao, Liang; Liu, Shuangyin; Guo, Jianjun. Spatial Channel Attention for Deep Convolutional Neural Networks. MATHEMATICS, 2022, 10(10).
• [10] Ma, Xu; Guo, Jingda; Sansom, Andrew; McGuire, Mara; Kalaani, Andrew; Chen, Qi; Tang, Sihai; Yang, Qing; Fu, Song. Spatial Pyramid Attention for Deep Convolutional Neural Networks. IEEE TRANSACTIONS ON MULTIMEDIA, 2021, 23: 3048-3058.