EigenTrajectory: Low-Rank Descriptors for Multi-Modal Trajectory Forecasting

Cited by: 8
Authors
Bae, Inhwan [1 ]
Oh, Jean [2 ]
Jeon, Hae-Gon [1 ]
Affiliations
[1] GIST AI Grad Sch, Gwangju, South Korea
[2] Carnegie Mellon Univ, Pittsburgh, PA USA
Funding
National Research Foundation of Singapore
DOI
10.1109/ICCV51070.2023.00919
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Capturing high-dimensional social interactions and feasible futures is essential for predicting trajectories. To address this complex nature, several attempts have been devoted to reducing the dimensionality of the output variables via parametric curve fitting such as the Bézier curve and B-spline function. However, these functions, which originate in the computer graphics field, are not suitable to account for socially acceptable human dynamics. In this paper, we present EigenTrajectory (ET), a trajectory prediction approach that uses a novel trajectory descriptor to form a compact space, known here as ET space, in place of Euclidean space, for representing pedestrian movements. We first reduce the complexity of the trajectory descriptor via a low-rank approximation. We transform the pedestrians' history paths into our ET space represented by spatio-temporal principal components, and feed them into off-the-shelf trajectory forecasting models. The inputs and outputs of the models as well as social interactions are all gathered and aggregated in the corresponding ET space. Lastly, we propose a trajectory anchor-based refinement method to cover all possible futures in the proposed ET space. Extensive experiments demonstrate that our EigenTrajectory predictor can significantly improve both the prediction accuracy and reliability of existing trajectory forecasting models on public benchmarks, indicating that the proposed descriptor is suited to represent pedestrian behaviors. Code is publicly available at https://github.com/inhwanbae/EigenTrajectory.
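To make the low-rank descriptor idea in the abstract concrete, the sketch below builds a basis of spatio-temporal principal components from flattened training trajectories via a truncated SVD, then projects paths into and back out of that coefficient space. This is a minimal illustration only, not the authors' implementation (see the linked repository for that); the function names, array shapes, rank k, and use of NumPy are all assumptions for the example.

import numpy as np

def fit_et_basis(trajs, k):
    # trajs: (N, T, 2) array of N trajectories with T time steps in 2D.
    # Returns a (T*2, k) basis spanned by the top-k right singular vectors
    # of the flattened trajectory matrix (the spatio-temporal principal components).
    flat = trajs.reshape(len(trajs), -1)            # (N, T*2)
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    return vt[:k].T                                 # (T*2, k)

def to_coeffs(trajs, basis):
    # Project each trajectory onto the low-rank basis -> (N, k) coefficients.
    return trajs.reshape(len(trajs), -1) @ basis

def to_trajs(coeffs, basis, T):
    # Map low-rank coefficients back to (N, T, 2) trajectories.
    return (coeffs @ basis.T).reshape(len(coeffs), T, 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    paths = rng.normal(size=(100, 12, 2))           # toy data: 100 paths, 12 steps
    basis = fit_et_basis(paths, k=6)
    recon = to_trajs(to_coeffs(paths, basis), basis, T=12)
    print("reconstruction error:", float(np.linalg.norm(paths - recon)))

In this reading, a forecasting model would operate on the k coefficients rather than on raw coordinates, and predicted coefficients would be mapped back to trajectories with the same basis; the anchor-based refinement described in the abstract is not shown here.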
Pages: 9983 - 9995
Number of pages: 13