Egocentric Human Trajectory Forecasting With a Wearable Camera and Multi-Modal Fusion

Cited by: 6
Authors
Qiu, Jianing [1 ]
Chen, Lipeng [2 ]
Gu, Xiao [1 ]
Lo, Frank P-W [1 ]
Tsai, Ya-Yen [1 ]
Sun, Jiankai [2 ,3 ]
Liu, Jiaqi [2 ,4 ]
Lo, Benny [1 ]
Affiliations
[1] Imperial Coll London, Hamlyn Ctr Robot Surg, London SW7 2AZ, England
[2] Tencent Robot X, Shenzhen 518057, Peoples R China
[3] Stanford Univ, Dept Aeronaut & Astronaut, Stanford, CA 94305 USA
[4] Shanghai Jiao Tong Univ, Inst Med Robot, Shanghai 200240, Peoples R China
Keywords
Human trajectory forecasting; egocentric vision; multi-modal learning
DOI
10.1109/LRA.2022.3188101
Chinese Library Classification
TP24 [Robotics]
Subject Classification
080202; 1405
Abstract
In this letter, we address the problem of forecasting the trajectory of an egocentric camera wearer (ego-person) in crowded spaces. The trajectory forecasting ability learned from the data of different camera wearers walking around in the real world can be transferred to assist visually impaired people in navigation, as well as to instill human navigation behaviours in mobile robots, enabling better human-robot interactions. To this end, a novel egocentric human trajectory forecasting dataset was constructed, containing real trajectories of people navigating in crowded spaces wearing a camera, as well as extracted rich contextual data. We extract and utilize three different modalities to forecast the trajectory of the camera wearer, i.e., his/her past trajectory, the past trajectories of nearby people, and the environment, such as the scene semantics or the depth of the scene. A Transformer-based encoder-decoder neural network model, integrated with a novel cascaded cross-attention mechanism that fuses multiple modalities, has been designed to predict the future trajectory of the camera wearer. Extensive experiments have been conducted, with results showing that our model outperforms the state-of-the-art methods in egocentric human trajectory forecasting.
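The abstract describes fusing three modalities (the wearer's past trajectory, nearby people's trajectories, and environment features) with a cascaded cross-attention mechanism. The following is a minimal, hedged sketch of what such a cascade could look like: single-head scaled dot-product attention without learned projections, with an assumed fusion order (ego features attend to neighbours first, then to the scene) and assumed feature shapes. The paper's actual model uses a full Transformer encoder-decoder, so this is an illustration of the cascading idea, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, context):
    # Single-head scaled dot-product attention; query: (Tq, d), context: (Tc, d).
    # Learned Q/K/V projections are omitted for brevity.
    d = query.shape[-1]
    scores = query @ context.T / np.sqrt(d)
    return softmax(scores) @ context

def cascaded_fusion(ego_feats, neighbor_feats, scene_feats):
    # Stage 1: ego-trajectory features attend to nearby people's trajectory features.
    fused = ego_feats + cross_attention(ego_feats, neighbor_feats)
    # Stage 2: the result then attends to environment (scene semantics / depth) tokens.
    fused = fused + cross_attention(fused, scene_feats)
    return fused

rng = np.random.default_rng(0)
ego = rng.standard_normal((8, 16))   # 8 past time steps, 16-dim embeddings (assumed)
ppl = rng.standard_normal((12, 16))  # pooled neighbour-trajectory features (assumed)
env = rng.standard_normal((20, 16))  # scene semantic / depth tokens (assumed)

out = cascaded_fusion(ego, ppl, env)
print(out.shape)  # (8, 16): one fused feature per past ego time step
```

Chaining the modalities sequentially, rather than concatenating them, lets each stage condition on the result of the previous fusion; the order chosen here is one plausible arrangement, not necessarily the one used in the paper.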
Pages: 8799-8806 (8 pages)