Egocentric Human Trajectory Forecasting With a Wearable Camera and Multi-Modal Fusion

Cited by: 6
Authors
Qiu, Jianing [1]
Chen, Lipeng [2]
Gu, Xiao [1]
Lo, Frank P-W [1]
Tsai, Ya-Yen [1]
Sun, Jiankai [2,3]
Liu, Jiaqi [2,4]
Lo, Benny [1]
Affiliations
[1] Imperial Coll London, Hamlyn Ctr Robot Surg, London SW7 2AZ, England
[2] Tencent Robot X, Shenzhen 518057, Peoples R China
[3] Stanford Univ, Dept Aeronaut & Astronaut, Stanford, CA 94305 USA
[4] Shanghai Jiao Tong Univ, Inst Med Robot, Shanghai 200240, Peoples R China
Keywords
Human trajectory forecasting; egocentric vision; multi-modal learning;
DOI
10.1109/LRA.2022.3188101
Chinese Library Classification (CLC)
TP24 [Robotics];
Discipline Code
080202; 1405;
Abstract
In this letter, we address the problem of forecasting the trajectory of an egocentric camera wearer (ego-person) in crowded spaces. The trajectory forecasting ability learned from data of different camera wearers walking around in the real world can be transferred to assist visually impaired people in navigation, as well as to instill human navigation behaviours in mobile robots, enabling better human-robot interactions. To this end, a novel egocentric human trajectory forecasting dataset was constructed, containing real trajectories of people navigating in crowded spaces while wearing a camera, together with rich extracted contextual data. We extract and utilize three different modalities to forecast the trajectory of the camera wearer, i.e., his/her past trajectory, the past trajectories of nearby people, and the environment, such as the scene semantics or the depth of the scene. A Transformer-based encoder-decoder neural network model, integrated with a novel cascaded cross-attention mechanism that fuses the multiple modalities, has been designed to predict the future trajectory of the camera wearer. Extensive experiments have been conducted, with results showing that our model outperforms state-of-the-art methods in egocentric human trajectory forecasting.
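The abstract describes fusing three modalities (the ego-person's past trajectory, nearby people's trajectories, and scene features) through cascaded cross-attention inside a Transformer encoder-decoder. The snippet below is a minimal, hypothetical PyTorch sketch of such a cascade, in which ego queries first attend to neighbour trajectories and the fused result then attends to scene features; all module names, feature dimensions, and the regression head are illustrative assumptions and do not reproduce the authors' implementation.

```python
# Hypothetical sketch of cascaded cross-attention fusion for egocentric
# trajectory forecasting (not the paper's actual code).
import torch
import torch.nn as nn


class CascadedCrossAttentionForecaster(nn.Module):
    def __init__(self, d_model=64, n_heads=4, horizon=12):
        super().__init__()
        self.horizon = horizon
        # Per-modality input projections: 2-D positions and scene tokens
        # (128-dim scene features are an assumed placeholder).
        self.embed_ego = nn.Linear(2, d_model)
        self.embed_nbr = nn.Linear(2, d_model)
        self.embed_scene = nn.Linear(128, d_model)
        # Self-attention over the observed ego trajectory.
        self.ego_encoder = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        # Cascade stage 1: ego queries attend to neighbour trajectories.
        self.cross_nbr = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Cascade stage 2: the socially fused result attends to scene features.
        self.cross_scene = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Simple head regressing future (x, y) positions over the horizon.
        self.head = nn.Linear(d_model, 2 * horizon)

    def forward(self, ego_past, nbr_past, scene_feats):
        # ego_past:    (B, T_obs, 2)      past ego positions
        # nbr_past:    (B, N * T_obs, 2)  neighbour positions, flattened over people/time
        # scene_feats: (B, S, 128)        scene tokens (e.g. semantics/depth features)
        q = self.ego_encoder(self.embed_ego(ego_past))
        k1 = self.embed_nbr(nbr_past)
        q, _ = self.cross_nbr(q, k1, k1)      # fuse social context first
        k2 = self.embed_scene(scene_feats)
        q, _ = self.cross_scene(q, k2, k2)    # then fuse scene context
        pooled = q.mean(dim=1)                # summarise the fused sequence
        return self.head(pooled).view(-1, self.horizon, 2)


if __name__ == "__main__":
    model = CascadedCrossAttentionForecaster()
    ego = torch.randn(8, 9, 2)       # 8 samples, 9 observed steps
    nbrs = torch.randn(8, 5 * 9, 2)  # 5 neighbours, flattened over time
    scene = torch.randn(8, 16, 128)  # 16 scene tokens per sample
    print(model(ego, nbrs, scene).shape)  # torch.Size([8, 12, 2])
```

The cascade order here (social context before scene context) is one plausible choice; the paper's actual fusion order, token construction, and decoder are not specified in this record.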
Pages: 8799-8806
Number of pages: 8
Related Papers
50 records in total
  • [31] DFMM-Precip: Deep Fusion of Multi-Modal Data for Accurate Precipitation Forecasting
    Li, Jinwen
    Wu, Li
    Liu, Jiarui
    Wang, Xiaoying
    Xue, Wei
    Water (Switzerland), 2024, 16 (24)
  • [32] Multi-modal transform-based fusion model for new product sales forecasting
    Li, Xiangzhen
    Shen, Jiaxing
    Wang, Dezhi
    Lu, Wu
    Chen, Yuanyi
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 133
  • [33] Decoding Human Intent Using a Wearable System and Multi-Modal Sensor Data
    Geyik, Cemil S.
    Dutta, Arindam
    Ogras, Umit Y.
    Bliss, Daniel W.
    2016 50TH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS AND COMPUTERS, 2016, : 846 - 850
  • [34] Towards Continual Egocentric Activity Recognition: A Multi-Modal Egocentric Activity Dataset for Continual Learning
    Xu, Linfeng
    Wu, Qingbo
    Pan, Lili
    Meng, Fanman
    Li, Hongliang
    He, Chiyuan
    Wang, Hanxin
    Cheng, Shaoxu
    Dai, Yu
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 2430 - 2443
  • [35] Multi-modal fusion method for human action recognition based on IALC
    Zhang, Yinhuan
    Xiao, Qinkun
    Liu, Xing
    Wei, Yongquan
    Chu, Chaoqin
    Xue, Jingyun
    IET IMAGE PROCESSING, 2023, 17 (02) : 388 - 400
  • [36] Multi-Modal Temporal Convolutional Network for Anticipating Actions in Egocentric Videos
    Zatsarynna, Olga
    Abu Farha, Yazan
    Gall, Juergen
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, : 2249 - 2258
  • [37] Multi-modal face tracking in multi-camera environments
    Kang, HB
    Cho, SH
    COMPUTER ANALYSIS OF IMAGES AND PATTERNS, PROCEEDINGS, 2005, 3691 : 814 - 821
  • [38] A wearable multi-modal acoustic system for breathing analysis
    Emokpae, Lloyd E.
    Emokpae, Roland N., Jr.
    Bowry, Ese
    Bin Saif, Jaeed
    Mahmud, Muntasir
    Lalouani, Wassila
    Younis, Mohamed
    Joyner, Robert L., Jr.
    JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, 2022, 151 (02): : 1033 - 1038
  • [40] Stimulus Verification is a Universal and Effective Sampler in Multi-modal Human Trajectory Prediction
    Sun, Jianhua
    Li, Yuxuan
    Chai, Liang
    Lu, Cewu
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 22014 - 22023