Visual-Motion-Interaction-Guided Pedestrian Intention Prediction Framework

Cited by: 6
Authors
Sharma, Neha [1 ]
Dhiman, Chhavi [1 ]
Indu, S. [1 ]
Affiliations
[1] Delhi Technol Univ (DTU), Dept Elect & Commun Engn, Delhi 110042, India
Keywords
Autonomous vehicles (AVs); intention prediction; pedestrians;
DOI
10.1109/JSEN.2023.3317426
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
The capability to comprehend the intention of pedestrians on the road is one of the most crucial skills that current autonomous vehicles (AVs) are striving for in order to become fully autonomous. In recent years, multimodal methods that employ trajectory, appearance, and context to predict pedestrian crossing intention have gained traction. However, most existing works still lack rich feature-representation ability in multimodal scenarios, which restricts their performance. Moreover, little emphasis has been placed on pedestrian interactions with the surroundings when predicting short-term pedestrian intention from challenging ego-centric vision. To address these challenges, an efficient visual-motion-interaction-guided (VMI) intention prediction framework is proposed. The framework comprises a visual encoder (VE), a motion encoder (ME), and an interaction encoder (IE) that capture rich multimodal features of the pedestrian and its interactions with the surroundings, followed by temporal attention and an adaptive fusion module (AFM) that integrates these multimodal features efficiently. The proposed framework outperforms several state-of-the-art (SOTA) methods on the benchmark Pedestrian Intention Estimation (PIE) and Joint Attention in Autonomous Driving (JAAD) datasets, achieving accuracy, AUC, F1-score, precision, and recall of 0.92/0.89, 0.91/0.90, 0.87/0.81, 0.86/0.79, and 0.88/0.83 (PIE/JAAD), respectively. Furthermore, extensive experiments investigate different fusion architectures and the design parameters of all encoders. The proposed VMI framework predicts pedestrian crossing intention 2.5 s ahead of the crossing event. Code is available at: https://github.com/neha013/VMI.git.
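For illustration, the following is a minimal PyTorch-style sketch of the pipeline the abstract describes: three recurrent encoders (VE, ME, IE), temporal attention over each branch, and an adaptive fusion of the three streams into a crossing probability. The GRU backbones, feature dimensions, gating-based fusion, and all names introduced here (TemporalAttention, VMISketch) are illustrative assumptions, not the authors' released implementation (see the GitHub link above for that).

# Minimal sketch of a VMI-style pipeline; shapes and fusion details are assumed.
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Weights each time step of a feature sequence and pools it."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x):                              # x: (batch, time, dim)
        w = torch.softmax(self.score(x), dim=1)        # attention over time
        return (w * x).sum(dim=1)                      # (batch, dim)

class VMISketch(nn.Module):
    """Visual, motion, and interaction encoders + assumed adaptive fusion."""
    def __init__(self, vis_dim=512, mot_dim=4, inter_dim=64, hid=128):
        super().__init__()
        # Visual encoder (VE): GRU over per-frame appearance/context features.
        self.ve = nn.GRU(vis_dim, hid, batch_first=True)
        # Motion encoder (ME): GRU over bounding-box/speed sequences.
        self.me = nn.GRU(mot_dim, hid, batch_first=True)
        # Interaction encoder (IE): GRU over pedestrian-surroundings features.
        self.ie = nn.GRU(inter_dim, hid, batch_first=True)
        self.att = nn.ModuleList([TemporalAttention(hid) for _ in range(3)])
        # Fusion (stand-in for the AFM): learned per-branch weights + classifier.
        self.gate = nn.Linear(3 * hid, 3)
        self.head = nn.Linear(hid, 1)

    def forward(self, vis, mot, inter):                # each: (batch, time, feat)
        feats = [att(enc(x)[0]) for att, enc, x in
                 zip(self.att, (self.ve, self.me, self.ie), (vis, mot, inter))]
        w = torch.softmax(self.gate(torch.cat(feats, dim=-1)), dim=-1)
        fused = sum(w[:, i:i + 1] * f for i, f in enumerate(feats))
        return torch.sigmoid(self.head(fused))         # crossing probability

if __name__ == "__main__":
    model = VMISketch()
    vis, mot, inter = torch.randn(2, 16, 512), torch.randn(2, 16, 4), torch.randn(2, 16, 64)
    print(model(vis, mot, inter).shape)                # torch.Size([2, 1])

In this sketch, adaptive fusion is approximated by a softmax gate over the three branch embeddings; the paper's actual AFM and encoder designs may differ.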
Pages: 27540-27548
Page count: 9
Related Papers
(50 records in total)
  • [21] A Computationally Efficient Model for Pedestrian Motion Prediction
    Batkovic, Ivo
    Zanon, Mario
    Lubbe, Nils
    Falcone, Paolo
    2018 EUROPEAN CONTROL CONFERENCE (ECC), 2018, : 375 - 380
  • [22] Research on Pedestrian Crossing Intention Prediction Based on Deep Learning
    Huo, Chunbao
    Ma, Jie
    Tong, Zhibo
    PROCEEDINGS OF 2023 7TH INTERNATIONAL CONFERENCE ON ELECTRONIC INFORMATION TECHNOLOGY AND COMPUTER ENGINEERING, EITCE 2023, 2023, : 282 - 287
  • [23] Modeling the impact of interaction on pedestrian group motion
    Yucel, Z.
    Zanlungo, F.
    Shiomi, M.
    ADVANCED ROBOTICS, 2018, 32 (03) : 137 - 147
  • [24] Pedestrian Trajectory Prediction Based on an Intention Randomness Influence Strategy
    Deng, Yingjian
    Zhang, Li
    Chen, Jie
    Deng, Yu
    Liu, Jing
    ELECTRONICS, 2024, 13 (11)
  • [25] Local and Global Contextual Features Fusion for Pedestrian Intention Prediction
    Azarmi, Mohsen
    Rezaei, Mahdi
    Hussain, Tanveer
    Qian, Chenghao
    COMMUNICATIONS IN COMPUTER AND INFORMATION SCIENCE, 2023, 1883 CCIS : 1 - 13
  • [26] Do They Want to Cross? Understanding Pedestrian Intention for Behavior Prediction
    Kotseruba, Iuliia
    Rasouli, Amir
    Tsotsos, John K.
    2020 IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV), 2020, : 1688 - 1693
  • [27] Multi-Input Fusion for Practical Pedestrian Intention Prediction
    Singh, Ankur
    Suddamalla, Upendra
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2021), 2021, : 2304 - 2311
  • [28] Prediction of Vehicular Yielding Intention While Approaching a Pedestrian Crosswalk
    Muduli, Kaliprasana
    Ghosh, Indrajit
    TRANSPORTATION RESEARCH RECORD, 2024,
  • [29] Local and Global Contextual Features Fusion for Pedestrian Intention Prediction
    Azarmi, Mohsen
    Rezaei, Mahdi
    Hussain, Tanveer
    Qian, Chenghao
    arXiv, 2023,
  • [30] INTERACTION OF ROAD NETWORKS AND PEDESTRIAN MOTION AT CROSSWALKS
    Borsche, Raul
    Meurer, Anne
    DISCRETE AND CONTINUOUS DYNAMICAL SYSTEMS-SERIES S, 2014, 7 (03): : 363 - 377