An End-to-End Framework of Road User Detection, Tracking, and Prediction from Monocular Images

Cited: 0
Authors
Cheng, Hao [1 ]
Liu, Mengmeng [2 ]
Chen, Lin [3 ]
Affiliations
[1] Univ Twente, Scene Understanding Grp, Enschede, Netherlands
[2] Leibniz Univ Hannover, Inst Cartog & Geoinformat, Hannover, Germany
[3] VISCODA GmbH, Schneiderberg 32, D-30167 Hannover, Germany
Keywords
DOI
10.1109/ITSC57777.2023.10422634
Chinese Library Classification: TP [Automation Technology, Computer Technology];
Discipline Classification Code: 0812;
Abstract
Perception, which involves multi-object detection and tracking, and trajectory prediction are the two major tasks of autonomous driving. However, they are currently mostly studied separately, so most trajectory prediction modules are developed on ground-truth trajectories, ignoring the fact that trajectories extracted from detection and tracking modules in real-world scenarios are noisy. These noisy trajectories can significantly degrade the performance of the trajectory predictor and lead to serious prediction errors. In this paper, we build an end-to-end framework for detection, tracking, and trajectory prediction called ODTP (Online Detection, Tracking, and Prediction). It adopts the state-of-the-art online multi-object tracking model QD-3DT for perception and trains the trajectory predictor DCENet++ directly on the detection results rather than relying purely on ground-truth trajectories. We evaluate the performance of ODTP on the widely used nuScenes dataset for autonomous driving. Extensive experiments show that ODTP achieves high-performance end-to-end trajectory prediction. DCENet++, with its enhanced dynamic maps, predicts more accurate trajectories than its base model, and it is also more robust than other generative and deterministic trajectory prediction models trained on noisy detection results.
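The core idea of the abstract is that a predictor should consume the (noisy) output of a detection-and-tracking module rather than ground-truth trajectories. A minimal sketch of that pipeline is shown below. The noise model and the constant-velocity extrapolation are illustrative placeholders only: the paper's actual modules are QD-3DT (tracking) and DCENet++ (prediction), neither of which is reproduced here, and `noise_std` is an assumed parameter.

```python
import numpy as np

def simulate_tracked_trajectory(gt, noise_std=0.2, seed=0):
    """Stand-in for a detector/tracker (QD-3DT in the paper): the observed
    track is the ground-truth trajectory corrupted by localization noise."""
    rng = np.random.default_rng(seed)
    return gt + rng.normal(0.0, noise_std, size=gt.shape)

def constant_velocity_predict(track, horizon):
    """Toy stand-in for the learned predictor (DCENet++ in the paper):
    extrapolate the last observed velocity over the prediction horizon."""
    v = track[-1] - track[-2]
    return np.array([track[-1] + (k + 1) * v for k in range(horizon)])

# Ground truth: straight-line motion at 1 m per step along x, shape (10, 2).
gt = np.stack([np.arange(10.0), np.zeros(10)], axis=1)

# Train/run the predictor on the *tracked* (noisy) history, not on gt.
obs = simulate_tracked_trajectory(gt[:6])
pred = constant_velocity_predict(obs, horizon=4)

# Average displacement error against the held-out ground-truth future.
ade = np.mean(np.linalg.norm(pred - gt[6:10], axis=1))
```

Because the observed history is noisy, the extrapolated future inherits that noise; this is exactly the gap between evaluating on ground-truth inputs and on real perception output that motivates training the predictor end to end on detections.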
Pages: 2178-2185 (8 pages)