Time3D: End-to-End Joint Monocular 3D Object Detection and Tracking for Autonomous Driving

Cited by: 18
Authors
Li, Peixuan [1 ]
Jin, Jieyu [1 ]
Affiliations
[1] SAIC PP CEM, Shanghai, Peoples R China
Keywords
DOI
10.1109/CVPR52688.2022.00386
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
While monocular 3D object detection and 2D multi-object tracking can be applied separately to sequence images in a frame-by-frame fashion, a stand-alone tracker cuts off the transmission of uncertainty from the 3D detector to the tracker and cannot pass tracking-error gradients back to the 3D detector. In this work, we propose jointly training 3D detection and 3D tracking from only monocular videos in an end-to-end manner. The key component is a novel spatial-temporal information flow module that aggregates geometric and appearance features to predict robust similarity scores across all objects in the current and past frames. Specifically, we leverage the attention mechanism of the transformer, in which self-attention aggregates spatial information within a specific frame, and cross-attention exploits the relations and affinities of all objects across the temporal domain of sequence frames. The affinities are then supervised to estimate the trajectory and to guide the flow of information between corresponding 3D objects. In addition, we propose a temporal-consistency loss that explicitly incorporates 3D target motion modeling into the learning, making the 3D trajectory smooth in the world coordinate system. Time3D achieves 21.4% AMOTA and 13.6% AMOTP on the nuScenes 3D tracking benchmark, surpassing all published competitors while running at 38 FPS, and achieves 31.2% mAP and 39.4% NDS on the nuScenes 3D detection benchmark.
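The cross-frame association described in the abstract can be illustrated with a minimal sketch: scaled dot-product attention between object embeddings of the current frame (queries) and the past frame (keys) yields a row-stochastic affinity matrix that can be supervised as a matching score. This is a simplified, hypothetical illustration of the general technique, not the paper's actual module (which also fuses geometric features); the function name and shapes are assumptions.

```python
import numpy as np

def cross_frame_affinity(curr_feats, past_feats):
    """Sketch of a cross-attention affinity step for tracking:
    scaled dot-product between current-frame object embeddings
    (queries) and past-frame embeddings (keys), followed by a
    row-wise softmax so each current object distributes probability
    mass over candidate past objects (its potential track matches)."""
    d = curr_feats.shape[1]
    logits = curr_feats @ past_feats.T / np.sqrt(d)  # (N_curr, N_past)
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    exp = np.exp(logits)
    return exp / exp.sum(axis=1, keepdims=True)

# Toy example: 3 detections in the current frame, 4 in the previous one.
rng = np.random.default_rng(0)
curr = rng.standard_normal((3, 8))   # 3 objects, 8-dim embeddings
past = rng.standard_normal((4, 8))   # 4 objects, 8-dim embeddings
A = cross_frame_affinity(curr, past)  # (3, 4) affinity matrix
```

In an end-to-end setup such as the one the abstract describes, a matrix like `A` would be supervised against ground-truth identity correspondences, letting association errors backpropagate into the detector's features.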
Pages: 3875 - 3884
Page count: 10
Related Papers
50 records in total
  • [1] Monocular 3D Object Detection for Autonomous Driving
    Chen, Xiaozhi
    Kundu, Kaustav
    Zhang, Ziyu
    Ma, Huimin
    Fidler, Sanja
    Urtasun, Raquel
    2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, : 2147 - 2156
  • [2] SparseDet: Towards End-to-End 3D Object Detection
    Han, Jianhong
    Wan, Zhaoyi
    Liu, Zhe
    Feng, Jie
    Zhou, Bingfeng
    PROCEEDINGS OF THE 17TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISAPP), VOL 4, 2022, : 781 - 792
  • [3] An End-to-End Transformer Model for 3D Object Detection
    Misra, Ishan
    Girdhar, Rohit
    Joulin, Armand
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 2886 - 2897
  • [4] A Smart IoT Enabled End-to-End 3D Object Detection System for Autonomous Vehicles
    Ahmed, Imran
    Jeon, Gwanggil
    Chehri, Abdellah
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 24 (11) : 13078 - 13087
  • [5] End-to-end 3D Tracking with Decoupled Queries
    Li, Yanwei
    Yu, Zhiding
    Philion, Jonah
    Anandkumar, Anima
    Fidler, Sanja
    Jia, Jiaya
    Alvarez, Jose
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 18256 - 18265
  • [6] End-to-End 3D Object Detection using LiDAR Point Cloud
    Raut, Gaurav
    Patole, Advait
    2024 IEEE 3RD INTERNATIONAL CONFERENCE ON COMPUTING AND MACHINE INTELLIGENCE, ICMI 2024, 2024,
  • [7] Efficient Uncertainty Estimation for Monocular 3D Object Detection in Autonomous Driving
    Liu, Zechen
    Han, Zhihua
    2021 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC), 2021, : 2711 - 2718
  • [8] Pseudo-Stereo for Monocular 3D Object Detection in Autonomous Driving
    Chen, Yi-Nan
    Dai, Hang
    Ding, Yong
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 877 - 887
  • [9] Monocular 3D Object Detection for Autonomous Driving Based on Contextual Transformer
    She, Xiangyang
    Yan, Weijia
    Dong, Lihong
    Computer Engineering and Applications, 2024, 60 (19) : 178 - 189
  • [10] Monocular 3D object detection using dual quadric for autonomous driving
    Li, Peixuan
    Zhao, Huaici
    NEUROCOMPUTING, 2021, 441 : 151 - 160