Looking Beyond Two Frames: End-to-End Multi-Object Tracking Using Spatial and Temporal Transformers

Cited by: 21
Authors
Zhu, Tianyu [1 ]
Hiller, Markus [2 ]
Ehsanpour, Mahsa [3 ]
Ma, Rongkai [1 ]
Drummond, Tom [2 ]
Reid, Ian
Rezatofighi, Hamid [4 ]
Affiliations
[1] Monash Univ, Dept Elect & Comp Syst Engn, Clayton, Vic 3800, Australia
[2] Univ Melbourne, Sch Comp & Informat Syst, Parkville, Vic 3010, Australia
[3] Univ Adelaide, Australian Inst Machine Learning, Adelaide, SA 5005, Australia
[4] Monash Univ, Dept Data Sci & AI, Clayton, Vic 3800, Australia
Keywords
Tracking; Transformers; Task analysis; Visualization; Object recognition; History; Feature extraction; Multi-object tracking; transformer; spatio-temporal model; pedestrian tracking; end-to-end learning; MULTITARGET;
DOI
10.1109/TPAMI.2022.3213073
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Tracking a time-varying, indefinite number of objects in a video sequence remains a challenge despite recent advances in the field. Most existing approaches cannot properly handle multi-object tracking challenges such as occlusion, in part because they ignore long-term temporal information. To address these shortcomings, we present MO3TR: a truly end-to-end Transformer-based online multi-object tracking (MOT) framework that learns to handle occlusions, track initiation and track termination without an explicit data association module or any heuristics. MO3TR encodes object interactions into long-term temporal embeddings using a combination of spatial and temporal Transformers, and recursively uses this information jointly with the input data to estimate the states of all tracked objects over time. The spatial attention mechanism enables our framework to learn implicit representations among all objects as well as between objects and measurements, while the temporal attention mechanism focuses on specific parts of past information, allowing our approach to resolve occlusions over multiple frames. Our experiments demonstrate the potential of this new approach, achieving results on par with or better than the current state-of-the-art on multiple MOT metrics across several popular multi-object tracking benchmarks.
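The two-stage attention scheme described in the abstract (temporal attention over each object's past embeddings, then spatial attention among objects and measurements) can be sketched as follows. This is an illustrative PyTorch sketch with assumed tensor shapes and module names, not the authors' released MO3TR implementation:

```python
import torch
import torch.nn as nn

class SpatioTemporalTracker(nn.Module):
    """Hypothetical sketch of one spatial + temporal attention step."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        # Temporal attention: each object attends over its own history.
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Spatial attention: objects attend to each other and to measurements.
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tracks, measurements, history):
        # tracks:       (B, N, D) current object embeddings
        # measurements: (B, M, D) current-frame features (e.g. detections)
        # history:      (B*N, T, D) per-object past embeddings
        B, N, D = tracks.shape
        # Temporal step: query each track against its own past embeddings,
        # letting the model focus on specific parts of past information.
        q = tracks.reshape(B * N, 1, D)
        t_out, _ = self.temporal(q, history, history)
        tracks = tracks + t_out.reshape(B, N, D)
        # Spatial step: tracks attend jointly over all tracks and
        # measurements, modelling object-object and object-measurement
        # interactions in a single attention pass.
        kv = torch.cat([tracks, measurements], dim=1)
        s_out, _ = self.spatial(tracks, kv, kv)
        return tracks + s_out
```

Recursing this update over frames, with the output embeddings appended to `history`, yields the long-term temporal embeddings the abstract refers to; track initiation and termination would be decided from the updated embeddings by additional heads not shown here.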
Pages: 12783-12797
Page count: 15
Related Papers
50 in total
  • [21] An end to end trained hybrid CNN model for multi-object tracking
    Singh, Divya
    Srivastava, Rajeev
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (29) : 42209 - 42221
  • [22] End-to-end Multi-Object Tracking Algorithm Integrating Global Local Feature Interaction and Angular Momentum Mechanism
    Ji, Zhongping
    Wang, Xiangwei
    He, Zhiwei
    Du, Chenjie
    Jin, Ran
    Chai, Bencheng
    Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology, 2024, 46 (09): : 3703 - 3712
  • [23] End-to-end Flow Correlation Tracking with Spatial-temporal Attention
    Zhu, Zheng
    Wu, Wei
    Zou, Wei
    Yan, Junjie
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 548 - 557
  • [24] ADA-Track: End-to-End Multi-Camera 3D Multi-Object Tracking with Alternating Detection and Association
    Ding, Shuxiao
    Schneider, Lukas
    Cordts, Marius
    Gall, Juergen
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 15184 - 15194
  • [25] Multi-object Tracking with Spatial-Temporal Tracklet Association
    You, Sisi
    Yao, Hantao
    Bao, Bing-Kun
    Xu, Changsheng
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2024, 20 (05)
  • [26] Spatial-Temporal Relation Networks for Multi-Object Tracking
    Xu, Jiarui
    Cao, Yue
    Zhang, Zheng
    Hu, Han
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 3987 - 3997
  • [27] Multi-Object Tracking with Grayscale Spatial-Temporal Features
    Xu, Longxiang
    Wu, Guosheng
    APPLIED SCIENCES-BASEL, 2024, 14 (13):
  • [28] Center-point-pair detection and context-aware re-identification for end-to-end multi-object tracking
    Zhang, Xin
    Ling, Yunan
    Yang, Yuanzhe
    Chu, Chengxiang
    Zhou, Zhong
    NEUROCOMPUTING, 2023, 524 : 17 - 30
  • [29] FLSTrack: focused linear attention swin-transformer network with dual-branch decoder for end-to-end multi-object tracking
    Zu, Dafu
    Duan, Xun
    Kong, Guangqian
    Long, Huiyun
    SIGNAL IMAGE AND VIDEO PROCESSING, 2025, 19 (01)
  • [30] End-to-End Learning Deep CRF Models for Multi-Object Tracking Deep CRF Models (vol 31, pg 275, 2021)
    Xiang, Jun
    Xu, Guohan
    Ma, Chao
    Hou, Jianhua
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2021, 31 (02) : 828 - 828