VGT-MOT: visibility-guided tracking for online multiple-object tracking

Cited by: 0
Authors
Shuai Wang
Wei-Xi Li
Lu Wang
Li-Sheng Xu
Qing-Xu Deng
Affiliations
[1] Northeastern University,School of Computer Science and Engineering
[2] Northeastern University,College of Medicine and Biological Information Engineering
Keywords
Multi-object tracking; Joint detection and tracking; Adjacent-frame location prediction; Visibility-guided tracking
DOI: not available
Abstract
Multi-object tracking (MOT) is an important computer-vision task with a wide range of applications. Existing multi-object tracking methods mostly employ the Kalman filter to predict the object location in the next frame. However, if the video is captured by a camera with significant motion variation or contains objects moving at non-constant speed, the Kalman filter may fail. In addition, although object occlusion has been studied extensively in MOT, it remains insufficiently addressed. To deal with these problems, a joint detection and tracking method named visibility-guided tracking for MOT (VGT-MOT) is proposed in this paper. Specifically, to cope with the difficulty of accurate object position estimation caused by drastic camera or object motion variation, VGT-MOT utilizes an adjacent-frame object location prediction network with inter-frame attention to predict the target position in the next frame. To handle object occlusion, VGT-MOT employs the object visibility as a dynamic weight to adaptively fuse the motion and appearance similarities and to update the object appearance representation. The proposed VGT-MOT has been evaluated on the MOT16, MOT17 and MOT20 datasets. The results show that VGT-MOT compares favorably against state-of-the-art MOT approaches. The source code of the proposed method is available at https://github.com/wang-ironman/VGT-MOT.
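The abstract does not give the exact fusion formula, so the following minimal Python sketch only illustrates the general idea of visibility-guided cue fusion: when the estimated visibility is high the appearance similarity dominates, and under heavy occlusion the weight shifts to the motion cue; the same visibility also damps the appearance-template update. The function names, the linear weighting, and the EMA constant `base_alpha` are all illustrative assumptions, not the paper's actual implementation.

```python
import math

def fused_similarity(motion_sim, appearance_sim, visibility):
    # Visibility-weighted blend: high visibility trusts the appearance cue,
    # low visibility (occlusion) falls back on the motion cue.
    v = min(max(visibility, 0.0), 1.0)
    return v * appearance_sim + (1.0 - v) * motion_sim

def update_appearance(track_feat, det_feat, visibility, base_alpha=0.9):
    # EMA update of the track's appearance template: a heavily occluded
    # detection contributes less, keeping the stored template clean.
    v = min(max(visibility, 0.0), 1.0)
    alpha = base_alpha + (1.0 - base_alpha) * (1.0 - v)
    feat = [alpha * t + (1.0 - alpha) * d for t, d in zip(track_feat, det_feat)]
    norm = math.sqrt(sum(x * x for x in feat)) or 1.0
    return [x / norm for x in feat]  # L2-normalized embedding
```

For a fully visible object (`visibility=1.0`) the fused score reduces to the appearance similarity alone, while a fully occluded one (`visibility=0.0`) relies purely on motion; the paper's actual weighting may be learned or nonlinear.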
Related papers
50 in total
  • [31] Multiple-object tracking is based on scene, not retinal, coordinates
    Liu, G
    Austen, EL
    Booth, KS
    Fisher, BD
    Argue, R
    Rempel, MI
    Enns, JT
    JOURNAL OF EXPERIMENTAL PSYCHOLOGY-HUMAN PERCEPTION AND PERFORMANCE, 2005, 31 (02) : 235 - 247
  • [32] Evidence against a speed limit in multiple-object tracking
    Franconeri, S. L.
    Lin, J. Y.
    Pylyshyn, Z. W.
    Fisher, B.
    Enns, J. T.
    PSYCHONOMIC BULLETIN & REVIEW, 2008, 15 (04) : 802 - 808
  • [33] TransCenter: Transformers With Dense Representations for Multiple-Object Tracking
    Xu, Yihong
    Ban, Yutong
    Delorme, Guillaume
    Gan, Chuang
    Rus, Daniela
    Alameda-Pineda, Xavier
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (06) : 7820 - 7835
  • [34] Visual multiple-object tracking for unknown clutter rate
    Kim, Du Yong
    IET COMPUTER VISION, 2018, 12 (05) : 728 - 734
  • [35] SVPTO: Safe Visibility-Guided Perception-Aware Trajectory Optimization for Aerial Tracking
    Wang, Hanzhang
    Zhang, Xuetao
    Liu, Yisha
    Zhang, Xuebo
    Zhuang, Yan
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2024, 71 (03) : 2716 - 2725
  • [36] Distracting Tracking: Interactions Between Negative Emotion and Attentional Load in Multiple-Object Tracking
    D'Andrea-Penna, Gina M.
    Frank, Sebastian M.
    Heatherton, Todd F.
    Tse, Peter U.
    EMOTION, 2017, 17 (06) : 900 - 904
  • [38] Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics
    Bernardin, Keni
    Stiefelhagen, Rainer
    EURASIP JOURNAL ON IMAGE AND VIDEO PROCESSING, 2008, 2008 (1)
  • [39] Pedestrian multiple-object tracking based on FairMOT and circle loss
    Che, Jin
    He, Yuting
    Wu, Jinman
    SCIENTIFIC REPORTS, 2023, 13 (01)
  • [40] Dynamic environment modeling with gridmap: A multiple-object tracking application
    Chen, C.
    Tay, C.
    Laugier, C.
    Mekhnacha, Kamel
    2006 9TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION, ROBOTICS AND VISION, VOLS 1- 5, 2006, : 2099 - +