Distractor-Aware Siamese Networks for Visual Object Tracking

Cited by: 492
Authors
Zhu, Zheng [1 ,2 ]
Wang, Qiang [1 ,2 ]
Li, Bo [3 ]
Wu, Wei [3 ]
Yan, Junjie [3 ]
Hu, Weiming [1 ,2 ]
Affiliations
[1] Univ Chinese Acad Sci, Beijing, Peoples R China
[2] Chinese Acad Sci, Inst Automat, Beijing, Peoples R China
[3] SenseTime Grp Ltd, Beijing, Peoples R China
Source
COMPUTER VISION - ECCV 2018
Keywords
Visual tracking; Distractor-aware; Siamese networks;
DOI
10.1007/978-3-030-01240-3_7
CLC classification number
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, Siamese networks have drawn great attention in the visual tracking community because of their balanced accuracy and speed. However, the features used in most Siamese tracking approaches can only discriminate the foreground from non-semantic backgrounds. Semantic backgrounds are always considered as distractors, which hinders the robustness of Siamese trackers. In this paper, we focus on learning distractor-aware Siamese networks for accurate and long-term tracking. To this end, we first analyze the features used in traditional Siamese trackers and observe that the imbalanced distribution of training data makes the learned features less discriminative. During the off-line training phase, an effective sampling strategy is introduced to control this distribution and make the model focus on semantic distractors. During inference, a novel distractor-aware module is designed to perform incremental learning, which effectively transfers the general embedding to the current video domain. In addition, we extend the proposed approach to long-term tracking by introducing a simple yet effective local-to-global search region strategy. Extensive experiments on benchmarks show that our approach significantly outperforms the state of the art, yielding a 9.6% relative gain on the VOT2016 dataset and a 35.9% relative gain on the UAV20L dataset. The proposed tracker runs at 160 FPS on short-term benchmarks and at 110 FPS on long-term benchmarks.
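The distractor-aware re-ranking idea summarized in the abstract can be illustrated with a minimal sketch. The code below is not the authors' implementation: it assumes that candidate proposals and previously collected hard negatives ("distractors") are available as embedding vectors, uses cosine similarity as a stand-in for the Siamese correlation response, and the function names, weighting scheme, and the default penalty strength `alpha` are illustrative assumptions only. It shows the core mechanism: each candidate is scored by its similarity to the target exemplar minus a weighted penalty for its similarity to the distractors.

```python
# Illustrative sketch (not the paper's code) of distractor-aware candidate
# re-ranking: prefer proposals that look like the target exemplar but
# unlike previously collected distractors.
import numpy as np


def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors
    (stand-in for the Siamese correlation response)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def distractor_aware_score(exemplar, candidate, distractors, weights, alpha=0.5):
    """Score = f(z, p) - alpha * weighted mean of f(d_i, p).

    exemplar    : embedding of the target template z
    candidate   : embedding of one proposal p
    distractors : list of embeddings of hard negatives collected online
    weights     : per-distractor weights (e.g. their detection scores)
    alpha       : penalty strength (hypothetical default)
    """
    target_term = similarity(exemplar, candidate)
    if not distractors:
        return target_term
    w = np.asarray(weights, dtype=float)
    penalties = np.array([similarity(d, candidate) for d in distractors])
    return target_term - alpha * float(w @ penalties) / float(w.sum())


def select_best(exemplar, candidates, distractors, weights):
    """Pick the proposal with the highest distractor-aware score."""
    scores = [distractor_aware_score(exemplar, c, distractors, weights)
              for c in candidates]
    return int(np.argmax(scores)), scores
```

In the same hedged spirit, the local-to-global strategy mentioned for long-term tracking can be read as progressively enlarging the search region when the best distractor-aware score stays below a failure threshold, and shrinking it back to a local window once the target is re-detected.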
Pages: 103-119
Number of pages: 17
Related Papers
50 records in total
  • [1] Distractor-Aware Visual Tracking by Online Siamese Network
    Zha, Yufei
    Wu, Min
    Qiu, Zhuling
    Dong, Shuangyu
    Yang, Fei
    Zhang, Peng
    [J]. IEEE ACCESS, 2019, 7 : 89777 - 89788
  • [2] Siamese Neural Network Object Tracking with Distractor-Aware Model
    Li Yong
    Yang Dedong
    Han Yajun
    Song Peng
    [J]. ACTA OPTICA SINICA, 2020, 40 (04)
  • [3] Object discriminability re-extraction for distractor-aware visual object tracking
    Cui, Ying
    Cheng, Qiang
    Guo, Dongyan
    Kong, Xiangjie
    Wang, Zhenhua
    Zhang, Jianhua
    [J]. COMPUTER VISION AND IMAGE UNDERSTANDING, 2024, 247
  • [4] Distractor-Aware Deep Regression for Visual Tracking
    Du, Ming
    Ding, Yan
    Meng, Xiuyun
    Wei, Hua-Liang
    Zhao, Yifan
    [J]. SENSORS, 2019, 19 (02)
  • [5] Learning a unified tracking-and-detection framework with distractor-aware constraint for visual object tracking
    Fang, Yang
    Ko, Seunghyun
    Jo, Geun-Sik
    [J]. JOURNAL OF ENGINEERING-JOE, 2020, 2020 (13): 679 - 685
  • [6] Adaptive distractor-aware for siamese tracking via enhancement confidence evaluator
    Zhang, Huanlong
    Zhu, Linwei
    Wu, Huaiguang
    Zhao, Yanchun
    Lin, Yingzi
    Zhang, Jianwei
    [J]. APPLIED INTELLIGENCE, 2023, 53 (23) : 29223 - 29241
  • [7] When correlation filters meet fully-convolutional Siamese networks for distractor-aware tracking
    Kuai, Yangliu
    Wen, Gongjian
    Li, Dongdong
    [J]. SIGNAL PROCESSING-IMAGE COMMUNICATION, 2018, 64 : 107 - 117
  • [8] Distractor-aware discrimination learning for online multiple object tracking
    Zhou, Zongwei
    Luo, Wenhan
    Wang, Qiang
    Xing, Junliang
    Hu, Weiming
    [J]. PATTERN RECOGNITION, 2020, 107
  • [9] A New Dataset and a Distractor-Aware Architecture for Transparent Object Tracking
    Lukezic, Alan
    Trojer, Ziga
    Matas, Jiri
    Kristan, Matej
    [J]. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, 132 (08) : 2729 - 2742