Masked and dynamic Siamese network for robust visual tracking

Cited by: 13
Authors
Kuai, Yangliu [1 ]
Wen, Gongjian [1 ]
Li, Dongdong [1 ]
Affiliations
[1] Natl Univ Def Technol, Coll Elect Sci, Natl Key Lab Sci & Technol ATR, Changsha, Hunan, Peoples R China
Keywords
Siamese network; Semantic background; Target objectness model; Target template model; Object tracking; Correlation filters
DOI
10.1016/j.ins.2019.07.004
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Visual object tracking is a critical component of many computer vision tasks, such as motion analysis, event detection, and action recognition. Recently, Siamese network based trackers have gained enormous popularity in the tracking field owing to their favorable accuracy and efficiency. However, distraction caused by semantic backgrounds and the simplistic modeling of target templates often lead to performance degradation. In this study, we propose two modules, the target objectness model and the target template model, built on existing Siamese network based trackers to address these issues. The target objectness model computes the probability that each pixel in the search area belongs to the tracked target, based on the color distributions of the foreground and background regions. The resulting target likelihood map is applied as a mask to the previous response map and adjusts the final response map to focus on the target. This enlarges the discrimination between the tracked target and the surrounding background, thereby alleviating the distraction problem. The target template model employs a Gaussian mixture model to encode target appearance variations, where each component of the model represents a different aspect of the target, and the component weights are learned and dynamically updated. The proposed Gaussian model enhances diversity while reducing redundancy among target samples. To validate the effectiveness of the proposed method, we perform extensive experiments on four widely used benchmarks: OTB100, VOT2016, TC128, and UAV123. The experimental results demonstrate that the proposed algorithm achieves favorable performance compared with many state-of-the-art trackers while maintaining real-time tracking speed.
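Below is a minimal Python sketch of the target objectness model as described in the abstract, assuming a joint RGB histogram as the color model and an elementwise blend of the likelihood mask with the response map; the paper's exact masking rule may differ, and the function names and parameters (n_bins, alpha) are illustrative assumptions.

import numpy as np

def build_color_hist(rgb_patch, n_bins=16):
    # Joint RGB histogram of an image patch, normalized to a probability
    # distribution (one histogram for the foreground box, one for the
    # surrounding background ring).
    q = rgb_patch.reshape(-1, 3).astype(np.int64) * n_bins // 256
    idx = q[:, 0] * n_bins * n_bins + q[:, 1] * n_bins + q[:, 2]
    hist = np.bincount(idx, minlength=n_bins ** 3).astype(np.float64)
    return hist / hist.sum()

def target_likelihood(search_rgb, fg_hist, bg_hist, n_bins=16):
    # Per-pixel posterior P(target | color) via Bayes rule over the two
    # histograms, yielding the target likelihood map for the search area.
    q = search_rgb.astype(np.int64) * n_bins // 256
    idx = q[..., 0] * n_bins * n_bins + q[..., 1] * n_bins + q[..., 2]
    p_fg = fg_hist[idx]
    p_bg = bg_hist[idx]
    return p_fg / (p_fg + p_bg + 1e-12)

def mask_response(response, likelihood, alpha=0.5):
    # Resize the likelihood map to the response-map grid (nearest neighbour)
    # and blend it in elementwise; alpha controls how strongly the mask
    # suppresses semantic background distractors.
    h, w = response.shape
    rows = np.arange(h) * likelihood.shape[0] // h
    cols = np.arange(w) * likelihood.shape[1] // w
    mask = likelihood[rows[:, None], cols]
    return response * ((1.0 - alpha) + alpha * mask)

And a minimal sketch of the dynamic target template model in the same spirit: a fixed pool of appearance components whose mixing weights shift toward the components that keep matching incoming frames. The merge rule and weight update below are assumptions for illustration, not the paper's exact formulation.

import numpy as np

class GaussianTemplateModel:
    # Fixed pool of appearance components with dynamically updated weights:
    # each component captures a different aspect of the target's appearance.
    def __init__(self, n_components=5, lr=0.05):
        self.templates = []          # stored target feature maps, one per component
        self.weights = []            # mixing weight per component
        self.n_components = n_components
        self.lr = lr                 # update rate for templates and weights

    @staticmethod
    def _cosine(a, b):
        a, b = a.ravel(), b.ravel()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def update(self, feat):
        # Fold a new target sample into the mixture.
        if len(self.templates) < self.n_components:
            self.templates.append(feat.astype(np.float64))
            self.weights.append(1.0)
            return
        sims = [self._cosine(feat, t) for t in self.templates]
        j = int(np.argmax(sims))
        # Merge the sample into its closest component (reduces redundancy)
        # and shift weight toward components that keep matching (keeps diversity).
        self.templates[j] = (1 - self.lr) * self.templates[j] + self.lr * feat
        for k in range(len(self.weights)):
            self.weights[k] = (1 - self.lr) * self.weights[k] + self.lr * float(k == j)

    def template(self):
        # Weighted combination of the components, used as the matching template.
        w = np.asarray(self.weights)
        w = w / w.sum()
        return sum(wk * t for wk, t in zip(w, self.templates))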
Pages: 169-182
Number of pages: 14
Related Papers
50 records in total
  • [1] Hyper-Siamese network for robust visual tracking
    Kuai, Yangliu
    Wen, Gongjian
    Li, Dongdong
    Signal, Image and Video Processing, 2019, 13(1): 35-42
  • [2] Learning Dynamic Siamese Network for Visual Object Tracking
    Guo, Qing
    Feng, Wei
    Zhou, Ce
    Huang, Rui
    Wan, Liang
    Wang, Song
    Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017: 1781-1789
  • [3] Robust adaptive learning with Siamese network architecture for visual tracking
    Zhang, Wancheng
    Du, Yongzhao
    Chen, Zhi
    Deng, Jianhua
    Liu, Peizhong
    The Visual Computer, 2021, 37(5): 881-894
  • [4] Robust visual tracking algorithm with coattention guided Siamese network
    Dai, Jiahai
    Jiang, Jiaqi
    Wang, Songxin
    Chang, Yuchun
    Journal of Electronic Imaging, 2022, 31(3)
  • [5] Robust Template Adjustment Siamese Network for Object Visual Tracking
    Tang, Chuanming
    Qin, Peng
    Zhang, Jianlin
    Sensors, 2021, 21(4): 1-17
  • [6] SiamFDA: feature dynamic activation siamese network for visual tracking
    Gu, Jialiang
    She, Ying
    Yang, Yi
    Scientific Reports, 2024, 14(1)
  • [7] Robust Visual Tracking Algorithm Based on Siamese Network with Dual Templates
    Hou, Zhiqiang
    Chen, Lilin
    Yu, Wangsheng
    Ma, Sugang
    Fan, Jiulun
    Journal of Electronics & Information Technology, 2019, 41(9): 2247-2255