Siamese Visual Tracking with Robust Adaptive Learning

Cited: 0
Authors
Zhang, Wancheng [1 ]
Chen, Zhi [1 ]
Liu, Peizhong [2 ,3 ]
Deng, Jianhua [2 ]
Affiliations
[1] Huaqiao Univ, Coll Engn, Quanzhou 362000, Peoples R China
[2] Quanzhou Zhongfang Hongye Informat Technol Co LTD, Quanzhou 362000, Peoples R China
[3] Fujian Prov Big Data Res Inst Intelligent Mfg, Quanzhou 362000, Peoples R China
Keywords
visual tracking; Siamese network; adaptive feature fusion; model update
DOI
10.1109/icasid.2019.8925141
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Correlation filters and deep learning methods are the two main directions in visual tracking research; however, trackers in these families rarely balance accuracy and speed at the same time. Siamese networks have brought substantial improvements in both accuracy and speed, and an increasing number of researchers are focusing on this line of work. In this paper, building on the Siamese network model, we propose a robust adaptive learning visual tracking algorithm. HOG, CN, and deep convolutional features are extracted from the template frame and the search-region frame, respectively; we analyze the merits of each feature and perform adaptive feature fusion to improve the validity of the feature representation. We then update the two branch models with two learning change factors, yielding a more accurate similarity match for locating the target. In addition, we propose a model update strategy that uses the average peak-to-correlation energy (APCE) to determine whether to update the learning change factors, which improves the accuracy of the tracking model and reduces tracking drift in cases of tracking failure, deformation, background blur, and similar challenges. Extensive experiments on the benchmark datasets (OTB-50, OTB-100) demonstrate that our tracking algorithm outperforms several state-of-the-art trackers in accuracy and robustness.
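To make the APCE-based update criterion concrete, the sketch below shows a minimal, hypothetical implementation: it computes the average peak-to-correlation energy of a tracker response map and gates the update of the learning change factors on it. The function names, the comparison against historical means, and the 0.6 ratio threshold are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch (not the authors' code): APCE of a response map and a
# simple rule for deciding whether to update the learning change factors.
import numpy as np


def apce(response: np.ndarray) -> float:
    """Average peak-to-correlation energy of a 2-D response map.

    APCE = (F_max - F_min)^2 / mean((F - F_min)^2).
    A high value indicates a sharp, unimodal peak (reliable detection);
    a low value suggests occlusion, deformation, or background blur.
    """
    f_max = float(response.max())
    f_min = float(response.min())
    return (f_max - f_min) ** 2 / float(np.mean((response - f_min) ** 2))


def should_update(response: np.ndarray,
                  apce_hist: list,
                  peak_hist: list,
                  ratio: float = 0.6) -> bool:
    """Update the learning change factors only when both the current APCE and
    the current peak exceed a fixed fraction of their historical means
    (the ratio 0.6 is an illustrative choice, not from the paper)."""
    cur_apce = apce(response)
    cur_peak = float(response.max())
    ok = (not apce_hist) or (
        cur_apce >= ratio * np.mean(apce_hist)
        and cur_peak >= ratio * np.mean(peak_hist)
    )
    # Keep the running histories regardless of the decision.
    apce_hist.append(cur_apce)
    peak_hist.append(cur_peak)
    return ok


if __name__ == "__main__":
    # Synthetic response map with a single sharp peak -> high APCE.
    rng = np.random.default_rng(0)
    response = rng.random((17, 17)) * 0.1
    response[8, 8] = 1.0
    print(apce(response))
    print(should_update(response, [], []))
```

In this reading of the abstract, frames that fail the check leave the learning change factors untouched, so unreliable detections do not contaminate the two branch models.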
Pages: 153 - 157
Number of pages: 5
Related Papers
50 records in total
  • [21] Adaptive Weight Collaborative Complementary Learning for Robust Visual Tracking
    Wang, Benxuan
    Kong, Jun
    Jiang, Min
    Shen, Jianyu
    Liu, Tianshan
    Gu, Xiaofeng
    KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS, 2019, 13 (01): 305 - 326
  • [22] Adaptive Feature Selection Siamese Networks for Visual Tracking
    Fiaz, Mustansar
    Rahman, Md Maklachur
    Mahmood, Arif
    Farooq, Sehar Shahzad
    Baek, Ki Yeol
    Jung, Soon Ki
    FRONTIERS OF COMPUTER VISION, 2020, 1212 : 167 - 179
  • [23] Siamese Attention Networks with Adaptive Templates for Visual Tracking
    Zhang, Bo
    Liang, Zhixue
    Dong, Wenyong
    MOBILE INFORMATION SYSTEMS, 2022, 2022
  • [24] Learning by tracking: Siamese CNN for robust target association
    Leal-Taixe, Laura
    Canton-Ferrer, Cristian
    Schindler, Konrad
    PROCEEDINGS OF 29TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, (CVPRW 2016), 2016: 418 - 425
  • [25] Learning to Filter: Siamese Relation Network for Robust Tracking
    Cheng, Siyuan
    Zhong, Bineng
    Li, Guorong
    Liu, Xin
    Tang, Zhenjun
    Li, Xianxian
    Wang, Jing
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 4419 - 4429
  • [26] Siamese Graph Attention Networks for robust visual object tracking
    Lu, Junjie
    Li, Shengyang
    Guo, Weilong
    Zhao, Manqi
    Yang, Jian
    Liu, Yunfei
    Zhou, Zhuang
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2023, 229
  • [27] Siamese Visual Tracking With Deep Features and Robust Feature Fusion
    Li, Daqun
    Wang, Xize
    Yu, Yi
    IEEE ACCESS, 2020, 8 : 3863 - 3874
  • [28] Robust visual tracking algorithm with coattention guided Siamese network
    Dai, Jiahai
    Jiang, Jiaqi
    Wang, Songxin
    Chang, Yuchun
    JOURNAL OF ELECTRONIC IMAGING, 2022, 31 (03)
  • [29] SiamPAT: Siamese point attention networks for robust visual tracking
    Chen, Hang
    Zhang, Weiguo
    Yan, Danghui
    JOURNAL OF ELECTRONIC IMAGING, 2021, 30 (05)
  • [30] Robust Template Adjustment Siamese Network for Object Visual Tracking
    Tang, Chuanming
    Qin, Peng
    Zhang, Jianlin
    SENSORS, 2021, 21 (04) : 1 - 17