SiamCPN: Visual tracking with the Siamese center-prediction network

Citations: 7
Authors
Chen, Dong [1 ,2 ,4 ]
Tang, Fan [3 ]
Dong, Weiming [1 ,2 ,4 ]
Yao, Hanxing [4 ,5 ]
Xu, Changsheng [1 ,2 ,4 ]
Affiliations
[1] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100040, Peoples R China
[2] Chinese Acad Sci, Inst Automat, NLPR, Beijing 100190, Peoples R China
[3] Jilin Univ, Sch Artificial Intelligence, Changchun 130012, Peoples R China
[4] CASIA LLVISION Joint Lab, Beijing 100190, Peoples R China
[5] LLVISION Technol Co LTD, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Siamese network; single object tracking; anchor-free; center point detection; object tracking;
DOI
10.1007/s41095-021-0212-1
CLC number
TP31 [Computer software];
Discipline codes
081202; 0835;
Abstract
Object detection is widely used in object tracking; anchor-free object tracking provides an end-to-end single-object-tracking approach. In this study, we propose a new anchor-free network, the Siamese center-prediction network (SiamCPN). Given the features of the reference object in the initial frame, we directly predict the center point and size of the object in subsequent frames with a Siamese-structured network, without the need for per-frame post-processing operations. Unlike other anchor-free tracking approaches that are based on semantic segmentation and achieve anchor-free tracking through pixel-level prediction, SiamCPN directly obtains all the information required for tracking, greatly simplifying the model. A center-prediction sub-network is applied to multiple stages of the backbone to adaptively learn from the different branches of the Siamese network. The model can accurately predict the object location, apply appropriate corrections, and regress the size of the target bounding box. Compared to other leading Siamese networks, SiamCPN is simpler, faster, and more efficient because it uses fewer hyperparameters. Experiments demonstrate that our method outperforms other leading Siamese networks on the GOT-10K and UAV123 benchmarks, and is comparable to other excellent trackers on LaSOT, VOT2016, and OTB-100 while improving inference speed by 1.5 to 2 times.
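The kind of center-prediction head the abstract describes can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes PyTorch, uses a depthwise cross-correlation between template and search features (as in SiamRPN++-style trackers), and the names `xcorr_depthwise` and `CenterPredictionHead` are hypothetical. The head outputs a center heatmap and a two-channel size map, mirroring the "predict the center point and size" formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def xcorr_depthwise(search, template):
    """Depthwise cross-correlation: the template feature map acts as a
    per-channel convolution kernel slid over the search feature map."""
    b, c, h, w = search.shape
    search = search.reshape(1, b * c, h, w)
    kernel = template.reshape(b * c, 1, template.shape[2], template.shape[3])
    out = F.conv2d(search, kernel, groups=b * c)
    return out.reshape(b, c, out.shape[2], out.shape[3])


class CenterPredictionHead(nn.Module):
    """Illustrative head: predicts a center heatmap and a size map
    from the correlated template/search features."""

    def __init__(self, channels=256):
        super().__init__()
        self.heatmap = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 1))          # 1 channel: center confidence
        self.size = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 2, 1))          # 2 channels: box width, height

    def forward(self, template_feat, search_feat):
        corr = xcorr_depthwise(search_feat, template_feat)
        return torch.sigmoid(self.heatmap(corr)), self.size(corr)
```

At inference, the tracked box would be recovered anchor-free by taking the argmax of the heatmap as the object center and reading the two size channels at that location, which is what lets such a design skip per-anchor post-processing.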
Pages: 253–265 (13 pages)
Related papers
50 records in total
  • [1] SiamCPN: Visual tracking with the Siamese center-prediction network
    Chen, Dong
    Tang, Fan
    Dong, Weiming
    Yao, Hanxing
    Xu, Changsheng
    Computational Visual Media, 2021, 7 (02) : 253 - 265
  • [2] Siamese refine polar mask prediction network for visual tracking
    Pu, Bin
    Xiang, Ke
    Liu, Ze'an
    Wang, Xuanyin
    Signal, Image and Video Processing, 2024, 18 (01) : 923 - 933
  • [3] Siamese network ensemble for visual tracking
    Jiang, Chenru
    Xiao, Jimin
    Xie, Yanchun
    Tillo, Tammam
    Huang, Kaizhu
    Neurocomputing, 2018, 275 : 2892 - 2903
  • [4] Siamese Centerness Prediction Network for Real-Time Visual Object Tracking
    Wu, Yue
    Cai, Chengtao
    Yeo, Chai Kiat
    Neural Processing Letters, 2023, 55 (02) : 1029 - 1044
  • [5] Combining Siamese Network and Regression Network for Visual Tracking
    Ge, Yao
    Chen, Rui
    Tong, Ying
    Cao, Xuehong
    Liang, Ruiyu
    IEICE Transactions on Information and Systems, 2020, E103D (08) : 1924 - 1927
  • [6] Siamese residual network for efficient visual tracking
    Fan, Nana
    Liu, Qiao
    Li, Xin
    Zhou, Zikun
    He, Zhenyu
    Information Sciences, 2023, 624 : 606 - 623
  • [7] Siamese Feedback Network for Visual Object Tracking
    Gwon, M.-G.
    Kim, J.
    Um, G.-M.
    Lee, H.
    Seo, J.
    Lim, S.Y.
    Yang, S.-J.
    Kim, W.
    IEIE Transactions on Smart Processing and Computing, 2022, 11 (01) : 24 - 33