Robust Scale Adaptive Visual Tracking with Correlation Filters

Cited: 1
Authors
Li, Chunbao [1 ]
Yang, Bo [1 ,2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 611731, Sichuan, Peoples R China
[2] Sichuan Elect Informat Ind Technol Res Inst Co Lt, Chengdu 610000, Sichuan, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2018, Vol. 8, No. 11
Keywords
computer vision; visual tracking; correlation filter; scale variation; occlusion; high-quality candidate object proposals; OBJECT TRACKING; FUSION;
DOI
10.3390/app8112037
CLC Number
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
Visual tracking is a challenging task in computer vision due to the varied appearance changes of the target object. In recent years, correlation filters have played an important role in visual tracking, and many state-of-the-art correlation filter based trackers have been proposed in the literature. However, these trackers still have certain limitations: most existing trackers cannot deal well with scale variation, and they may easily drift to the background in the case of occlusion. To overcome these problems, we propose a Correlation Filters based Scale Adaptive (CFSA) visual tracker. In this tracker, a modified EdgeBoxes generator is proposed to generate high-quality candidate object proposals for tracking. The pool of generated candidate object proposals is used to estimate the position of the target object with a kernelized correlation filter based tracker using HOG and color naming features. To handle changes in target scale, a scale estimation method is proposed that combines the water flow driven MBD (minimum barrier distance) algorithm with the estimated position. Furthermore, an online updating scheme is adopted to reduce interference from the surrounding background. Experimental results on two large benchmark datasets demonstrate that the CFSA tracker achieves favorable performance compared with state-of-the-art trackers.
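The abstract builds on the kernelized correlation filter (KCF) framework. A minimal sketch of that core idea is given below: ridge regression is solved in the Fourier domain with a Gaussian kernel, and detection produces a response map whose peak locates the target shift. This is not the paper's CFSA implementation (no EdgeBoxes proposals, HOG/color-naming features, or MBD scale estimation); all function names and parameter values here are illustrative.

```python
import numpy as np

def gaussian_correlation(x1, x2, sigma=0.5):
    """Gaussian kernel correlation of two 2-D patches, evaluated for all
    cyclic shifts at once via the FFT (the KCF trick)."""
    c = np.real(np.fft.ifft2(np.fft.fft2(x1) * np.conj(np.fft.fft2(x2))))
    d = (np.sum(x1 ** 2) + np.sum(x2 ** 2) - 2.0 * c) / x1.size
    return np.exp(-np.clip(d, 0.0, None) / (sigma ** 2))

def train(x, y, lam=1e-4):
    """Kernel ridge regression in the dual: alpha_hat = y_hat / (k_hat + lam)."""
    k = gaussian_correlation(x, x)
    return np.fft.fft2(y) / (np.fft.fft2(k) + lam)

def detect(alpha_hat, x_model, z):
    """Response map for a new patch z; its peak gives the translation."""
    k = gaussian_correlation(z, x_model)
    return np.real(np.fft.ifft2(np.fft.fft2(k) * alpha_hat))

# Toy usage: Gaussian regression target with its peak rolled to index (0, 0).
size = 32
gy, gx = np.mgrid[0:size, 0:size] - size // 2
y = np.exp(-(gx ** 2 + gy ** 2) / (2 * 2.0 ** 2))
y = np.roll(y, (-(size // 2), -(size // 2)), axis=(0, 1))

rng = np.random.default_rng(0)
x = rng.standard_normal((size, size))   # stand-in for a feature patch
alpha_hat = train(x, y)
response = detect(alpha_hat, x, x)      # detect on the training patch itself
peak = np.unravel_index(np.argmax(response), response.shape)
print(peak)  # an unshifted patch should peak at (0, 0)
```

In a full tracker this train/detect pair runs once per frame, and the model patch and filter are linearly interpolated over time; CFSA additionally scores candidate proposals with this response and re-estimates scale separately.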
Pages: 19
Related Papers
50 in total
  • [41] Robust Visual Tracking via Adaptive Kernelized Correlation Filter
    Wang, Bo
    Wang, Desheng
    Liao, Qingmin
    FOURTH INTERNATIONAL CONFERENCE ON WIRELESS AND OPTICAL COMMUNICATIONS, 2016, 9902
  • [42] Adaptive Multiple Features Spatially Regularized Correlation Filters for Visual Tracking
    Li, Shanbin
    Wang, Jiajia
    PROCEEDINGS OF THE 33RD CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2021), 2021, : 3116 - 3121
  • [43] Long-term visual tracking based on adaptive correlation filters
    Wang, Zhongmin
    Zhang, Futao
    Chen, Yanping
    Ma, Sugang
    JOURNAL OF ELECTRONIC IMAGING, 2018, 27 (05)
  • [44] Kernel correlation filters for visual tracking with adaptive fusion of heterogeneous cues
    Bai, Bing
    Zhong, Bineng
    Ouyang, Gu
    Wang, Pengfei
    Liu, Xin
    Chen, Ziyi
    Wang, Cheng
    NEUROCOMPUTING, 2018, 286 : 109 - 120
  • [45] Visual Tracking via Adaptive Spatially-Regularized Correlation Filters
    Dai, Kenan
    Wang, Dong
    Lu, Huchuan
    Sun, Chong
    Li, Jianhua
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 4665 - 4674
  • [46] Long-term Scale Adaptive Tracking with Kernel Correlation Filters
    Wang, Yueren
    Zhang, Hong
    Zhang, Lei
    Yang, Yifan
    Sun, Mingui
    NINTH INTERNATIONAL CONFERENCE ON GRAPHIC AND IMAGE PROCESSING (ICGIP 2017), 2018, 10615
  • [47] Robust Visual Correlation Tracking
    Zhang, Lei
    Wang, Yanjie
    Sun, Honghai
    Yao, Zhijun
    He, Shuwen
    MATHEMATICAL PROBLEMS IN ENGINEERING, 2015, 2015
  • [48] Robust Visual Tracking via Dirac-Weighted Cascading Correlation Filters
    Peng, Cheng
    Liu, Fanghui
    Yang, Jie
    Kasabov, Nikola
    IEEE SIGNAL PROCESSING LETTERS, 2018, 25 (11) : 1700 - 1704
  • [49] Robust Visual Tracking via Constrained Multi-Kernel Correlation Filters
    Huang, Bo
    Xu, Tingfa
    Jiang, Shenwang
    Chen, Yiwen
    Bai, Yu
    IEEE TRANSACTIONS ON MULTIMEDIA, 2020, 22 (11) : 2820 - 2832
  • [50] Robust visual tracking via co-trained Kernelized correlation filters
    Zhang, Le
    Suganthan, Ponnuthurai Nagaratnam
    PATTERN RECOGNITION, 2017, 69 : 82 - 93