Adaptive cascaded and parallel feature fusion for visual object tracking

Cited: 0
Authors
Wang, Jun [1 ]
Li, Sixuan [1 ]
Li, Kunlun [1 ]
Zhu, Qizhen [1 ]
Affiliations
[1] Hebei Univ, 180 Wusi Rd, Baoding 071000, Hebei, Peoples R China
Source
VISUAL COMPUTER | 2024, Vol. 40, Issue 03
Keywords
Visual object tracking; Correlation filter; Feature fusion; ALW; Adaptive update;
DOI
10.1007/s00371-023-02908-9
CLC number
TP31 [Computer Software];
Subject classification code
081202 ; 0835 ;
Abstract
Correlation filter-based tracking methods remain an active research direction thanks to their fast tracking, simple deployment, and straightforward principle. To make full use of different features while balancing tracking speed and performance, the adaptive cascaded and parallel feature fusion-based tracker (ACPF) is proposed, which estimates position, rotation, and scale separately. Compared with other correlation filter-based trackers, ACPF can fuse deep and handcrafted features in both the Log-Polar and Cartesian branches and adaptively update templates according to the weights of the response maps. Adaptive linear weights (ALW) are proposed to fuse feature response maps adaptively in the Log-Polar branch, improving the estimation of object scale and rotation by solving constrained optimization problems. In the Cartesian branch, response maps of shallow and deep features are merged adaptively by cascading multiple ALW modules, which better exploits both shallow and deep features and increases tracking accuracy. The final result is computed by the Cartesian and Log-Polar branches in parallel. Additionally, the learning rates are adjusted automatically according to the weights of the ALW module to perform adaptive template updates. Extensive experiments on benchmarks show that the proposed tracker achieves competitive results, especially under the challenges of deformation, rotation, and scale variation.
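As a rough illustration of the two ideas in the abstract — adaptive linear weighting of response maps and a weight-scaled template update — the sketch below is an assumption-laden stand-in, not the paper's actual algorithm: the abstract does not state the constrained-optimization objective, so the peak-to-sidelobe ratio (a common correlation-filter confidence measure) is assumed as the weighting score, and `base_lr` is a hypothetical parameter.

```python
import numpy as np

def psr(response):
    """Peak-to-sidelobe ratio: a standard confidence measure for a
    correlation-filter response map (higher = sharper, more reliable peak)."""
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    # Exclude an 11x11 window around the peak; the rest is the sidelobe.
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, py - 5):py + 6, max(0, px - 5):px + 6] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-8)

def alw_fuse(responses):
    """ALW-style fusion sketch: weights are nonnegative, sum to 1, and grow
    with each map's confidence. PSR stands in for the paper's (unstated)
    constrained-optimization objective."""
    scores = np.array([psr(r) for r in responses])
    weights = scores / scores.sum()
    fused = sum(w * r for w, r in zip(weights, responses))
    return fused, weights

def adaptive_update(template, new_template, weight, base_lr=0.02):
    """Adaptive template update sketch: the learning rate is scaled by the
    branch's ALW weight, so confident branches update faster (assumed scheme)."""
    lr = base_lr * weight
    return (1.0 - lr) * template + lr * new_template
```

In this toy scheme a sharply peaked response map dominates the fusion and drives a faster template update, while a noisy, flat map is largely ignored — the qualitative behavior the abstract attributes to ALW.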
Pages: 2119-2138
Page count: 20
Related papers
50 items total
  • [1] Adaptive cascaded and parallel feature fusion for visual object tracking
    Jun Wang
    Sixuan Li
    Kunlun Li
    Qizhen Zhu
    [J]. The Visual Computer, 2024, 40 : 2119 - 2138
  • [2] Adaptive feature fusion for visual object tracking
    Zhao, Shaochuan
    Xu, Tianyang
    Wu, Xiao-Jun
    Zhu, Xue-Feng
    [J]. PATTERN RECOGNITION, 2021, 111
  • [3] Visual Perception based Adaptive Feature Fusion for Visual Object Tracking
    Krieger, Evan
    Asari, Vijayan K.
    [J]. 2017 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2017, : 1345 - 1350
  • [4] Visual Object Tracking based on Adaptive Multi-feature Fusion in Complex Scenarios
    Wang, Hengjun
    [J]. ELEVENTH INTERNATIONAL CONFERENCE ON DIGITAL IMAGE PROCESSING (ICDIP 2019), 2019, 11179
  • [5] Combined feature evaluation for adaptive visual object tracking
    Han, Zhenjun
    Ye, Qixiang
    Jiao, Jianbin
    [J]. COMPUTER VISION AND IMAGE UNDERSTANDING, 2011, 115 (01) : 69 - 80
  • [6] Visual Object Tracking via Cascaded RPN Fusion and Coordinate Attention
    Zhang, Jianming
    Wang, Kai
    He, Yaoqi
    Kuang, Lidan
    [J]. CMES-COMPUTER MODELING IN ENGINEERING & SCIENCES, 2022, 132 (03): : 909 - 927
  • [7] A hierarchical feature fusion framework for adaptive visual tracking
    Makris, Alexandros
    Kosmopoulos, Dimitrios
    Perantonis, Stavros
    Theodoridis, Sergios
    [J]. IMAGE AND VISION COMPUTING, 2011, 29 (09) : 594 - 606
  • [8] Adaptive Hyper-Feature Fusion for Visual Tracking
    Chen, Zhi
    Du, Yongzhao
    Deng, Jianhua
    Zhuang, Jiafu
    Liu, Peizhong
    [J]. IEEE ACCESS, 2020, 8 : 68711 - 68724
  • [9] CATrack: Convolution and Attention Feature Fusion for Visual Object Tracking
    Zhang, Longkun
    Wen, Jiajun
    Dai, Zichen
    Zhou, Rouyi
    Lai, Zhihui
    [J]. PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT IX, 2024, 14433 : 469 - 480
  • [10] Adaptive Feature Fusion Object Tracking with Kernelized Correlation Filters
    Ge, Baoyi
    Zuo, Xianzhang
    Hu, Yongjiang
    [J]. 2018 INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND SOFTWARE ENGINEERING (CSSE 2018), 2018, : 23 - 32