Joint Representation and Truncated Inference Learning for Correlation Filter Based Tracking

Cited by: 25
Authors
Yao, Yingjie [1 ]
Wu, Xiaohe [1 ]
Zhang, Lei [2 ]
Shan, Shiguang [3 ]
Zuo, Wangmeng [1 ]
Affiliations
[1] Harbin Inst Technol, Harbin 150001, Peoples R China
[2] Univ Pittsburgh, 3362 Fifth Ave, Pittsburgh, PA 15213 USA
[3] Chinese Acad Sci, Inst Comp Technol, Beijing 100049, Peoples R China
Source
Funding
National Natural Science Foundation of China;
Keywords
Visual tracking; Correlation filters; Convolutional neural networks; Unrolled optimization;
DOI
10.1007/978-3-030-01240-3_34
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Correlation filter (CF) based trackers generally include two modules, i.e., feature representation and on-line model adaptation. In existing off-line deep learning models for CF trackers, model adaptation is usually either abandoned or restricted to a closed-form solution so that the deep representation can be learned end-to-end. However, such solutions fail to exploit the advances in CF models and cannot achieve accuracy competitive with state-of-the-art CF trackers. In this paper, we investigate the joint learning of deep representation and model adaptation, where an updater network is introduced for better tracking of future frames by taking the current frame representation, the tracking result, and the last CF tracker as input. By modeling the representor as a convolutional neural network (CNN), we truncate the alternating direction method of multipliers (ADMM) and interpret it as a deep updater network, resulting in our model for learning representation and truncated inference (RTINet). Experiments demonstrate that the RTINet tracker achieves favorable accuracy against state-of-the-art trackers, and its fast version runs in real time at 24 fps. The code and pre-trained models will be publicly available at https://github.com/tourmaline612/RTINet.
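The core idea behind "truncated inference" is to run only a fixed, small number of solver iterations for the CF model update, so that the update itself can be unrolled as network layers and trained jointly with the feature representation. Below is a minimal sketch of that idea, assuming a single-channel correlation filter learned by a few ADMM iterations in the Fourier domain. The helper names (truncated_admm_cf, gaussian_label, detect), the simple ridge-regression objective, and all parameter values are illustrative assumptions and do not reproduce the actual RTINet architecture or training losses.

# Minimal sketch: a CF model update solved with a fixed, small number of
# ADMM iterations instead of being run to convergence. Illustrative only;
# the RTINet formulation in the paper differs.

import numpy as np

def gaussian_label(shape, sigma=2.0):
    """Desired correlation output: a Gaussian peak at zero shift (circular)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dy = np.minimum(ys, h - ys)          # wrapped (circular) distances
    dx = np.minimum(xs, w - xs)
    return np.exp(-(dx ** 2 + dy ** 2) / (2.0 * sigma ** 2))

def truncated_admm_cf(x, y, lam=1e-2, rho=1.0, num_iters=3):
    """Learn a single-channel CF by K truncated ADMM iterations (Fourier domain).

    Objective per frequency bin: 0.5*|conj(X)*W - Y|^2 + 0.5*lam*|U|^2
    subject to W = U.  The iteration count is deliberately small so the whole
    update could be unrolled as layers of a deep network.
    """
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    W = np.zeros_like(X)
    U = np.zeros_like(X)
    Z = np.zeros_like(X)                                       # scaled dual variable
    for _ in range(num_iters):
        W = (X * Y + rho * (U - Z)) / (np.abs(X) ** 2 + rho)   # filter update
        U = (rho * W + rho * Z) / (lam + rho)                  # auxiliary update
        Z = Z + W - U                                          # dual ascent
    return W

def detect(W, z):
    """Correlation response of the Fourier-domain filter W on a search patch z."""
    return np.real(np.fft.ifft2(np.conj(np.fft.fft2(z)) * W))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patch = rng.standard_normal((64, 64))
    y = gaussian_label(patch.shape)
    W = truncated_admm_cf(patch, y, num_iters=3)
    resp = detect(W, patch)
    print("peak location:", np.unravel_index(resp.argmax(), resp.shape))

In the paper's setting, the per-iteration quantities produced by such an unrolled solver would additionally depend on a learned representation and an updater network, and gradients would flow back through the truncated iterations during off-line training.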
Pages: 560-575
Number of pages: 16