Robust visual multitask tracking via composite sparse model

Cited by: 2
Authors
Jin, Bo [1 ]
Jing, Zhongliang [1 ]
Wang, Meng [2 ]
Pan, Han [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Aeronaut & Astronaut, Shanghai 200240, Peoples R China
[2] Chinese Acad Sci, Shanghai Inst Tech Phys, Shanghai 200083, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
visual tracking; sparse representation; multitask learning; dirty model; alternating direction method of multipliers; object tracking;
DOI
10.1117/1.JEI.23.6.063022
Chinese Library Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
Recently, multitask learning was applied to visual tracking by learning sparse particle representations in a joint task, leading to the so-called multitask tracking algorithm (MTT). Although MTT achieves impressive tracking performance by mining the interdependencies between particles, it underestimates the individual features of each particle: the L1,q norm regularization it employs assumes that all features are shared among all particles, which yields nearly identical representation coefficients in the nonsparse rows. We propose a composite sparse multitask tracking algorithm (CSMTT). We develop a composite sparse model that formulates the object appearance as a combination of a shared feature component, an individual feature component, and an outlier component. The composite sparsity is achieved via L1,∞ and L1,1 norm minimization and is optimized by the alternating direction method of multipliers (ADMM), which provides favorable reconstruction performance and impressive computational efficiency. Moreover, a dynamic dictionary updating scheme is proposed to capture appearance changes. CSMTT is tested on real-world video sequences under various challenges; experimental results show that the composite sparse model achieves noticeably lower reconstruction errors and higher computational speed than traditional sparse models, and that CSMTT consistently outperforms seven state-of-the-art trackers. (C) 2014 SPIE and IS&T
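For readers unfamiliar with dirty-model regularization, the composite sparse appearance model described in the abstract can be sketched roughly as the following optimization (written here in LaTeX notation). This is a minimal sketch based only on the abstract: the symbols X (matrix of particle observations), D (template dictionary), P, Q, E (shared, individual, and outlier components), and the lambda weights are illustrative assumptions, not the paper's exact notation:

    \min_{P,\,Q,\,E} \; \tfrac{1}{2}\,\| X - D\,(P + Q) - E \|_F^2
        \;+\; \lambda_1 \| P \|_{1,\infty}
        \;+\; \lambda_2 \| Q \|_{1,1}
        \;+\; \lambda_3 \| E \|_{1,1}

Here \| P \|_{1,\infty} = \sum_i \max_j |P_{ij}| promotes a small number of nonzero rows in P, i.e., features shared by all particles; \| Q \|_{1,1} = \sum_{i,j} |Q_{ij}| allows element-wise, particle-specific deviations; and the sparse term E absorbs outliers such as occlusion. Under this reading, ADMM would alternate proximal updates for P, Q, and E (soft-thresholding for the L1,1 terms and a row-wise proximal step for the L1,∞ term) with a quadratic data-fitting step and dual updates, which is consistent with the reconstruction quality and computational efficiency claimed in the abstract.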
Pages: 15
Related Papers
50 records in total
  • [1] Visual tracking via robust multitask sparse prototypes
    Zhang, Huanlong
    Hu, Shiqiang
    Yu, Junyang
    JOURNAL OF ELECTRONIC IMAGING, 2015, 24 (02)
  • [2] Robust Visual Tracking via Multitask Sparse Correlation Filters Learning
    Nai, Ke
    Li, Zhiyong
    Gan, Yihui
    Wang, Qi
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (01) : 502 - 515
  • [3] Object Tracking via Robust Multitask Sparse Representation
    Bai, Yancheng
    Tang, Ming
    IEEE SIGNAL PROCESSING LETTERS, 2014, 21 (08) : 909 - 913
  • [4] Robust visual tracking of infrared object via sparse representation model
    Ma, Junkai
    Luo, Haibo
    Chang, Zheng
    Hui, Bin
    INTERNATIONAL SYMPOSIUM ON OPTOELECTRONIC TECHNOLOGY AND APPLICATION 2014: IMAGE PROCESSING AND PATTERN RECOGNITION, 2014, 9301
  • [6] Robust visual tracking via CAMShift and structural local sparse appearance model
    Zhao, Houqiang
    Xiang, Ke
    Cao, Songxiao
    Wang, Xuanyin
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2016, 34 : 176 - 186
  • [7] Robust visual tracking via discriminative appearance model based on sparse coding
    Zhao, Hainan
    Wang, Xuan
    MULTIMEDIA SYSTEMS, 2017, 23 (01) : 75 - 84
  • [8] Robust Visual Tracking and Vehicle Classification via Sparse Representation
    Mei, Xue
    Ling, Haibin
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2011, 33 (11) : 2259 - 2272
  • [9] Robust Visual Tracking via Discriminative Structural Sparse Feature
    Wang, Fenglei
    Zhang, Jun
    Guo, Qiang
    Liu, Pan
    Tu, Dan
    ADVANCES IN IMAGE AND GRAPHICS TECHNOLOGIES (IGTA 2015), 2015, 525 : 438 - 446
  • [10] Robust Visual Tracking via Binocular Consistent Sparse Learning
    Ma, Ziang
    Xiang, Zhiyu
    NEURAL PROCESSING LETTERS, 2017, 46 (02) : 627 - 642