Robust visual multitask tracking via composite sparse model

Cited: 2
Authors
Jin, Bo [1 ]
Jing, Zhongliang [1 ]
Wang, Meng [2 ]
Pan, Han [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Aeronaut & Astronaut, Shanghai 200240, Peoples R China
[2] Chinese Acad Sci, Shanghai Inst Tech Phys, Shanghai 200083, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
visual tracking; sparse representation; multitask learning; dirty model; alternating direction method of multipliers; OBJECT TRACKING;
DOI
10.1117/1.JEI.23.6.063022
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Subject Classification Code
0808; 0809;
Abstract
Recently, multitask learning was applied to visual tracking by learning sparse particle representations in a joint task, leading to the so-called multitask tracking algorithm (MTT). Although MTT shows impressive tracking performance by mining the interdependencies between particles, the individual features of each particle are underestimated: the ℓ1,q norm regularization it employs assumes that all features are shared by all particles, which yields nearly identical representation coefficients in the nonsparse rows. We propose a composite sparse multitask tracking algorithm (CSMTT). We develop a composite sparse model that formulates the object appearance as a combination of a shared feature component, an individual feature component, and an outlier component. The composite sparsity is achieved via ℓ1,∞ and ℓ1,1 norm minimization and is optimized by the alternating direction method of multipliers (ADMM), which provides favorable reconstruction performance and impressive computational efficiency. Moreover, a dynamic dictionary updating scheme is proposed to capture appearance changes. CSMTT is tested on real-world video sequences under various challenges; experimental results show that the composite sparse model achieves noticeably lower reconstruction errors and higher computational speed than traditional sparse models, and that CSMTT consistently outperforms seven state-of-the-art trackers. (C) 2014 SPIE and IS&T
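As a minimal sketch only, a dirty-model style objective of the kind the abstract describes can be written as below; the symbols are illustrative assumptions rather than the paper's exact notation (Y: matrix of particle observations, D: template dictionary, P: shared-feature coefficients, Q: individual-feature coefficients, E: outlier term, λ1, λ2, λ3: regularization weights):

    \min_{P,\,Q,\,E} \ \tfrac{1}{2}\,\| Y - D(P + Q) - E \|_F^2 \;+\; \lambda_1 \| P \|_{1,\infty} \;+\; \lambda_2 \| Q \|_{1,1} \;+\; \lambda_3 \| E \|_1

In such a formulation, the ℓ1,∞ term couples the particles through row-wise shared features, the ℓ1,1 term lets each particle keep its own sparse coefficients, and ADMM would alternate proximal updates over P, Q, and E.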
Pages: 15
Related Papers
50 items in total
  • [31] Robust Visual Tracking via Incremental Subspace Learning and Local Sparse Representation
    Yang, Guoliang
    Hu, Zhengwei
    Tang, Jun
    ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING, 2018, 43 (02) : 627 - 636
  • [32] Robust Visual Tracking Via Consistent Low-Rank Sparse Learning
    Zhang, Tianzhu
    Liu, Si
    Ahuja, Narendra
    Yang, Ming-Hsuan
    Ghanem, Bernard
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2015, 111 : 171 - 190
  • [33] Robust Visual Tracking via Patch Descriptor and Structural Local Sparse Representation
    Song, Zhiguo
    Sun, Jifeng
    Yu, Jialin
    Liu, Shengqing
    ALGORITHMS, 2018, 11 (08):
  • [34] Robust Visual Tracking via Sparse Representation Under Subclass Discriminant Constraint
    Qian, Cheng
    Xu, Zezhong
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2016, 26 (07) : 1293 - 1307
  • [35] Robust Visual Tracking via Structured Multi-Task Sparse Learning
    Zhang, Tianzhu
    Ghanem, Bernard
    Liu, Si
    Ahuja, Narendra
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2013, 101 : 367 - 383
  • [36] Robust visual tracking via two-stage binocular sparse learning
    Ma, Ziang
    Lu, Wei
    Yin, Jun
    Zhang, Xingming
    JOURNAL OF ENGINEERING-JOE, 2018, (16): : 1606 - 1611
  • [37] Robust Visual Tracking via Incremental Subspace Learning and Local Sparse Representation
    Yang, Guoliang
    Hu, Zhengwei
    Tang, Jun
    ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING, 2018, 43 : 627 - 636
  • [38] Robust Visual Tracking via Sparse Feature Selection and Weight Dictionary Update
    Zheng, Penggen
    Zhan, Jin
    Zhao, Huimin
    Wu, Hefeng
    ADVANCES IN BRAIN INSPIRED COGNITIVE SYSTEMS, BICS 2018, 2018, 10989 : 484 - 494
  • [39] Robust Visual Tracking Via Consistent Low-Rank Sparse Learning
    Zhang, Tianzhu
    Liu, Si
    Ahuja, Narendra
    Yang, Ming-Hsuan
    Ghanem, Bernard
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2015, 111 (02) : 171 - 190
  • [40] Sparse Coding and Counting for Robust Visual Tracking
    Liu, Risheng
    Wang, Jing
    Shang, Xiaoke
    Wang, Yiyang
    Su, Zhixun
    Cai, Yu
    PLOS ONE, 2016, 11 (12):