Online learning 3D context for robust visual tracking

Cited by: 11
Authors
Zhong, Bineng [1]
Shen, Yingju [1]
Chen, Yan [1]
Xie, Weibo [1]
Cui, Zhen [1]
Zhang, Hongbo [1]
Chen, Duansheng [1]
Wang, Tian [1]
Liu, Xin [1]
Peng, Shujuan [1]
Gou, Jin [1]
Du, Jixiang [1]
Wang, Jing [1]
Zheng, Wenming [1,2]
Affiliations
[1] Huaqiao Univ, Dept Comp Sci & Technol, Xiamen 361021, Fujian, Peoples R China
[2] Southeast Univ, Nanjing, Jiangsu, Peoples R China
Keywords
Visual tracking; 3D context; Depth information
DOI
10.1016/j.neucom.2014.06.083
CLC classification number
TP18 [Theory of artificial intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
In this paper, we study the challenging problem of tracking a single object in a complex dynamic scene. In contrast to most existing trackers, which only exploit 2D color or grayscale images to learn the appearance model of the tracked object online, we take a different approach: inspired by the increasing popularity of depth sensors, we put more emphasis on the 3D context to prevent model drift and to handle occlusion. Specifically, we propose a 3D context-based object tracking method that learns a set of 3D context key-points, which have spatial-temporal co-occurrence correlations with the tracked object, for collaborative tracking in binocular video data. We first learn the 3D context key-points via spatial-temporal constraints on their spatial and depth coordinates. Then, the position of the object of interest is determined by probability voting from the learnt 3D context key-points. Moreover, using the depth information, a simple yet effective occlusion handling scheme is proposed to detect occlusion and to recover from it. Qualitative and quantitative experimental results on challenging video sequences demonstrate the robustness of the proposed method. © 2014 Elsevier B.V. All rights reserved.
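The abstract describes two mechanisms concrete enough to illustrate in code: localizing the target by probability voting from learnt 3D context key-points, and a depth-based occlusion test. The sketch below is a minimal illustration of those ideas under stated assumptions, not the authors' implementation: the names (ContextKeypoint, vote_for_target, is_occluded), the Gaussian vote weighting, and all thresholds are hypothetical.

```python
import numpy as np

# Hypothetical sketch of voting-based localization from 3D context
# key-points, loosely following the paper's abstract; not the authors' code.

class ContextKeypoint:
    """A key-point that co-occurs with the target; stores its current
    3D position and a learnt 3D offset to the target centre."""
    def __init__(self, position, offset_to_target, weight):
        self.position = np.asarray(position, dtype=float)         # (x, y, depth)
        self.offset = np.asarray(offset_to_target, dtype=float)   # learnt offset
        self.weight = weight                                      # co-occurrence confidence

def vote_for_target(keypoints, grid_shape, sigma=5.0):
    """Each key-point casts a Gaussian-weighted vote centred on its
    predicted target position (position + learnt offset); the peak of
    the accumulated vote map is taken as the target centre."""
    votes = np.zeros(grid_shape)
    ys, xs = np.mgrid[0:grid_shape[0], 0:grid_shape[1]]
    for kp in keypoints:
        cx, cy = (kp.position + kp.offset)[:2]
        votes += kp.weight * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2)
                                    / (2.0 * sigma ** 2))
    row, col = np.unravel_index(np.argmax(votes), votes.shape)
    return col, row   # (x, y) of the highest-vote cell

def is_occluded(target_depth, region_depths, margin=0.2, ratio=0.5):
    """Simple depth-based occlusion test (hypothetical thresholds): if
    more than `ratio` of the pixels in the target region lie closer to
    the camera than the tracked surface by at least `margin`, declare
    the target occluded."""
    closer = region_depths < (target_depth - margin)
    return float(np.mean(closer)) > ratio

# Example: two key-points whose offsets both predict the target at (35, 45).
kps = [ContextKeypoint((30, 40, 2.1), (5, 5, 0), 1.0),
       ContextKeypoint((60, 70, 2.0), (-25, -25, 0), 0.8)]
print(vote_for_target(kps, (100, 100)))   # -> (35, 45)
```

In this reading, each key-point's learnt offset encodes its spatial-temporal co-occurrence with the target, so context key-points that remain visible can keep voting for the target's position even while the target itself is briefly occluded.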
Pages: 710-718
Number of pages: 9
Related papers (50 in total; 10 listed below)
• [1] Jang, Jun-Su; Kanade, Takeo. Robust 3D Head Tracking by Online Feature Registration. 8th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2008), 2008: 261-266.
• [2] Comport, A. I.; Malis, E.; Rives, P. Accurate quadrifocal tracking for robust 3D visual odometry. Proceedings of the 2007 IEEE International Conference on Robotics and Automation, 2007: 40-45.
• [3] Zhou, Jinghao; Wang, Peng; Sun, Haoyang. Discriminative and Robust Online Learning for Siamese Visual Tracking. Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), 2020, 34: 13017-13024.
• [4] Yang, Weiming; Zhao, Meirong; Huang, Yinguo; Zheng, Yelong. Adaptive Online Learning Based Robust Visual Tracking. IEEE Access, 2018, 6: 14790-14798.
• [5] Roberts, TJ; McKenna, SJ; Ricketts, IW. Online appearance learning for 3D articulated human tracking. 16th International Conference on Pattern Recognition, Vol. I, 2002: 425-428.
• [6] Zhang, Shengping; Yao, Hongxun; Zhou, Huiyu; Sun, Xin; Liu, Shaohui. Robust visual tracking based on online learning sparse representation. Neurocomputing, 2013, 100: 31-40.
• [7] Choi, Changhyun; Christensen, Henrik I. Robust 3D Visual Tracking Using Particle Filtering on the SE(3) Group. 2011 IEEE International Conference on Robotics and Automation (ICRA), 2011.
• [8] Lee, Bhoram; Lee, Daniel D. Self-Supervised Online Learning of Appearance for 3D Tracking. 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017: 4930-4937.
• [9] Richa, Rogerio; Bo, Antonio P. L.; Poignet, Philippe. Robust 3D Visual Tracking for Robotic-Assisted Cardiac Interventions. Medical Image Computing and Computer-Assisted Intervention (MICCAI 2010), Pt. I, 2010, 6361: 267-274.
• [10] Wen, Longyin; Cai, Zhaowei; Lei, Zhen; Yi, Dong; Li, Stan Z. Robust Online Learned Spatio-Temporal Context Model for Visual Tracking. IEEE Transactions on Image Processing, 2014, 23(2): 785-796.