A visual attention model for robot object tracking

Cited by: 5
|
Authors
Chu J.-K. [1 ]
Li R.-H. [1 ]
Li Q.-Y. [1 ,2 ]
Wang H.-Q. [1 ]
Affiliations
[1] School of Mechanical Engineering, Dalian University of Technology
Funding
National Natural Science Foundation of China;
Keywords
Object tracking; Salient regions; Topological perception; Visual attention; Weighted similarity equation;
DOI
10.1007/s11633-010-0039-1
Abstract
Inspired by human behaviors, a robot object tracking model is proposed on the basis of the visual attention mechanism, consistent with the theory of topological perception. The model integrates image-driven, bottom-up attention with object-driven, top-down attention, whereas previous attention models have mostly focused on either bottom-up or top-down attention alone. Through the bottom-up component, the whole scene is segmented into the ground region and the salient regions. Guided by the top-down strategy, which is realized by a topological graph, the object regions are separated from the salient regions; the remaining salient regions are treated as barrier regions. To evaluate the model, a mobile robot platform was developed, on which several experiments were carried out. The experimental results indicate that processing an image with a resolution of 752 × 480 pixels takes less than 200 ms and that the extracted object regions are unabridged. A comparison of the proposed model with existing models demonstrates that the proposed model has advantages for robot object tracking in terms of speed and efficiency. © 2010 Institute of Automation, Chinese Academy of Sciences and Springer-Verlag Berlin Heidelberg.
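The two-stage pipeline described in the abstract can be sketched as follows. This is an illustrative sketch, not the authors' code: the region representation, the stand-in for the topological graph, and the weighted similarity score are all assumptions made for illustration.

```python
# Illustrative sketch (not the paper's implementation): bottom-up saliency
# segmentation followed by top-down filtering against a stored object
# description. Symbolic features stand in for real image descriptors.

def bottom_up_segment(scene, ground_label="ground"):
    """Split the scene into the ground region and candidate salient regions."""
    ground = [r for r in scene if r["label"] == ground_label]
    salient = [r for r in scene if r["label"] != ground_label]
    return ground, salient

def top_down_select(salient, object_model, threshold=0.5):
    """Separate object regions from barrier regions using a weighted
    similarity score (a stand-in for the paper's topological-graph match)."""
    objects, barriers = [], []
    for region in salient:
        # Weighted agreement between the region's features and the model.
        score = sum(w * (region["features"].get(k) == v)
                    for k, (v, w) in object_model.items())
        (objects if score >= threshold else barriers).append(region)
    return objects, barriers

# Toy scene: one ground region plus two salient regions.
scene = [
    {"label": "ground", "features": {}},
    {"label": "A", "features": {"holes": 1, "color": "red"}},
    {"label": "B", "features": {"holes": 0, "color": "blue"}},
]
# Hypothetical object model: feature -> (expected value, weight).
object_model = {"holes": (1, 0.6), "color": ("red", 0.4)}

ground, salient = bottom_up_segment(scene)
objects, barriers = top_down_select(salient, object_model)
print([r["label"] for r in objects])   # -> ['A']  (object regions)
print([r["label"] for r in barriers])  # -> ['B']  (barrier regions)
```

The split into object and barrier regions mirrors the abstract's statement that "the salient regions except the object regions are the barrier regions"; the thresholded weighted score is only one plausible way to realize the paper's weighted similarity equation.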
Pages: 39-46
Page count: 7
Related Papers
50 records in total
  • [21] Spatial spread of visual attention while tracking a moving object
    Matsubara, Kazuya
    Shioiri, Satoshi
    Yaguchi, Hirohisa
    OPTICAL REVIEW, 2007, 14 (01) : 57 - 63
  • [22] Siamese Graph Attention Networks for robust visual object tracking
    Lu, Junjie
    Li, Shengyang
    Guo, Weilong
    Zhao, Manqi
    Yang, Jian
    Liu, Yunfei
    Zhou, Zhuang
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2023, 229
  • [23] CATrack: Convolution and Attention Feature Fusion for Visual Object Tracking
    Zhang, Longkun
    Wen, Jiajun
    Dai, Zichen
    Zhou, Rouyi
    Lai, Zhihui
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT IX, 2024, 14433 : 469 - 480
  • [24] Spatial spread of visual attention while tracking a moving object
    Ishii, Kei
    Matumiya, Kazumichi
    Kuriki, Ichiro
    Shioiri, Satoshi
    I-PERCEPTION, 2014, 5 (04): : 397 - 397
  • [25] Evota: an enhanced visual object tracking network with attention mechanism
    Zhao, An
    Zhang, Yi
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (8) : 24939 - 24960
  • [26] MASNet: mixed attention Siamese network for visual object tracking
    Zhang, Jianwei
    Zhang, Zhichen
    Zhang, Huanlong
    Wang, Jingchao
    Wang, He
    Zheng, Menya
    SYSTEMS SCIENCE & CONTROL ENGINEERING, 2024, 12 (01)
  • [27] Transformer visual object tracking algorithm based on mixed attention
    Hou Z.-Q.
    Guo F.
    Yang X.-L.
    Ma S.-G.
    Fan J.-L.
    Kongzhi yu Juece/Control and Decision, 2024, 39 (03): : 739 - 748
  • [28] Visual servoing based positioning and object tracking on humanoid robot
    Bombile, Michael
    Lecture Notes in Electrical Engineering, 2015, 312 : 19 - 27
  • [29] Visual tracking of a moving object of a robot head with 3 DOF
    Bie, HG
    Huang, Q
    Zhang, WM
    Song, B
    Li, KJ
    2003 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS, INTELLIGENT SYSTEMS AND SIGNAL PROCESSING, VOLS 1 AND 2, PROCEEDINGS, 2003, : 686 - 691
  • [30] Visual Localization and Object Tracking for the NAO Robot in Dynamic Environment
    Li, Chao
    Wang, Xin
    2016 IEEE INTERNATIONAL CONFERENCE ON INFORMATION AND AUTOMATION (ICIA), 2016, : 1044 - 1049