Embedded and real-time architecture for bio-inspired vision-based robot navigation

Cited by: 0
Authors
Laurent Fiack
Nicolas Cuperlier
Benoît Miramond
Affiliations
[1] ETIS Lab, UMR 8051, CNRS/ENSEA/UCP
Source
Journal of Real-Time Image Processing
Keywords
Real-time vision; Feature detection; FPGA; Robot navigation; Sensation/action;
DOI
Not available
Abstract
A recent trend across robotics tasks is to treat vision as the primary sense for perceiving the environment and for interacting with humans. Vision processing therefore becomes a central and challenging issue in the design of real-time control architectures. In this paper we follow a biological inspiration and propose a real-time, embedded control system that relies on visual attention to learn specific actions in each place recognized by our robot. Faced with this performance challenge, the attentional model reduces vision processing to a few regions of the visual field. The computational complexity of the visual chain nevertheless remains an issue for a processing system embedded on an indoor robot. The first part of our system is therefore a full-hardware architecture, prototyped on reconfigurable devices, that detects salient features at the camera frame rate. The second part continuously learns these features in order to carry out specific robotic tasks. This neural control layer is implemented as embedded software, making the robot fully autonomous from a computational point of view. Integrating such a system on the robot not only accelerates the frame rate of the visual processing and relieves the control architecture, but also compresses the data flow at the output of the camera, thus reducing communication and energy consumption. We present the complete embedded sensorimotor architecture and the experimental setup. The presented results demonstrate its real-time behavior in vision-based navigation tasks.
Pages: 699 - 722
Page count: 23
Related papers (50 total)
  • [1] Embedded and real-time architecture for bio-inspired vision-based robot navigation
    Fiack, Laurent
    Cuperlier, Nicolas
    Miramond, Benoit
    [J]. JOURNAL OF REAL-TIME IMAGE PROCESSING, 2015, 10 (04) : 699 - 722
  • [2] Bio-Inspired Real-Time Robot Vision for Collision Avoidance
    Okuno, Hirotsugu
    Yagi, Tetsuya
    [J]. JOURNAL OF ROBOTICS AND MECHATRONICS, 2008, 20 (01) : 68 - 74
  • [3] A Bio-Inspired and Solely Vision-Based Model for Autonomous Navigation
    Sun, Xuelong
    Peng, Jigen
    [J]. Institute of Electrical and Electronics Engineers Inc.
  • [4] A real-time vision-based outdoor navigation system for the wheelchair robot
    Qi, Xiaojun
    Ge, Yinbing
    [J]. PROCEEDINGS OF THE 12TH IASTED INTERNATIONAL CONFERENCE ON ROBOTICS AND APPLICATIONS, 2006: 85+
  • [5] REAL-TIME VISION-BASED ROBOT LOCALIZATION
    ATIYA, S
    HAGER, GD
    [J]. IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, 1993, 9 (06): 785 - 800
  • [6] Real-time vision-based relative aircraft navigation
    Georgia Institute of Technology, Atlanta, GA 30332-0150
    [J]. J. Aerosp. Comput. Inf. Commun., 2007, 4: 707 - 738
  • [7] Bio-inspired heterogeneous architecture for real-time pedestrian detection applications
    Luca Maggiani
    Cédric Bourrasset
    Jean-Charles Quinton
    François Berry
    Jocelyn Sérot
    [J]. Journal of Real-Time Image Processing, 2018, 14 (03): 535 - 548
  • [8] Model-aided and vision-based navigation for an aerial robot in real-time application
    Alizadeh, M.
    Khoshnood, A. M.
    [J]. INTELLIGENT SERVICE ROBOTICS, 2024, 17 (04) : 731 - 744
  • [10] Real-time Vision-based UAV Navigation in Fruit Orchards
    Hulens, Dries
    Vandersteegen, Maarten
    Goedeme, Toon
    [J]. PROCEEDINGS OF THE 12TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISIGRAPP 2017), VOL 4, 2017, : 617 - 622