Multimodal saliency-based bottom-up attention: a framework for the humanoid robot iCub

Cited by: 64
Authors
Ruesch, Jonas [1 ]
Lopes, Manuel [2 ]
Bernardino, Alexandre [2 ]
Hoernstein, Jonas [2 ]
Santos-Victor, Jose [2 ]
Pfeifer, Rolf [1 ]
Affiliations
[1] Univ Zurich, Dept Informat, Artificial Intelligence Lab, CH-8006 Zurich, Switzerland
[2] Inst Super Tecn, Inst Syst & Robot, Lisbon, Portugal
DOI: 10.1109/ROBOT.2008.4543329
CLC classification: TP [Automation technology; computer technology]
Discipline code: 0812
Abstract
This work presents a multimodal bottom-up attention system for the humanoid robot iCub, in which the robot's decisions to move its eyes and neck are based on visual and acoustic saliency maps. We introduce a modular and distributed software architecture capable of fusing visual and acoustic saliency maps into one egocentric frame of reference. This system endows the robot with an emergent exploratory behavior that reacts to combined visual and auditory saliency. The developed software modules provide a flexible foundation for the open iCub platform and for further experiments and developments, including higher levels of attention and representation of the peripersonal space.
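The abstract describes fusing visual and acoustic saliency maps into a single egocentric frame of reference and selecting a gaze target from the result. The record does not give the fusion formula, so the following is a minimal sketch under a common assumption: each map is normalized, the maps are combined by a weighted sum on a shared egocentric grid, and the winning cell is selected by winner-take-all. The function names, the azimuth-by-elevation grid layout, and the weights are illustrative, not taken from the paper.

```python
import numpy as np

def normalize(m):
    # Scale a saliency map to [0, 1]; a flat map becomes all zeros.
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def fuse_saliency(visual, acoustic, w_v=0.5, w_a=0.5):
    # Assumes both maps are already projected onto the same
    # egocentric grid (e.g. azimuth x elevation). Fusion by
    # normalized weighted sum is an illustrative choice here.
    fused = w_v * normalize(visual) + w_a * normalize(acoustic)
    return normalize(fused)

def attention_target(fused):
    # Winner-take-all: grid cell with the highest fused saliency,
    # returned as (row, col) indices into the egocentric grid.
    return np.unravel_index(np.argmax(fused), fused.shape)
```

A gaze controller would then convert the winning grid cell back into eye/neck joint commands; that mapping is robot-specific and omitted here.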
Pages: 962+
Page count: 2
Related papers
50 items in total
  • [21] Saliency-Based Spatiotemporal Attention for Video Captioning
    Chen, Yangyu
    Zhang, Weigang
    Wang, Shuhui
    Li, Liang
    Huang, Qingming
    2018 IEEE FOURTH INTERNATIONAL CONFERENCE ON MULTIMEDIA BIG DATA (BIGMM), 2018,
  • [22] Analyzing bottom-up saliency in natural movies
    Vig, E.
    Dorr, M.
    Barth, E.
    PERCEPTION, 2010, 39 : 35 - 35
  • [23] RARE: A NEW BOTTOM-UP SALIENCY MODEL
    Riche, Nicolas
    Mancas, Matei
    Gosselin, Bernard
    Dutoit, Thierry
    2012 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP 2012), 2012, : 641 - 644
  • [24] Exploring sex differences in auditory saliency: the role of acoustic characteristics in bottom-up attention
    Obama, Naoya
    Sato, Yoshiki
    Kodama, Narihiro
    Kodani, Yuhei
    Nakamura, Katsuya
    Yokozeki, Ayaka
    Nagami, Shinsuke
    BMC NEUROSCIENCE, 25 (1)
  • [25] A Multimode Teleoperation Framework for Humanoid Loco-Manipulation: An Application for the iCub Robot
    Penco, Luigi
    Scianca, Nicola
    Modugno, Valerio
    Lanari, Leonardo
    Oriolo, Giuseppe
    Ivaldi, Serena
    IEEE ROBOTICS & AUTOMATION MAGAZINE, 2019, 26 (04) : 73 - 82
  • [26] Bottom-Up Visual Attention Model based on FPGA
    Barranco, Francisco
    Diaz, Javier
    Pino, Begona
    Ros, Eduardo
    2012 19th IEEE International Conference on Electronics, Circuits and Systems (ICECS), 2012, : 328 - 331
  • [27] Contextual texture based bottom-up visual attention
    Lang Congyan
    Xu De
    Li Ning
    ICSP: 2008 9TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING, VOLS 1-5, PROCEEDINGS, 2008, : 942 - 945
  • [28] Bottom-Up Saliency Estimation Based on Redundancy Reduction and Global Contrast
    缪小冬
    李舜酩
    沈峘
    李爱婷
    Transactions of Nanjing University of Aeronautics and Astronautics, 2014, 31 (06) : 660 - 667
  • [29] Ray Saliency: Bottom-Up Visual Saliency for a Rotating and Zooming Camera
    Warnell, Garrett
    David, Philip
    Chellappa, Rama
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2016, 116 (02) : 174 - 189