A multi-modal object attention system for a mobile robot

Cited by: 20
Authors
Haasch, A [1 ]
Hofemann, N [1 ]
Fritsch, J [1 ]
Sagerer, G [1 ]
Affiliation
[1] Univ Bielefeld, Fac Technol, D-33594 Bielefeld, Germany
Keywords
object attention; human-robot interaction; robot companion
DOI
10.1109/IROS.2005.1545191
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Robot companions are intended for operation in private homes with naive users. For this purpose, they need to be endowed with natural interaction capabilities. Additionally, such robots will need to be taught unknown objects that are present in private homes. We present a multi-modal object attention system that is able to identify objects referenced by the user with gestures and verbal instructions. The proposed system can detect known and unknown objects and stores newly acquired object information in a scene model for later retrieval. This way, the growing knowledge base of the robot companion improves the interaction quality as the robot can more easily focus its attention on objects it has been taught previously.
Pages: 1499-1504
Page count: 6
Related papers (50 total)
  • [31] Development of an active robot system for multi-modal paranasal sinus surgery
    Wurm, J
    Bumm, K
    Steinhart, H
    Vogele, M
    Schaaf, HG
    Nimsky, C
    Bale, R
    Zenk, J
    Iro, H
    HNO, 2005, 53 (05) : 446+
  • [32] Cross-modal attention for multi-modal image registration
    Song, Xinrui
    Chao, Hanqing
    Xu, Xuanang
    Guo, Hengtao
    Xu, Sheng
    Turkbey, Baris
    Wood, Bradford J.
    Sanford, Thomas
    Wang, Ge
    Yan, Pingkun
    MEDICAL IMAGE ANALYSIS, 2022, 82
  • [33] A Multi-modal Approach for Enhancing Object Placement
    Srimal, P. H. D. Arjuna S.
    Jayasekara, A. G. Buddhika P.
    PROCEEDINGS OF THE 2017 6TH NATIONAL CONFERENCE ON TECHNOLOGY & MANAGEMENT (NCTM) - EXCEL IN RESEARCH AND BUILD THE NATION, 2017, : 17 - 22
  • [34] Multi-modal Queried Object Detection in the Wild
    Xu, Yifan
    Zhang, Mengdan
    Fu, Chaoyou
    Chen, Peixian
    Yang, Xiaoshan
    Li, Ke
    Xu, Changsheng
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [35] Multi-modal Attention for Speech Emotion Recognition
    Pan, Zexu
    Luo, Zhaojie
    Yang, Jichen
    Li, Haizhou
    INTERSPEECH 2020, 2020, : 364 - 368
  • [36] Deep Object Tracking with Multi-modal Data
    Zhang, Xuezhi
    Yuan, Yuan
    Lu, Xiaoqiang
    2016 INTERNATIONAL CONFERENCE ON COMPUTER, INFORMATION AND TELECOMMUNICATION SYSTEMS (CITS), 2016, : 161 - 165
  • [37] Attention driven multi-modal similarity learning
    Gao, Xinjian
    Mu, Tingting
    Goulermas, John Y.
    Wang, Meng
    INFORMATION SCIENCES, 2018, 432 : 530 - 542
  • [38] A bioinspired multi-modal flying and walking robot
    Daler, Ludovic
    Mintchev, Stefano
    Stefanini, Cesare
    Floreano, Dario
    BIOINSPIRATION & BIOMIMETICS, 2015, 10 (01)
  • [39] RouteMe: A Mobile Recommender System for Personalized, Multi-Modal Route Planning
    Herzog, Daniel
    Massoud, Hesham
    Woerndl, Wolfgang
    PROCEEDINGS OF THE 25TH CONFERENCE ON USER MODELING, ADAPTATION AND PERSONALIZATION (UMAP'17), 2017, : 67 - 75
  • [40] Multi-Modal Multi sensor Interaction between Human and Heterogeneous Multi-Robot System
    Al Mahi, S. M.
    ICMI'18: PROCEEDINGS OF THE 20TH ACM INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2018, : 524 - 528