A multi-modal object attention system for a mobile robot

Cited by: 20
Authors
Haasch, A [1]
Hofemann, N [1]
Fritsch, J [1]
Sagerer, G [1]
Affiliations
[1] Univ Bielefeld, Fac Technol, D-33594 Bielefeld, Germany
Keywords
object attention; human-robot interaction; robot companion;
DOI
10.1109/IROS.2005.1545191
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Robot companions are intended for operation in private homes with naive users. For this purpose, they need to be endowed with natural interaction capabilities. Additionally, such robots will need to be taught unknown objects that are present in private homes. We present a multi-modal object attention system that is able to identify objects referenced by the user with gestures and verbal instructions. The proposed system can detect known and unknown objects and stores newly acquired object information in a scene model for later retrieval. This way, the growing knowledge base of the robot companion improves the interaction quality as the robot can more easily focus its attention on objects it has been taught previously.
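The abstract describes resolving a user's deictic gesture and verbal instruction against a scene model that accumulates object knowledge over time. The following is a minimal illustrative sketch of that idea only, not the authors' implementation: the class and function names (SceneObject, SceneModel, resolve_reference) and the simple angular-matching heuristic are assumptions introduced here for clarity.

```python
# Illustrative sketch of a scene model that resolves multi-modal object
# references (pointing gesture + spoken label). All names and the matching
# heuristic are hypothetical, not from the paper.
from dataclasses import dataclass, field
import math


@dataclass
class SceneObject:
    label: str            # verbal label taught by the user
    position: tuple       # (x, y) position in the scene, in metres
    known: bool = True    # False until the user has named the object


@dataclass
class SceneModel:
    objects: list = field(default_factory=list)

    def add(self, obj: SceneObject) -> None:
        """Store a newly acquired object for later retrieval."""
        self.objects.append(obj)

    def resolve_reference(self, pointing_origin, pointing_dir,
                          spoken_label=None, max_angle=math.radians(20)):
        """Return the stored object that best matches a pointing gesture,
        optionally constrained by a spoken label."""
        best, best_angle = None, max_angle
        for obj in self.objects:
            if spoken_label is not None and obj.label != spoken_label:
                continue
            dx = obj.position[0] - pointing_origin[0]
            dy = obj.position[1] - pointing_origin[1]
            # angular distance between the pointing ray and the object
            angle = abs(math.atan2(dy, dx)
                        - math.atan2(pointing_dir[1], pointing_dir[0]))
            angle = min(angle, 2 * math.pi - angle)
            if angle < best_angle:
                best, best_angle = obj, angle
        return best


# Usage: teach the robot a new object, then refer to it again later.
scene = SceneModel()
scene.add(SceneObject(label="cup", position=(1.0, 0.5)))
target = scene.resolve_reference(pointing_origin=(0.0, 0.0),
                                 pointing_dir=(1.0, 0.4),
                                 spoken_label="cup")
print(target)
```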
Pages: 1499-1504
Number of pages: 6
Related Papers
50 records
  • [41] Object Interaction Recommendation with Multi-Modal Attention-based Hierarchical Graph Neural Network
    Zhang, Huijuan
    Liang, Lipeng
    Wang, Dongqing
    2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2021, : 295 - 305
  • [42] Towards multi-modal mobile telepresence and telemanipulation
    Reinicke, C
    Buss, M
    INTERNET CHALLENGE: TECHNOLOGY AND APPLICATIONS, 2002, : 55 - 62
  • [43] Robot System Assistant (RoSA): concept for an intuitive multi-modal and multi-device interaction system
    Strazdas, Dominykas
    Hintz, Jan
    Khalifa, Aly
    Al-Hamadi, Ayoub
    PROCEEDINGS OF THE 2021 IEEE INTERNATIONAL CONFERENCE ON HUMAN-MACHINE SYSTEMS (ICHMS), 2021, : 247 - 250
  • [44] An attention based multi-modal gender identification system for social media users
    Suman, Chanchal
    Chaudhary, Rohit Shyamkant
    Saha, Sriparna
    Bhattacharyya, Pushpak
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (19) : 27033 - 27055
  • [46] OHO: A Multi-Modal, Multi-Purpose Dataset for Human-Robot Object Hand-Over
    Stephan, Benedict
    Koehler, Mona
    Mueller, Steffen
    Zhang, Yan
    Gross, Horst-Michael
    Notni, Gunther
    SENSORS, 2023, 23 (18)
  • [47] A Multi-modal Gesture Recognition System in a Human-Robot Interaction Scenario
    Li, Zhi
    Jarvis, Ray
    2009 IEEE INTERNATIONAL WORKSHOP ON ROBOTIC AND SENSORS ENVIRONMENTS (ROSE 2009), 2009, : 41 - 46
  • [48] Contextual Inter-modal Attention for Multi-modal Sentiment Analysis
    Ghosal, Deepanway
    Akhtar, Md Shad
    Chauhan, Dushyant
    Poria, Soujanya
    Ekbal, Asif
    Bhattacharyya, Pushpak
    2018 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2018), 2018, : 3454 - 3466
  • [49] Mixture of Attention Variants for Modal Fusion in Multi-Modal Sentiment Analysis
    He, Chao
    Zhang, Xinghua
    Song, Dongqing
    Shen, Yingshan
    Mao, Chengjie
    Wen, Huosheng
    Zhu, Dingju
    Cai, Lihua
    BIG DATA AND COGNITIVE COMPUTING, 2024, 8 (02)
  • [50] Deep Multi-modal Object Detection for Autonomous Driving
    Ennajar, Amal
    Khouja, Nadia
    Boutteau, Remi
    Tlili, Fethi
    2021 18TH INTERNATIONAL MULTI-CONFERENCE ON SYSTEMS, SIGNALS & DEVICES (SSD), 2021, : 7 - 11