Multi-modal interaction of human and home robot in the context of room map generation

Cited: 18
Authors
Ghidary, SS [1 ]
Nakata, Y [1 ]
Saito, H [1 ]
Hattori, M [1 ]
Takamori, T [1 ]
Affiliation
[1] Kobe Univ, Fac Engn, Dept Comp Syst, Kobe, Hyogo, Japan
Keywords
human-robot interaction; human detection; object localization; robot positioning; map generation;
DOI
10.1023/A:1019689509522
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In robotics, human-robot interaction has recently been receiving considerable attention. In this paper, we describe a multi-modal system for generating a map of the environment through interaction between a human and a home robot. The system enables people to teach a newcomer robot the attributes of objects and places in a room through speech commands and hand gestures. The robot learns the size, position, and topological relations of objects, and produces a map of the room based on the knowledge acquired through communication with the human. The developed system consists of several components, including natural language processing, posture recognition, object localization, and map generation. It combines multiple sources of information with model matching to detect and track a human hand, so that the user can point toward an object of interest and guide the robot either to approach it or to locate that object's position in the room. The positions of objects in the room are determined by monocular camera vision and the depth-from-focus method.
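The abstract's object-localization step relies on depth from focus: with a monocular camera, the focus setting at which a surface patch appears sharpest indicates its distance. As an illustration only (not the paper's implementation — the function names, the Laplacian sharpness measure, and the synthetic focal stack are all assumptions), a minimal per-pixel sketch:

```python
import numpy as np

def focus_measure(img):
    """Per-pixel sharpness: squared response of a discrete Laplacian.

    Sharply focused regions have high local contrast, hence a large
    Laplacian response; defocused (blurred) regions a small one.
    """
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return lap ** 2

def depth_from_focus(stack, focus_depths):
    """For each pixel, pick the focus setting that maximizes sharpness.

    stack:        (n, h, w) array of frames taken at n focus settings
    focus_depths: (n,) array mapping each focus setting to a depth
    Returns an (h, w) depth map.
    """
    measures = np.stack([focus_measure(frame) for frame in stack])
    best = np.argmax(measures, axis=0)   # index of sharpest frame per pixel
    return focus_depths[best]
```

In practice the sharpness measure is averaged over a small window, and the peak over focus settings is interpolated for sub-step depth resolution; the argmax above is the bare principle.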
Pages: 169-184
Page count: 16
Related Papers
50 records
  • [21] Multi-modal human-robot interface for interaction with a remotely operating mobile service robot
    Fischer, C
    Schmidt, G
    ADVANCED ROBOTICS, 1998, 12 (04) : 397 - 409
  • [22] Are You Sure? - Multi-Modal Human Decision Uncertainty Detection in Human-Robot Interaction
    Scherf, Lisa
    Gasche, Lisa Alina
    Chemangui, Eya
    Koert, Dorothea
    PROCEEDINGS OF THE 2024 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, HRI 2024, 2024, : 621 - 629
  • [23] A Probabilistic Approach for Attention-Based Multi-Modal Human-Robot Interaction
    Begum, Momotaz
    Karray, Fakhri
    Mann, George K. I.
    Gosine, Raymond
    RO-MAN 2009: THE 18TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, VOLS 1 AND 2, 2009, : 909 - +
  • [24] Designing and Implementing a Platform for Collecting Multi-Modal Data of Human-Robot Interaction
    Vaughan, Brian
    Han, Jing Guang
    Gilmartin, Emer
    Campbell, Nick
    ACTA POLYTECHNICA HUNGARICA, 2012, 9 (01) : 7 - 17
  • [26] A Multi-modal Human Robot Interaction Framework based on Cognitive Behavioral Therapy Model
    Rastogi, Neelesh
    Keshtkar, Fazel
    Miah, Md Suruz
    PROCEEDINGS OF THE WORKSHOP ON HUMAN-HABITAT FOR HEALTH (H3'18): HUMAN-HABITAT MULTIMODAL INTERACTION FOR PROMOTING HEALTH AND WELL-BEING IN THE INTERNET OF THINGS ERA, 2018,
  • [27] Editorial: Integrated Multi-modal and Sensorimotor Coordination for Enhanced Human-Robot Interaction
    Fang, Bin
    Fang, Cheng
    Wen, Li
    Manoonpong, Poramate
    FRONTIERS IN NEUROROBOTICS, 2021, 15
  • [28] Multi-Modal Humanoid Robot
    Thoshith, S.
    Mulgund, Samarth
    Sindgi, Praveen
    Yogesh, N.
    Kumaraswamy, R.
    2018 INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS, SIGNAL PROCESSING AND NETWORKING (WISPNET), 2018,
  • [29] Robot System Assistant (RoSA): Towards Intuitive Multi-Modal and Multi-Device Human-Robot Interaction
    Strazdas, Dominykas
    Hintz, Jan
    Khalifa, Aly
    Abdelrahman, Ahmed A.
    Hempel, Thorsten
    Al-Hamadi, Ayoub
    SENSORS, 2022, 22 (03)
  • [30] Multi-modal robot interfaces
    Springer Tracts in Advanced Robotics, 2005, 14 : 5 - 7