Multi-modal interaction of human and home robot in the context of room map generation

Cited: 18
|
Authors
Ghidary, SS [1 ]
Nakata, Y [1 ]
Saito, H [1 ]
Hattori, M [1 ]
Takamori, T [1 ]
Affiliations
[1] Kobe Univ, Fac Engn, Dept Comp Syst, Kobe, Hyogo, Japan
Keywords
human-robot interaction; human detection; object localization; robot positioning; map generation;
DOI
10.1023/A:1019689509522
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In robotics, human-robot interaction has recently attracted considerable attention. In this paper, we describe a multi-modal system for generating a map of the environment through interaction between a human and a home robot. The system enables people to teach a newcomer robot the attributes of objects and places in a room through speech commands and hand gestures. The robot learns the size, position, and topological relations of objects, and produces a map of the room from the knowledge acquired through communication with the human. The developed system consists of several components: natural language processing, posture recognition, object localization, and map generation. It combines multiple sources of information with model matching to detect and track the human hand, so that the user can point at an object of interest and guide the robot either to approach it or to record that object's position in the room. Object positions in the room are located by monocular camera vision and a depth-from-focus method.
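The abstract mentions locating object positions with monocular vision and a depth-from-focus method. The paper's own implementation is not reproduced here; the following is a minimal sketch of the depth-from-focus idea only, in which a focal stack of frames taken at known focus distances is scored with a sharpness measure (variance of a discrete Laplacian), and the focus distance of the sharpest frame is taken as the depth estimate. All function names and the choice of focus measure are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def focus_measure(img):
    """Variance of a discrete 4-neighbor Laplacian.

    Sharp (in-focus) regions have strong local intensity changes,
    so the Laplacian response, and hence its variance, is large.
    """
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return lap.var()

def depth_from_focus(stack, depths):
    """Estimate depth from a focal stack.

    stack  : list of grayscale frames, each captured with the lens
             focused at the corresponding entry of `depths`.
    depths : focus distances (same length as `stack`).
    Returns the focus distance of the sharpest frame.
    """
    scores = [focus_measure(frame.astype(float)) for frame in stack]
    return depths[int(np.argmax(scores))]
```

A real system would apply the measure per image patch (around the pointed-at object) rather than over the whole frame, and interpolate between focus settings for sub-step depth resolution.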
Pages: 169-184
Page count: 16
Related papers
50 records in total
  • [31] Context-aware selection of multi-modal conversational fillers in human-robot dialogues
    Galle, Matthias
    Kynev, Ekaterina
    Monet, Nicolas
    Legras, Christophe
    2017 26TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN), 2017, : 317 - 322
  • [32] Things that see: Context-aware multi-modal interaction
    Crowley, James L.
    COGNITIVE VISION SYSTEMS: SAMPLING THE SPECTRUM OF APPROACHES, 2006, 3948 : 183 - 198
  • [33] Multi-modal referring expressions in human-human task descriptions and their implications for human-robot interaction
    Gross, Stephanie
    Krenn, Brigitte
    Scheutz, Matthias
    INTERACTION STUDIES, 2016, 17 (02) : 180 - 210
  • [34] A Multi-party Multi-modal Dataset for Focus of Visual Attention in Human-human and Human-robot Interaction
    Stefanov, Kalin
    Beskow, Jonas
    LREC 2016 - TENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2016, : 4440 - 4444
  • [35] Model Predictive Control with Gaussian Processes for Flexible Multi-Modal Physical Human Robot Interaction
    Haninger, Kevin
    Hegeler, Christian
    Peternel, Luka
    2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2022, 2022, : 6948 - 6955
  • [36] A Multi-Modal and Collaborative Human–Machine Interface for a Walking Robot
    J. Estremera
    E. Garcia
    P. Gonzalez de Santos
    Journal of Intelligent and Robotic Systems, 2002, 35 : 397 - 425
  • [37] An Iterative Interaction-Design Method for Multi-Modal Robot Communication
    Saad, Elie
    Broekens, Joost
    Neerincx, Mark A.
    2020 29TH IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN), 2020, : 690 - 697
  • [38] Multi-Modal Interaction Device
    Kim, Yul Hee
    Byeon, Sang-Kyu
    Kim, Yu-Joon
    Choi, Dong-Soo
    Kim, Sang-Youn
    INTERNATIONAL CONFERENCE ON MECHANICAL DESIGN, MANUFACTURE AND AUTOMATION ENGINEERING (MDMAE 2014), 2014, : 327 - 330
  • [39] Multi-modal interaction in biomedicine
    Zudilova, EV
    Sloot, PMA
    AMBIENT INTELLIGENCE FOR SCIENTIFIC DISCOVERY: FOUNDATIONS, THEORIES, AND SYSTEMS, 2005, 3345 : 184 - 201
  • [40] Multi-modal Controls of A Smart Robot
    Mishra, Anurag
    Makula, Pooja
    Kumar, Akshay
    Karan, Krit
    Mittal, V. K.
    2015 ANNUAL IEEE INDIA CONFERENCE (INDICON), 2015,