Multi-modal human robot interaction for map generation

Cited by: 0
Authors
Saito, H [1 ]
Ishimura, K [1 ]
Hattori, M [1 ]
Takamori, T [1 ]
Affiliations
[1] Kobe Univ, Kobe, Hyogo 6578501, Japan
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation and Computer Technology];
Discipline Code
0812
Abstract
This paper describes an interface for multi-modal human-robot interaction that enables people to teach a newcomer robot the attributes of objects and places in a room through speech commands and hand gestures. The robot generates a map of its environment based on knowledge learned through communication with humans and uses this map for navigation.
Pages: 2721 - 2724
Page count: 4
Related Papers
50 records in total
  • [1] Multi-modal human robot interaction for map generation
    Ghidary, SS
    Nakata, Y
    Saito, H
    Hattori, M
    Takamori, T
    [J]. IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4: EXPANDING THE SOCIETAL ROLE OF ROBOTICS IN THE NEXT MILLENNIUM, 2001, : 2246 - 2251
  • [2] Multi-modal interaction of human and home robot in the context of room map generation
    Ghidary, SS
    Nakata, Y
    Saito, H
    Hattori, M
    Takamori, T
    [J]. AUTONOMOUS ROBOTS, 2002, 13 (02) : 169 - 184
  • [3] Multi-Modal Interaction of Human and Home Robot in the Context of Room Map Generation
    Saeed Shiry Ghidary
    Yasushi Nakata
    Hiroshi Saito
    Motofumi Hattori
    Toshi Takamori
    [J]. Autonomous Robots, 2002, 13 : 169 - 184
  • [4] Multi-modal anchoring for human-robot interaction
    Fritsch, J
    Kleinehagenbrock, M
    Lang, S
    Plötz, T
    Fink, GA
    Sagerer, G
    [J]. ROBOTICS AND AUTONOMOUS SYSTEMS, 2003, 43 (2-3) : 133 - 147
  • [5] Wearable Multi-modal Interface for Human Multi-robot Interaction
    Gromov, Boris
    Gambardella, Luca M.
    Di Caro, Gianni A.
    [J]. 2016 IEEE INTERNATIONAL SYMPOSIUM ON SAFETY, SECURITY, AND RESCUE ROBOTICS (SSRR), 2016, : 240 - 245
  • [6] Multi-modal interfaces for natural Human-Robot Interaction
    Andronas, Dionisis
    Apostolopoulos, George
    Fourtakas, Nikos
    Makris, Sotiris
    [J]. 10TH CIRP SPONSORED CONFERENCE ON DIGITAL ENTERPRISE TECHNOLOGIES (DET 2020) - DIGITAL TECHNOLOGIES AS ENABLERS OF INDUSTRIAL COMPETITIVENESS AND SUSTAINABILITY, 2021, 54 : 197 - 202
  • [7] Multi-modal Language Models for Human-Robot Interaction
    Janssens, Ruben
    [J]. COMPANION OF THE 2024 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, HRI 2024 COMPANION, 2024, : 109 - 111
  • [8] Human-Robot Interaction with Multi-Human Social Pattern Inference on a Multi-Modal Robot
    Tseng, Shih-Huan
    Wu, Tung-Yen
    Cheng, Ching-Ying
    Fu, Li-Chen
    [J]. 2014 14TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS (ICCAS 2014), 2014, : 819 - 824
  • [9] Continuous Multi-Modal Interaction Causes Human-Robot Alignment
    Wallkotter, Sebastian
    Joannou, Michael
    Westlake, Samuel
    Belpaeme, Tony
    [J]. PROCEEDINGS OF THE 5TH INTERNATIONAL CONFERENCE ON HUMAN AGENT INTERACTION (HAI'17), 2017, : 375 - 379
  • [10] Generation method of a robot task program for multi-modal human-robot interface
    Aramaki, Shigeto
    Nagai, Tatsuichirou
    Yayoshi, Koutarou
    Tsuruoka, Tomoaki
    Kawamura, Masato
    Kurono, Shigeru
    [J]. 2006 IEEE INTERNATIONAL CONFERENCE ON INDUSTRIAL TECHNOLOGY, VOLS 1-6, 2006, : 1450 - +