Knowledge acquisition through human-robot multimodal interaction

Cited by: 23
Authors:
Randelli, Gabriele [1 ]
Bonanni, Taigo Maria [1 ]
Iocchi, Luca [1 ]
Nardi, Daniele [1 ]
Affiliation:
[1] Univ Roma La Sapienza, Dept Comp Control & Management Engn, I-00185 Rome, Italy
Keywords:
Human-robot interaction; Knowledge representation; Symbol grounding; Recognition
DOI: 10.1007/s11370-012-0123-1
Chinese Library Classification: TP24 [Robotics]
Discipline codes: 080202; 1405
Abstract:
The limited understanding of the surrounding environment still restricts the capabilities of robotic systems in real-world applications. Specifically, the acquisition of knowledge about the environment typically relies on perception alone, which requires intensive ad hoc training and is not sufficiently reliable in general settings. In this paper, we integrate new acquisition devices, such as tangible user interfaces, speech technologies, and vision-based systems, with established AI methodologies to present a novel and effective knowledge acquisition approach. We propose a natural interaction paradigm in which humans move through the environment together with the robot and easily convey information by selecting relevant spots, objects, or other landmarks. The synergy between novel interaction technologies and semantic knowledge leverages humans' cognitive skills to support robots in acquiring and grounding knowledge about the environment; this richer representation can then be exploited to realize autonomous robot skills for task accomplishment.
Pages: 19-31 (13 pages)
Related papers (50 records):
  • [41] Trick, Susanne; Koert, Dorothea; Peters, Jan; Rothkopf, Constantin A. Multimodal Uncertainty Reduction for Intention Recognition in Human-Robot Interaction. 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019: 7009-7016.
  • [42] Lyu, Shangke; Cheah, Chien Chern. Human-robot Interaction Control through Demonstration. 2018 26th Mediterranean Conference on Control and Automation (MED), 2018: 1-6.
  • [43] Zhao, Xiyuan; Li, Huijun; Miao, Tianyuan; Zhu, Xianyi; Wei, Zhikai; Tan, Lifen; Song, Aiguo. Learning Multimodal Confidence for Intention Recognition in Human-Robot Interaction. IEEE Robotics and Automation Letters, 2024, 9(9): 7819-7826.
  • [44] Alonso-Martin, Fernando; Malfaz, Maria; Sequeira, Joao; Gorostiza, Javier F.; Salichs, Miguel A. A Multimodal Emotion Detection System during Human-Robot Interaction. Sensors, 2013, 13(11): 15549-15581.
  • [45] Cifuentes, Carlos A.; Rodriguez, Camilo; Frizera-Neto, Anselmo; Bastos-Filho, Teodiano Freire; Carelli, Ricardo. Multimodal Human-Robot Interaction for Walker-Assisted Gait. IEEE Systems Journal, 2016, 10(3): 933-943.
  • [46] Uimonen, Mikael; Kemppi, Paul; Hakanen, Taru. A Gesture-based Multimodal Interface for Human-Robot Interaction. 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2023: 165-170.
  • [47] Alonso-Martin, Fernando; Gorostiza, Javier F.; Malfaz, Maria; Salichs, Miguel A. Multimodal Fusion as Communicative Acts during Human-Robot Interaction. Cybernetics and Systems, 2013, 44(8): 681-703.
  • [48] Aly, Amir; Tapus, Adriana. Multimodal Adapted Robot Behavior Synthesis within a Narrative Human-Robot Interaction. 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015: 2986-2993.
  • [49] Lu, Dongcai; Chen, Xiaoping. Interpreting and Extracting Open Knowledge for Human-Robot Interaction. IEEE/CAA Journal of Automatica Sinica, 2017, 4(4): 686-695.