Knowledge acquisition through human-robot multimodal interaction

Cited by: 23
Authors:
Randelli, Gabriele [1 ]
Bonanni, Taigo Maria [1 ]
Iocchi, Luca [1 ]
Nardi, Daniele [1 ]
Affiliations:
[1] Univ Roma La Sapienza, Dept Comp Control & Management Engn, I-00185 Rome, Italy
Keywords:
Human-robot interaction; Knowledge representation; Symbol grounding; Recognition
DOI:
10.1007/s11370-012-0123-1
CLC classification:
TP24 [Robotics]
Discipline codes:
080202; 1405
Abstract:
The limited understanding of the surrounding environment still restricts the capabilities of robotic systems in real-world applications. Specifically, the acquisition of knowledge about the environment typically relies only on perception, which requires intensive ad hoc training and is not sufficiently reliable in a general setting. In this paper, we integrate new acquisition devices, such as tangible user interfaces, speech technologies, and vision-based systems, with established AI methodologies to present a novel and effective knowledge acquisition approach. A natural interaction paradigm is presented in which humans move within the environment together with the robot and easily acquire information by selecting relevant spots, objects, or other landmarks. The synergy between novel interaction technologies and semantic knowledge leverages humans' cognitive skills to support robots in acquiring and grounding knowledge about the environment; this richer representation can be exploited in the realization of autonomous robot skills for task accomplishment.
Pages: 19-31 (13 pages)
Related papers (50 total):
  • [21] A Multimodal Human-Robot Interaction Manager for Assistive Robots. Abbasi, Bahareh; Monaikul, Natawut; Rysbek, Zhanibek; Di Eugenio, Barbara; Zefran, Milos. 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019: 6756-6762
  • [22] DiGeTac Unit for Multimodal Communication in Human-Robot Interaction. Al, Gorkem Anil; Martinez-Hernandez, Uriel. IEEE Sensors Letters, 2024, 8(05)
  • [23] Multimodal QOL Estimation During Human-Robot Interaction. Nakagawa, Satoshi; Kuniyoshi, Yasuo. 2024 IEEE International Conference on Digital Health (ICDH), 2024: 23-32
  • [24] Probabilistic Multimodal Modeling for Human-Robot Interaction Tasks. Campbell, Joseph; Stepputtis, Simon; Amor, Heni Ben. Robotics: Science and Systems XV, 2019
  • [25] Multimodal Target Prediction for Rapid Human-Robot Interaction. Mitra, Mukund; Patil, Ameya; Mothish, G. V. S.; Kumar, Gyanig; Mukhopadhyay, Abhishek; Murthy, L. R. D.; Chakrabarti, Partha Pratim; Biswas, Pradipta. Companion Proceedings of the 29th Annual Conference on Intelligent User Interfaces (IUI 2024 Companion), 2024: 18-23
  • [26] Shared Knowledge in Human-Robot Interaction (HRI). Miraglia, Laura; Di Dio, Cinzia; Manzi, Federico; Kanda, Takayuki; Cangelosi, Angelo; Itakura, Shoji; Ishiguro, Hiroshi; Massaro, Davide; Fonagy, Peter; Marchetti, Antonella. International Journal of Social Robotics, 2024, 16(01): 59-75
  • [27] Spatial knowledge representation for human-robot interaction. Moratz, R.; Tenbrink, T.; Bateman, J.; Fischer, K. Spatial Cognition III, 2003, 2685: 263-286
  • [29] A dialogue manager for multimodal human-robot interaction and learning of a humanoid robot. Holzapfel, Hartwig. Industrial Robot: The International Journal of Robotics Research and Application, 2008, 35(06): 528-535
  • [30] Human-Robot Interaction in Concept Acquisition: a computational model. de Greeff, Joachim; Delaunay, Frederic; Belpaeme, Tony. 2009 IEEE 8th International Conference on Development and Learning, 2009: 168-173