Knowledge acquisition through human-robot multimodal interaction

Cited by: 23
Authors:
Randelli, Gabriele [1 ]
Bonanni, Taigo Maria [1 ]
Iocchi, Luca [1 ]
Nardi, Daniele [1 ]
Affiliation:
[1] Univ Roma La Sapienza, Dept Comp Control & Management Engn, I-00185 Rome, Italy
Keywords:
Human-robot interaction; Knowledge representation; Symbol grounding; Recognition
DOI: 10.1007/s11370-012-0123-1
CLC classification: TP24 [Robotics]
Subject classification codes: 080202; 1405
Abstract
The limited understanding of the surrounding environment still restricts the capabilities of robotic systems in real-world applications. In particular, the acquisition of knowledge about the environment typically relies on perception alone, which requires intensive ad hoc training and is not sufficiently reliable in general settings. In this paper, we integrate new acquisition devices, such as tangible user interfaces, speech technologies, and vision-based systems, with established AI methodologies to present a novel and effective knowledge acquisition approach. We propose a natural interaction paradigm in which humans move through the environment with the robot and easily convey information by selecting relevant spots, objects, or other landmarks. The synergy between novel interaction technologies and semantic knowledge leverages human cognitive skills to support robots in acquiring and grounding knowledge about the environment; this richer representation can then be exploited to realize autonomous robot skills for task accomplishment.
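The paradigm sketched in the abstract pairs a human-provided symbol (e.g., a spoken label) with a location the human selects while touring the environment with the robot. A minimal illustrative sketch of such a grounding store is shown below; all class and function names are hypothetical and do not come from the paper, which does not describe its implementation at this level.

```python
from dataclasses import dataclass

@dataclass
class GroundedSymbol:
    """A semantic label anchored to a metric position in the robot's map."""
    label: str                     # symbol from speech, e.g. "kitchen"
    position: tuple[float, float]  # (x, y) from pointing device / robot pose

class SemanticMap:
    """Knowledge base linking human-provided symbols to perceived locations."""

    def __init__(self) -> None:
        self._symbols: dict[str, GroundedSymbol] = {}

    def ground(self, label: str, position: tuple[float, float]) -> None:
        # The human selects a spot (tangible interface) and names it
        # (speech); the pairing is stored as a grounded symbol.
        self._symbols[label] = GroundedSymbol(label, position)

    def resolve(self, label: str):
        # A later task such as "go to the kitchen" resolves the symbol
        # back to map coordinates; None means the symbol was never grounded.
        sym = self._symbols.get(label)
        return sym.position if sym else None

# Usage: tagging two landmarks during a guided tour, then resolving one.
smap = SemanticMap()
smap.ground("kitchen", (3.2, 1.5))
smap.ground("charging_station", (0.0, 0.0))
print(smap.resolve("kitchen"))  # -> (3.2, 1.5)
```

The design choice this illustrates is the one the abstract argues for: the human, not the perception pipeline, supplies the symbol-to-percept binding, so the robot can reuse the grounded symbols for autonomous task execution without ad hoc perceptual training.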
Pages: 19-31
Page count: 13
相关论文
共 50 条
  • [1] Knowledge acquisition through human–robot multimodal interaction
    Gabriele Randelli
    Taigo Maria Bonanni
    Luca Iocchi
    Daniele Nardi
    [J]. Intelligent Service Robotics, 2013, 6 : 19 - 31
  • [2] Knowledge acquisition through introspection in Human-Robot Cooperation
    Chella, Antonio
    Lanza, Francesco
    Pipitone, Arianna
    Seidita, Valeria
    [J]. BIOLOGICALLY INSPIRED COGNITIVE ARCHITECTURES, 2018, 25 : 1 - 7
  • [3] Multimodal Interaction for Human-Robot Teams
    Burke, Dustin
    Schurr, Nathan
    Ayers, Jeanine
    Rousseau, Jeff
    Fertitta, John
    Carlin, Alan
    Dumond, Danielle
    [J]. UNMANNED SYSTEMS TECHNOLOGY XV, 2013, 8741
  • [4] Stepwise Acquisition of Dialogue Act Through Human-Robot Interaction
    Matsushima, Akane
    Kanajiri, Ryosuke
    Hattori, Yusuke
    Fukada, Chie
    Oka, Natsuki
    [J]. 2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [5] Development and Testing of a Multimodal Acquisition Platform for Human-Robot Interaction Affective Studies
    Lazzeri, Nicole
    Mazzei, Daniele
    De Rossi, Danilo
    [J]. JOURNAL OF HUMAN-ROBOT INTERACTION, 2014, 3 (02): : 1 - 24
  • [6] Dual Track Multimodal Automatic Learning through Human-Robot Interaction
    Jiang, Shuqiang
    Min, Weiqing
    Li, Xue
    Wang, Huayang
    Sun, Jian
    Zhou, Jiaqi
    [J]. PROCEEDINGS OF THE TWENTY-SIXTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 4485 - 4491
  • [7] Recent advancements in multimodal human-robot interaction
    Su, Hang
    Qi, Wen
    Chen, Jiahao
    Yang, Chenguang
    Sandoval, Juan
    Laribi, Med Amine
    [J]. FRONTIERS IN NEUROROBOTICS, 2023, 17
  • [8] A Dialogue System for Multimodal Human-Robot Interaction
    Lucignano, Lorenzo
    Cutugno, Francesco
    Rossi, Silvia
    Finzi, Alberto
    [J]. ICMI'13: PROCEEDINGS OF THE 2013 ACM INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2013, : 197 - 204
  • [9] Multimodal Information Fusion for Human-Robot Interaction
    Luo, Ren C.
    Wu, Y. C.
    Lin, P. H.
    [J]. 2015 IEEE 10TH JUBILEE INTERNATIONAL SYMPOSIUM ON APPLIED COMPUTATIONAL INTELLIGENCE AND INFORMATICS (SACI), 2015, : 535 - 540
  • [10] Affective Human-Robot Interaction with Multimodal Explanations
    Zhu, Hongbo
    Yu, Chuang
    Cangelosi, Angelo
    [J]. SOCIAL ROBOTICS, ICSR 2022, PT I, 2022, 13817 : 241 - 252