Action learning and grounding in simulated human-robot interactions

Citations: 6
Authors:
Roesler, Oliver [1 ]
Nowe, Ann [1 ]
Affiliations:
[1] Vrije Univ Brussel, Artificial Intelligence Lab, Pl Laan 9, B-1050 Brussels, Belgium
Keywords:
MANIPULATION;
DOI:
10.1017/S0269888919000079
Chinese Library Classification: TP18 [Artificial intelligence theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract:
In order to enable robots to interact with humans in a natural way, they need to be able to autonomously learn new tasks. The most natural way for humans to tell another agent, whether human or robot, to perform a task is via natural language. Thus, natural human-robot interaction also requires robots to understand natural language, i.e., to extract the meaning of words and phrases. To do this, words and phrases need to be linked to their corresponding percepts through grounding. Afterward, agents can learn the optimal micro-action patterns to reach the goal states of the desired tasks. Most previous studies investigated only the learning of actions or the grounding of words, but not both. Additionally, they often used only a small set of tasks as well as very short, unnaturally simplified utterances. In this paper, we introduce a framework that uses reinforcement learning to learn actions for several tasks and cross-situational learning to ground actions, object shapes and colors, and prepositions. The proposed framework is evaluated through a simulated interaction experiment between a human tutor and a robot. The results show that the employed framework can be used for both action learning and grounding.
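The cross-situational learning idea referred to in the abstract can be illustrated with a minimal co-occurrence-counting sketch. This is not the paper's actual model; the words, percept labels, and situations below are hypothetical, and the sketch assumes the simplest variant of the technique: a word is grounded to the percept it co-occurs with most often across situations.

```python
from collections import defaultdict

# Each situation pairs an utterance (word list) with the set of
# percepts present in the scene. Hypothetical example data.
situations = [
    (["pick", "up", "red", "cube"], {"red", "cube", "pick-action"}),
    (["push", "red", "ball"], {"red", "ball", "push-action"}),
    (["pick", "up", "blue", "ball"], {"blue", "ball", "pick-action"}),
    (["push", "blue", "cube"], {"blue", "cube", "push-action"}),
]

# Count how often each word co-occurs with each percept.
counts = defaultdict(lambda: defaultdict(int))
for words, percepts in situations:
    for w in words:
        for p in percepts:
            counts[w][p] += 1

def grounding(word):
    """Return the percept most strongly associated with `word`."""
    assoc = counts[word]
    return max(assoc, key=assoc.get)

print(grounding("red"))   # -> red  (co-occurs twice; distractors once)
print(grounding("pick"))  # -> pick-action
```

Ambiguity within any single situation (is "red" the color, the shape, or the action?) is resolved only across situations, which is the core of the cross-situational approach; richer variants replace raw counts with probabilistic association scores.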
Pages: 14