Modelling Shared Attention Through Relational Reinforcement Learning

Cited by: 6
Authors
da Silva, Renato Ramos [1 ]
Francelin Romero, Roseli Aparecida [1 ]
Affiliations
[1] Univ Sao Paulo, Inst Ciencias Matemat & Comp, BR-13560970 Sao Carlos, SP, Brazil
Funding
São Paulo Research Foundation (FAPESP), Brazil;
Keywords
Shared attention; Relational reinforcement learning; Social robotics;
DOI
10.1007/s10846-011-9624-y
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Shared attention is a very important form of communication among human beings. The term is sometimes reserved for the more complex form of communication constituted by a sequence of four steps: mutual gaze, gaze following, imperative pointing, and declarative pointing. Several approaches have been proposed in the Human-Robot Interaction area to address part of the shared attention process; most of them tackle only the first two steps. Models based on temporal difference learning, neural networks, probabilistic methods, and reinforcement learning have been used in several works. In this article, we present a robotic architecture that provides a robot or agent with the capacity to learn mutual gaze, gaze following, and declarative pointing, using a robotic head that interacts with a caregiver. Three learning methods have been incorporated into this architecture, and their performance has been compared to find the most adequate one for use in real experiments. The learning capabilities of this architecture were analyzed by observing the robot interacting with a human in a controlled environment. The experimental results show that the robotic head is able to produce appropriate behavior and to learn from sociable interaction.
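As a rough illustration of how reinforcement learning over a relational state description can drive the shared-attention behaviours named in the abstract, the Python sketch below trains a tabular Q-learner whose states are sets of ground relational facts (e.g. the caregiver gazing at the robot) and whose actions correspond to mutual gaze, gaze following, and declarative pointing. All predicate names, action names, probabilities, and the reward scheme are assumptions made for illustration only; the sketch does not reproduce the paper's actual architecture or its relational generalization mechanism.

# Minimal, illustrative sketch of Q-learning over relational state descriptions
# for the shared-attention behaviours discussed in the abstract.
# All names and the reward scheme are hypothetical, not taken from the paper.
import random
from collections import defaultdict

ACTIONS = ["look_at_caregiver", "follow_gaze", "point_at_object", "do_nothing"]

def observe_state():
    """Return a relational description of the scene as a frozenset of ground facts."""
    facts = set()
    if random.random() < 0.7:
        facts.add(("gazing_at", "caregiver", "robot"))   # mutual-gaze opportunity
    if random.random() < 0.5:
        facts.add(("gazing_at", "caregiver", "object1"))  # gaze-following opportunity
    if random.random() < 0.3:
        facts.add(("salient", "object1"))                 # declarative-pointing opportunity
    return frozenset(facts)

def caregiver_feedback(state, action):
    """Hypothetical social reward: +1 for a socially appropriate behaviour, -1 otherwise."""
    if ("gazing_at", "caregiver", "robot") in state and action == "look_at_caregiver":
        return 1.0
    if ("gazing_at", "caregiver", "object1") in state and action == "follow_gaze":
        return 1.0
    if ("salient", "object1") in state and action == "point_at_object":
        return 1.0
    return -1.0

def train(episodes=5000, alpha=0.1, gamma=0.9, epsilon=0.1):
    Q = defaultdict(float)               # Q-values indexed by (relational state, action)
    for _ in range(episodes):
        state = observe_state()
        if random.random() < epsilon:    # epsilon-greedy exploration
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        reward = caregiver_feedback(state, action)
        next_state = observe_state()
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    return Q

if __name__ == "__main__":
    Q = train()
    demo_state = frozenset({("gazing_at", "caregiver", "robot")})
    # After training, the learner should prefer mutual gaze in this state.
    print(max(ACTIONS, key=lambda a: Q[(demo_state, a)]))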
Pages: 167-182
Number of pages: 16
Related Papers
50 records in total
  • [1] Modelling Shared Attention Through Relational Reinforcement Learning
    Renato Ramos da Silva
    Roseli Aparecida Francelin Romero
    Journal of Intelligent & Robotic Systems, 2012, 66 : 167 - 182
  • [2] Relational Reinforcement Learning applied to Shared Attention
    da Silva, Renato R.
    Policastro, Claudio A.
    Romero, Roseli A. F.
    IJCNN: 2009 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-6, 2009, : 1074 - 1080
  • [3] Guiding inference through relational reinforcement learning
    Asgharbeygi, N
    Nejati, N
    Langley, P
    Arai, S
    INDUCTIVE LOGIC PROGRAMMING, PROCEEDINGS, 2005, 3625 : 20 - 37
  • [4] Reinforcement Learning With Multiple Relational Attention for Solving Vehicle Routing Problems
    Xu, Yunqiu
    Fang, Meng
    Chen, Ling
    Xu, Gangyan
    Du, Yali
    Zhang, Chengqi
    IEEE TRANSACTIONS ON CYBERNETICS, 2022, 52 (10) : 11107 - 11120
  • [5] Relational reinforcement learning
    Driessens, K
    AI COMMUNICATIONS, 2005, 18 (01) : 71 - 73
  • [6] Relational reinforcement learning
    Driessens, K
    MULTI-AGENT SYSTEMS AND APPLICATIONS, 2001, 2086 : 271 - 280
  • [7] Relational reinforcement learning
    Dzeroski, S
    De Raedt, L
    Driessens, K
    MACHINE LEARNING, 2001, 43 (1-2) : 7 - 52
  • [8] Relational Reinforcement Learning
    Sašo Džeroski
    Luc De Raedt
    Kurt Driessens
    Machine Learning, 2001, 43 : 7 - 52
  • [9] Learning relational options for inductive transfer in relational reinforcement learning
    Croonenborghs, Tom
    Driessens, Kurt
    Bruynooghe, Maurice
    INDUCTIVE LOGIC PROGRAMMING, 2008, 4894 : 88 - 97
  • [10] An Enhancement of Relational Reinforcement Learning
    da Silva, Renato R.
    Policastro, Claudio A.
    Romero, Roseli A. F.
    2008 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-8, 2008, : 2055 - 2060