Perspective Taking in Deep Reinforcement Learning Agents

Cited by: 8
Authors
Labash, Aqeel [1 ]
Aru, Jaan [1 ,2 ]
Matiisen, Tambet [1 ]
Tampuu, Ardi [1 ]
Vicente, Raul [1 ]
Affiliations
[1] Univ Tartu, Inst Comp Sci, Computat Neurosci Lab, Tartu, Estonia
[2] Humboldt Univ, Inst Biol, Berlin, Germany
Keywords
deep reinforcement learning; theory of mind; perspective taking; multi-agent; artificial intelligence; individual differences; empathy; memory
DOI
10.3389/fncom.2020.00069
CLC Number
Q [Biological Sciences];
Subject Classification Codes
07; 0710; 09
Abstract
Perspective taking is the ability to take into account what another agent knows. This skill is not unique to humans, as it is also displayed by other animals, such as chimpanzees. It is an essential ability for social interactions, including efficient cooperation, competition, and communication. Here we present our progress toward building artificial agents with such abilities. We implemented a perspective taking task inspired by experiments done with chimpanzees. We show that agents controlled by artificial neural networks can learn via reinforcement learning to pass simple tests that require some aspects of perspective taking. We studied whether this ability is learned more readily by agents whose visual perception and motor actions are encoded in allocentric or in egocentric form. We believe that, in the long run, building artificial agents with perspective taking abilities can help us develop artificial intelligence that is more human-like and easier to communicate with.
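To illustrate the allocentric/egocentric distinction studied in the abstract, below is a minimal, hypothetical Python sketch (not the authors' code): the same grid-world state encoded in a world-fixed (allocentric) frame versus an agent-centred (egocentric) frame rotated to the agent's heading. The grid layout, function names, and heading convention are assumptions made purely for illustration.

    # Hypothetical illustration (not the authors' implementation): contrasting
    # allocentric and egocentric observation encodings for a simple grid world.
    # Assumptions: a square occupancy grid, the agent's position as (row, col),
    # and a discrete heading in {0: up, 1: right, 2: down, 3: left}.
    import numpy as np

    def allocentric_view(grid):
        """World-fixed frame: the observation is the grid as seen from above."""
        return grid.copy()

    def egocentric_view(grid, agent_pos, heading, radius=2):
        """Agent-centred frame: crop a window around the agent and rotate it so
        the agent's heading always points 'up' in the observation."""
        padded = np.pad(grid, radius, constant_values=0)  # pad so crops near edges stay in bounds
        r, c = agent_pos[0] + radius, agent_pos[1] + radius
        window = padded[r - radius:r + radius + 1, c - radius:c + radius + 1]
        # np.rot90 rotates counter-clockwise; k=heading aligns the facing direction with 'up'
        return np.rot90(window, k=heading)

    if __name__ == "__main__":
        world = np.zeros((5, 5), dtype=int)
        world[1, 3] = 1            # e.g. a food item, as in the chimpanzee-inspired task
        pos, heading = (2, 2), 1   # agent at the centre of the grid, facing right
        print(allocentric_view(world))               # identical for every agent pose
        print(egocentric_view(world, pos, heading))  # depends on the agent's pose

Under the egocentric encoding the same world state yields different observations for different agent poses, whereas the allocentric encoding is pose-invariant; the study asks which of these forms makes perspective taking easier to learn.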
Pages: 13
Related Papers
50 items in total
  • [21] A Procedural Constructive Learning Mechanism with Deep Reinforcement Learning for Cognitive Agents
    Rossi, Leonardo de Lellis
    Rohmer, Eric
    Costa, Paula Dornhofer Paro
    Colombini, Esther Luna
    Simoes, Alexandre da Silva
    Gudwin, Ricardo Ribeiro
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2024, 110 (01)
  • [22] Jointly Learning to Construct and Control Agents using Deep Reinforcement Learning
    Schaff, Charles
    Yunis, David
    Chakrabarti, Ayan
    Walter, Matthew R.
    2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2019, : 9798 - 9805
  • [23] Privacy preservation in deep reinforcement learning: A training perspective
    Shen, Sheng
    Ye, Dayong
    Zhu, Tianqing
    Zhou, Wanlei
    KNOWLEDGE-BASED SYSTEMS, 2024, 304
  • [24] A Distributional Perspective on Multiagent Cooperation With Deep Reinforcement Learning
    Huang, Liwei
    Fu, Mingsheng
    Rao, Ananya
    Irissappane, Athirai A.
    Zhang, Jie
    Xu, Chengzhong
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (03) : 4246 - 4259
  • [25] Deep reinforcement learning for process design: Review and perspective
    Gao, Qinghe
    Schweidtmann, Artur M.
    CURRENT OPINION IN CHEMICAL ENGINEERING, 2024, 44
  • [27] Spectral Normalisation for Deep Reinforcement Learning: An Optimisation Perspective
    Gogianu, Florin
    Berariu, Tudor
    Rosca, Mihaela
    Clopath, Claudia
    Busoniu, Lucian
    Pascanu, Razvan
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [28] A Deep Reinforcement Learning Perspective on Internet Congestion Control
    Jay, Nathan
    Rotman, Noga H.
    Godfrey, P. Brighten
    Schapira, Michael
    Tamar, Aviv
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [29] Recent Advances of Deep Robotic Affordance Learning: A Reinforcement Learning Perspective
    Yang, Xintong
    Ji, Ze
    Wu, Jing
    Lai, Yu-Kun
    IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2023, 15 (03) : 1139 - 1149
  • [30] Analysing deep reinforcement learning agents trained with domain randomisation
    Dai, Tianhong
    Arulkumaran, Kai
    Gerbert, Tamara
    Tukra, Samyakh
    Behbahani, Feryal
    Bharath, Anil Anthony
    NEUROCOMPUTING, 2022, 493 : 143 - 165