Emergent cooperation from mutual acknowledgment exchange in multi-agent reinforcement learning

Cited by: 1
Authors
Phan, Thomy [1 ,2 ]
Sommer, Felix [2 ]
Ritz, Fabian [2 ]
Altmann, Philipp [2 ]
Nuesslein, Jonas [2 ]
Koelle, Michael [2 ]
Belzner, Lenz [3 ]
Linnhoff-Popien, Claudia [2 ]
Affiliations
[1] Univ Southern Calif, Los Angeles, CA 90007 USA
[2] Ludwig Maximilians Univ Munchen, Munich, Germany
[3] TH Ingolstadt, Ingolstadt, Germany
Keywords
Multi-agent learning; Reinforcement learning; Mutual acknowledgments; Peer incentivization; Emergent cooperation; EVOLUTION; LEVEL;
DOI
10.1007/s10458-024-09666-5
Chinese Library Classification
TP [automation and computer technology]
Discipline code
0812
Abstract
Peer incentivization (PI) is a recent approach where all agents learn to reward or penalize each other in a distributed fashion, which often leads to emergent cooperation. Current PI mechanisms implicitly assume a flawless communication channel in order to exchange rewards. These rewards are directly incorporated into the learning process without any chance to respond with feedback. Furthermore, most PI approaches rely on global information, which limits scalability and applicability to real-world scenarios where only local information is accessible. In this paper, we propose Mutual Acknowledgment Token Exchange (MATE), a PI approach defined by a two-phase communication protocol to exchange acknowledgment tokens as incentives to shape individual rewards mutually. All agents condition their token transmissions on the locally estimated quality of their own situations based on environmental rewards and received tokens. MATE is completely decentralized and only requires local communication and information. We evaluate MATE in three social dilemma domains. Our results show that MATE is able to achieve and maintain significantly higher levels of cooperation than previous PI approaches. In addition, we evaluate the robustness of MATE in more realistic scenarios, where agents can deviate from the protocol and communication failures can occur. We also evaluate the sensitivity of MATE w.r.t. the choice of token values.
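The two-phase protocol described in the abstract can be sketched in code. The following is an illustrative reconstruction, not the authors' implementation: the class name `MateAgent`, the previous-reward baseline used as the local quality estimate, and the fixed token value are all assumptions made here for concreteness.

```python
class MateAgent:
    """Sketch of a MATE-style agent (illustrative, not the paper's code).

    Phase 1 (request): if the agent judges its own situation as acceptable,
    it sends a fixed-value acknowledgment token to its neighbors.
    Phase 2 (response): a neighbor that also judges its situation acceptable,
    given its reward plus the incoming token, acknowledges with a token back.
    The shaped reward is the environmental reward plus all received tokens,
    so the mechanism needs only local information and communication.
    """

    def __init__(self, token_value=1.0):
        self.token_value = token_value
        self.prev_reward = 0.0  # simple local baseline (an assumption)

    def situation_ok(self, reward, received_tokens):
        # Local quality estimate: current shaped payoff vs. previous reward.
        return reward + received_tokens >= self.prev_reward

    def phase1_request(self, reward):
        # Send a token only when the own situation did not get worse.
        return self.token_value if self.situation_ok(reward, 0.0) else 0.0

    def phase2_respond(self, reward, incoming_token):
        # Acknowledge a request only if the shaped situation is acceptable.
        if incoming_token > 0 and self.situation_ok(reward, incoming_token):
            return self.token_value
        return 0.0

    def shaped_reward(self, reward, tokens_received):
        # Incorporate received tokens and update the local baseline.
        self.prev_reward = reward
        return reward + tokens_received
```

In a step of the protocol, agent A requests, agent B responds, and A's learning update then uses `shaped_reward` in place of the raw environmental reward; no global information is ever exchanged.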
Pages: 36
Related papers
50 records
  • [31] Multi-agent communication cooperation based on deep reinforcement learning and information theory
    Gao, Bing
    Zhang, Zhejie
    Zou, Qijie
    Liu, Zhiguo
    Zhao, Xiling
    Hangkong Xuebao/Acta Aeronautica et Astronautica Sinica, 2024, 45 (18):
  • [32] Enhancing cooperation by cognition differences and consistent representation in multi-agent reinforcement learning
    Ge, Hongwei
    Ge, Zhixin
    Sun, Liang
    Wang, Yuxin
    APPLIED INTELLIGENCE, 2022, 52 (09) : 9701 - 9716
  • [34] Multi-Agent Reinforcement Learning With Distributed Targeted Multi-Agent Communication
    Xu, Chi
    Zhang, Hui
    Zhang, Ya
    2023 35TH CHINESE CONTROL AND DECISION CONFERENCE, CCDC, 2023, : 2915 - 2920
  • [35] Multi-Agent Uncertainty Sharing for Cooperative Multi-Agent Reinforcement Learning
    Chen, Hao
    Yang, Guangkai
    Zhang, Junge
    Yin, Qiyue
    Huang, Kaiqi
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [36] Hierarchical multi-agent reinforcement learning
    Ghavamzadeh, Mohammad
    Mahadevan, Sridhar
    Makar, Rajbala
    AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS, 2006, 13 (02) : 197 - 229
  • [37] Learning to Share in Multi-Agent Reinforcement Learning
    Yi, Yuxuan
    Li, Ge
    Wang, Yaowei
    Lu, Zongqing
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [38] Multi-Agent Reinforcement Learning for Microgrids
    Dimeas, A. L.
    Hatziargyriou, N. D.
    IEEE POWER AND ENERGY SOCIETY GENERAL MEETING 2010, 2010,
  • [39] Multi-agent Exploration with Reinforcement Learning
    Sygkounas, Alkis
    Tsipianitis, Dimitris
    Nikolakopoulos, George
    Bechlioulis, Charalampos P.
    2022 30TH MEDITERRANEAN CONFERENCE ON CONTROL AND AUTOMATION (MED), 2022, : 630 - 635