Multi-Agent Common Knowledge Reinforcement Learning

Cited by: 0
|
Authors
de Witt, Christian A. Schroeder [1 ]
Foerster, Jakob N. [1 ]
Farquhar, Gregory [1 ]
Torr, Philip H. S. [1 ]
Boehmer, Wendelin [1 ]
Whiteson, Shimon [1 ]
Affiliations
[1] Univ Oxford, Oxford, England
Funding
UK Engineering and Physical Sciences Research Council; European Research Council; US National Institutes of Health
Keywords
COORDINATION
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Cooperative multi-agent reinforcement learning often requires decentralised policies, which severely limit the agents' ability to coordinate their behaviour. In this paper, we show that common knowledge between agents allows for complex decentralised coordination. Common knowledge arises naturally in a large number of decentralised cooperative multi-agent tasks, for example, when agents can reconstruct parts of each other's observations. Since agents can independently agree on their common knowledge, they can execute complex coordinated policies that condition on this knowledge in a fully decentralised fashion. We propose multi-agent common knowledge reinforcement learning (MACKRL), a novel stochastic actor-critic algorithm that learns a hierarchical policy tree. Higher levels in the hierarchy coordinate groups of agents by conditioning on their common knowledge, or delegate to lower levels with smaller subgroups but potentially richer common knowledge. The entire policy tree can be executed in a fully decentralised fashion. As the lowest policy tree level consists of independent policies for each agent, MACKRL reduces to independently learnt decentralised policies as a special case. We demonstrate that our method can exploit common knowledge for superior performance on complex decentralised coordination tasks, including a stochastic matrix game and challenging problems in StarCraft II unit micromanagement.
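The hierarchical delegation described in the abstract can be sketched in a few lines for a pair of agents. This is a minimal illustrative sketch only: the paper learns these policies with a stochastic actor-critic method, whereas the controller logic, observation values, and action names below are invented assumptions for clarity.

```python
# Hypothetical two-level MACKRL-style decision procedure for two agents.
# The pair controller conditions ONLY on common knowledge; it either selects
# a coordinated joint action or delegates to independent per-agent policies.

def pair_controller(common_knowledge):
    """Top level: conditions only on knowledge shared by both agents."""
    if common_knowledge is None:
        # Common knowledge is uninformative: delegate downwards.
        return "delegate"
    # Otherwise commit to a coordinated joint action.
    if common_knowledge == "enemy_visible":
        return ("attack", "attack")
    return ("move", "move")

def independent_policy(private_obs):
    """Lowest level: a fully decentralised per-agent policy."""
    return "attack" if private_obs == "enemy_near" else "move"

def mackrl_act(common_knowledge, private_obs_a, private_obs_b):
    choice = pair_controller(common_knowledge)
    if choice == "delegate":
        # After delegation, each agent acts on its own private observation.
        return (independent_policy(private_obs_a),
                independent_policy(private_obs_b))
    return choice
```

Because every decision above the lowest level conditions only on common knowledge, both agents can run `mackrl_act` independently and are guaranteed to traverse the same branch of the policy tree, producing a coordinated joint action without any communication; when the controller always delegates, the scheme reduces to independent decentralised policies, mirroring the special case noted in the abstract.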
Pages: 13
Related Papers
50 records in total
  • [1] Multi-Agent Reinforcement Learning
    Stankovic, Milos
    [J]. 2016 13TH SYMPOSIUM ON NEURAL NETWORKS AND APPLICATIONS (NEUREL), 2016, : 43 - 43
  • [2] Knowledge Reuse of Multi-Agent Reinforcement Learning in Cooperative Tasks
    Shi, Daming
    Tong, Junbo
    Liu, Yi
    Fan, Wenhui
    [J]. ENTROPY, 2022, 24 (04)
  • [3] A configuration of multi-agent reinforcement learning integrating prior knowledge
    Tang, Hainan
    Tang, Hongjie
    Liu, Juntao
    Rao, Ziyun
    Zhang, Yunshu
    Luo, Xunhao
    [J]. 2024 2ND ASIA CONFERENCE ON COMPUTER VISION, IMAGE PROCESSING AND PATTERN RECOGNITION, CVIPPR 2024, 2024,
  • [4] Improving Multi-agent Reinforcement Learning with Imperfect Human Knowledge
    Han, Xiaoxu
    Tang, Hongyao
    Li, Yuan
    Kou, Guang
    Liu, Leilei
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2020, PT II, 2020, 12397 : 369 - 380
  • [5] KnowRU: Knowledge Reuse via Knowledge Distillation in Multi-Agent Reinforcement Learning
    Gao, Zijian
    Xu, Kele
    Ding, Bo
    Wang, Huaimin
    [J]. ENTROPY, 2021, 23 (08)
  • [6] Deep Multi-Task Multi-Agent Reinforcement Learning With Knowledge Transfer
    Mai Y.
    Zang Y.
    Yin Q.
    Ni W.
    Huang K.
    [J]. IEEE Transactions on Games, 2024, 16 (03) : 1 - 11
  • [7] Revisiting Some Common Practices in Cooperative Multi-Agent Reinforcement Learning
    Fu, Wei
    Yu, Chao
    Xu, Zelai
    Yang, Jiaqi
    Wu, Yi
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [8] Multi-Agent Reinforcement Learning with Common Policy for Antenna Tilt Optimization
    Mendo, Adriano
    Outes-Carnero, Jose
    Ng-Molina, Yak
    Ramiro-Moreno, Juan
    [J]. IAENG International Journal of Computer Science, 2023, 50 (03)
  • [9] Cooperative Multi-Agent Reinforcement Learning with Conversation Knowledge for Dialogue Management
    Lei, Shuyu
    Wang, Xiaojie
    Yuan, Caixia
    [J]. APPLIED SCIENCES-BASEL, 2020, 10 (08):
  • [10] Knowledge distillation for portfolio management using multi-agent reinforcement learning
    Chen, Min-You
    Chen, Chiao-Ting
    Huang, Szu-Hao
    [J]. ADVANCED ENGINEERING INFORMATICS, 2023, 57