Shapley Counterfactual Credits for Multi-Agent Reinforcement Learning

Cited by: 13
Authors
Li, Jiahui [1]
Kuang, Kun [1]
Wang, Baoxiang [2,3]
Liu, Furui [4]
Chen, Long [1,5]
Wu, Fei [1]
Xiao, Jun [1]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci, DCD Lab, Hangzhou, Peoples R China
[2] Chinese Univ Hong Kong, Shenzhen, Peoples R China
[3] Shenzhen Inst Artificial Intelligence & Robot Soc, Shenzhen, Peoples R China
[4] Huawei Noah's Ark Lab, Shanghai, Peoples R China
[5] Zhejiang Univ, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China; Natural Science Foundation of Zhejiang Province;
Keywords
Shapley Value; Counterfactual Thinking; Multi-Agent Systems; Reinforcement Learning; Credit Assignment; Coordination;
DOI
10.1145/3447548.3467420
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Centralized Training with Decentralized Execution (CTDE) has become a popular paradigm in cooperative Multi-Agent Reinforcement Learning (MARL) and is widely used in real applications. One of the major challenges in training is credit assignment: deducing each agent's contribution from the global reward. Existing credit assignment methods either decompose the joint value function into individual value functions or measure the impact of local observations and actions on the global value function. These approaches do not thoroughly account for the complicated interactions among agents, which leads to unsuitable credit assignment and, in turn, mediocre MARL results. We propose Shapley Counterfactual Credit Assignment, a novel method for explicit credit assignment that accounts for coalitions of agents. Specifically, we leverage the Shapley Value and its desirable properties in deep MARL to credit any combination of agents, which lets us estimate an individual credit for each agent. The main technical difficulty is the computational complexity of the Shapley Value, which grows factorially with the number of agents. We therefore approximate it via Monte Carlo sampling, which reduces the sample complexity while maintaining effectiveness. We evaluate our method on StarCraft II benchmarks across different scenarios. It significantly outperforms existing cooperative MARL algorithms and achieves state-of-the-art results, with especially large margins on the more difficult tasks.
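The abstract's central computational point can be illustrated with a minimal, self-contained sketch (not the authors' implementation): the exact Shapley value averages each agent's marginal contribution over all n! agent orderings, which is what makes the cost grow factorially with the number of agents, while a Monte Carlo estimator instead samples a fixed number of random orderings. The `coalition_value` function below is a hypothetical toy stand-in for the learned coalition critic a MARL method would use.

```python
# Illustrative sketch: exact vs. Monte Carlo Shapley values.
# `coalition_value` maps a frozenset of agents to a scalar team value;
# here it is a toy stand-in, not the paper's centralized critic.
import itertools
import math
import random


def exact_shapley(agents, coalition_value):
    """Exact Shapley values: average each agent's marginal contribution
    over all |N|! orderings, so the cost grows factorially with |N|."""
    credits = {a: 0.0 for a in agents}
    for order in itertools.permutations(agents):
        coalition = set()
        for agent in order:
            before = coalition_value(frozenset(coalition))
            coalition.add(agent)
            credits[agent] += coalition_value(frozenset(coalition)) - before
    return {a: c / math.factorial(len(agents)) for a, c in credits.items()}


def monte_carlo_shapley(agents, coalition_value, num_samples=1000, seed=0):
    """Unbiased Shapley estimate from sampled orderings: the number of
    evaluations is num_samples * |N|, independent of |N|!."""
    rng = random.Random(seed)
    credits = {a: 0.0 for a in agents}
    for _ in range(num_samples):
        order = list(agents)
        rng.shuffle(order)
        coalition = set()
        for agent in order:
            before = coalition_value(frozenset(coalition))
            coalition.add(agent)
            credits[agent] += coalition_value(frozenset(coalition)) - before
    return {a: c / num_samples for a, c in credits.items()}


if __name__ == "__main__":
    # Toy superadditive game: a coalition of k agents is worth k**2.
    agents = ["a1", "a2", "a3", "a4"]
    value = lambda coalition: len(coalition) ** 2
    print(exact_shapley(agents, value))        # each agent: 4.0
    print(monte_carlo_shapley(agents, value))  # close to 4.0 per agent
```

In this symmetric toy game every agent's true Shapley value is 4.0, and the Monte Carlo estimate converges to it as num_samples grows, without ever enumerating all orderings.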
Pages: 934-942
Number of pages: 9