Distributional Reward Estimation for Effective Multi-Agent Deep Reinforcement Learning

Cited by: 0
Authors
Hu, Jifeng [1 ]
Sun, Yanchao [2 ]
Chen, Hechang [1 ]
Huang, Sili [1 ]
Piao, Haiyin [3 ]
Chang, Yi [1 ]
Sun, Lichao [4 ]
Affiliations
[1] Jilin Univ, Sch Artificial Intelligence, Changchun, Peoples R China
[2] Univ Maryland, Dept Comp Sci, College Pk, MD 20742 USA
[3] Northwestern Polytech Univ, Xian, Peoples R China
[4] Lehigh Univ, Bethlehem, PA USA
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China
Keywords
DOI
Not available
CLC Number
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Multi-agent reinforcement learning has drawn increasing attention in practice, e.g., robotics and autonomous driving, as it can explore optimal policies using samples generated by interacting with the environment. However, high reward uncertainty remains a problem when training a satisfactory model, because obtaining high-quality reward feedback is usually expensive and even infeasible. To handle this issue, previous methods mainly focus on passive reward correction, while more recent active reward estimation methods have proven to be a recipe for reducing the effect of reward uncertainty. In this paper, we propose a novel Distributional Reward Estimation framework for effective Multi-Agent Reinforcement Learning (DRE-MARL). Our main idea is to design multi-action-branch reward estimation and policy-weighted reward aggregation for stabilized training. Specifically, the multi-action-branch reward estimation models reward distributions on all action branches, and reward aggregation then yields stable updating signals during training. Our intuition is that considering all possible consequences of an action is useful for learning policies. The superiority of DRE-MARL is demonstrated on benchmark multi-agent scenarios, where it outperforms SOTA baselines in terms of both effectiveness and robustness.
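To make the two components named in the abstract concrete, below is a minimal PyTorch sketch of how they could fit together, assuming discrete actions, a Gaussian reward distribution per action branch, and a simple MLP estimator. All names (RewardEstimator, aggregate_reward, hidden_dim) and architectural details are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Sketch of the two ideas from the abstract, under assumed details:
# discrete actions, a Gaussian reward distribution per action branch,
# and a shared MLP trunk. Not the authors' code.

class RewardEstimator(nn.Module):
    """Multi-action-branch reward estimation: predict a reward
    distribution (here, Gaussian mean and log-std) for every action
    branch, so all possible consequences of actions are modeled."""

    def __init__(self, obs_dim: int, n_actions: int, hidden_dim: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim),
            nn.ReLU(),
        )
        self.mean_head = nn.Linear(hidden_dim, n_actions)     # one mean per branch
        self.log_std_head = nn.Linear(hidden_dim, n_actions)  # one log-std per branch

    def forward(self, obs: torch.Tensor):
        h = self.trunk(obs)
        return self.mean_head(h), self.log_std_head(h)  # each: (batch, n_actions)


def aggregate_reward(reward_means: torch.Tensor,
                     policy_probs: torch.Tensor) -> torch.Tensor:
    """Policy-weighted reward aggregation: the expected estimated reward
    under the current policy, r_agg(s) = sum_a pi(a|s) * r_hat(s, a),
    used as a stabilized update signal."""
    return (policy_probs * reward_means).sum(dim=-1)  # (batch,)


# Usage with stand-in data: one agent, a batch of 32 observations.
obs = torch.randn(32, 10)
estimator = RewardEstimator(obs_dim=10, n_actions=5)
means, log_stds = estimator(obs)
probs = torch.softmax(torch.randn(32, 5), dim=-1)  # stand-in policy output
r_agg = aggregate_reward(means, probs)             # shape: (32,)
```

In the paper's framing, the estimator would presumably be trained against observed rewards (e.g., via a Gaussian negative log-likelihood on the sampled action branch), and r_agg would stand in for the raw, uncertain reward in the policy update; both details are elided in this sketch.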
Pages: 14
Related Papers
50 records in total
  • [1] Cooperative Multi-Agent Deep Reinforcement Learning with Counterfactual Reward
    Shao, Kun
    Zhu, Yuanheng
    Tang, Zhentao
    Zhao, Dongbin
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020
  • [2] Multi-Agent Reinforcement Learning with Reward Delays
    Zhang, Yuyang
    Zhang, Runyu
    Gu, Yuantao
    Li, Na
    LEARNING FOR DYNAMICS AND CONTROL CONFERENCE, VOL 211, 2023
  • [3] A fully value distributional deep reinforcement learning framework for multi-agent cooperation
    Fu, Mingsheng
    Huang, Liwei
    Li, Fan
    Qu, Hong
    Xu, Chengzhong
    NEURAL NETWORKS, 2025, 184
  • [4] Robust multi-agent reinforcement learning via Bayesian distributional value estimation
    Du, Xinqi
    Chen, Hechang
    Wang, Che
    Xing, Yongheng
    Yang, Jielong
    Yu, Philip S.
    Chang, Yi
    He, Lifang
    PATTERN RECOGNITION, 2024, 145
  • [5] Direct reward and indirect reward in multi-agent reinforcement learning
    Ohta, M
    ROBOCUP 2002: ROBOT SOCCER WORLD CUP VI, 2003, 2752: 359-366
  • [6] Multi-Agent Deep Reinforcement Learning With Progressive Negative Reward for Cryptocurrency Trading
    Kumlungmak, Kittiwin
    Vateekul, Peerapon
    IEEE ACCESS, 2023, 11: 66440-66455
  • [7] Rationality of reward sharing in multi-agent reinforcement learning
    Miyazaki, Kazuteru
    Kobayashi, Shigenobu
    NEW GENERATION COMPUTING, 2001, 19 (02): 157-172
  • [8] Individual Reward Assisted Multi-Agent Reinforcement Learning
    Wang, Li
    Zhang, Yupeng
    Hu, Yujing
    Wang, Weixun
    Zhang, Chongjie
    Gao, Yang
    Hao, Jianye
    Lv, Tangjie
    Fan, Changjie
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022