Optimistic sequential multi-agent reinforcement learning with motivational communication

Cited: 0
Authors
Huang, Anqi [1 ]
Wang, Yongli [1 ]
Zhou, Xiaoliang [1 ]
Zou, Haochen [1 ]
Dong, Xu [1 ]
Che, Xun [1 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Multi-agent reinforcement learning; Policy gradient; Motivational communication; Reinforcement learning; Multi-agent system;
DOI
10.1016/j.neunet.2024.106547
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Centralized Training with Decentralized Execution (CTDE) is a prevalent paradigm in fully cooperative Multi-Agent Reinforcement Learning (MARL). Existing algorithms often encounter two major problems: independent strategies tend to underestimate the potential value of actions, leading to convergence to sub-optimal Nash Equilibria (NE); and some communication paradigms add complexity to the learning process, making it harder to focus on the essential elements of the messages. To address these challenges, we propose a novel method called Optimistic Sequential Soft Actor-Critic with Motivational Communication (OSSMC). The key idea of OSSMC is to use a greedy-driven approach to explore the potential value of individual policies, yielding optimistic Q-values that serve as an upper bound on the Q-value of the current policy. We then integrate a sequential update mechanism with optimistic Q-values for agents, aiming to ensure monotonic improvement in the joint policy optimization process. Moreover, we equip each agent with a motivational communication module that disseminates motivational messages to promote cooperative behaviors. Finally, we employ a value regularization strategy from the Soft Actor-Critic (SAC) method to maximize entropy and improve exploration. The performance of OSSMC was rigorously evaluated on a series of challenging benchmarks. Empirical results demonstrate that OSSMC not only surpasses current baseline algorithms but also converges more rapidly.
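Two of the abstract's ingredients, optimistic Q-values as an upper bound on the current estimate and SAC-style entropy regularization, can be illustrated with a minimal numerical sketch. The function names and the elementwise-max formulation below are illustrative assumptions for a discrete-action toy case, not the paper's actual implementation.

```python
import numpy as np

def optimistic_q(q_current, q_upper_bound):
    """Optimistic Q-values: take an elementwise upper bound over the
    current estimate, so an agent does not underestimate an action's
    potential value (one simple way to realize 'optimism')."""
    return np.maximum(q_current, q_upper_bound)

def soft_value(q_values, probs, alpha):
    """SAC-style entropy-regularized state value: E_pi[Q] + alpha * H(pi).
    The alpha * H(pi) term rewards stochastic policies and thus exploration."""
    entropy = -np.sum(probs * np.log(probs + 1e-8))
    return float(np.dot(probs, q_values) + alpha * entropy)

# Toy example: one agent, 3 discrete actions, uniform policy.
q_now = np.array([1.0, 0.5, 2.0])        # current critic estimate
q_bound = np.array([1.5, 0.5, 1.0])      # a tracked optimistic bound
q_opt = optimistic_q(q_now, q_bound)     # -> array([1.5, 0.5, 2.0])

uniform = np.full(3, 1.0 / 3.0)
v_soft = soft_value(q_opt, uniform, alpha=0.1)  # value plus entropy bonus
```

With `alpha = 0` the soft value reduces to the ordinary expected Q-value under the policy; raising `alpha` trades off return against policy entropy, which is the exploration mechanism the abstract attributes to SAC.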
Pages: 12