A Decentralized Communication Framework Based on Dual-Level Recurrence for Multiagent Reinforcement Learning

Cited by: 0
Authors:
Li, Xuesi [1 ]
Li, Jingchen [1 ]
Shi, Haobin [1 ]
Hwang, Kao-Shing [2 ]
Affiliations:
[1] Northwestern Polytech Univ, Sch Comp Sci & Engn, Xian 710129, Shaanxi, Peoples R China
[2] Natl Sun Yat Sen Univ, Dept Elect Engn, Kaohsiung 804, Taiwan
Funding:
National Natural Science Foundation of China
Keywords:
Reinforcement learning; Logic gates; Training; Adaptation models; Multi-agent systems; Task analysis; Decision making; Gated recurrent network; multiagent reinforcement learning; multiagent system
DOI:
10.1109/TCDS.2023.3281878
CLC Number:
TP18 [Artificial Intelligence Theory]
Discipline Codes:
081104; 0812; 0835; 1405
Abstract:
Designing communication channels among agents is a feasible way to conduct decentralized learning, especially in partially observable environments or large-scale multiagent systems. In this work, a communication model with dual-level recurrence is developed to provide a more efficient communication mechanism for multiagent reinforcement learning. Communication is conducted by a gated-attention-based recurrent network in which historical states are taken into account and treated as the second level of recurrence. We separate communication messages from memories in the recurrent model, so the proposed communication flow can adapt to changing communication partners when communication is limited, and the communication results are fair to every agent. We discuss our method in detail for both partially observable and fully observable environments. The results of several experiments suggest that our method outperforms existing decentralized communication frameworks as well as the corresponding centralized training method.
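The abstract describes a gated-attention recurrent communication step in which each agent's private memory is kept separate from the messages it broadcasts. The following is a minimal, illustrative NumPy sketch of one such communication round under that separation, not the paper's actual model; the function name `gated_attention_comm` and the projection matrices `Wq`, `Wk`, `Wg` are assumptions introduced for illustration only.

```python
import numpy as np

def gated_attention_comm(hidden, messages, Wq, Wk, Wg):
    """One communication round (illustrative sketch, not the paper's model).

    hidden:   (n_agents, d) private recurrent hidden states (memories)
    messages: (n_agents, d) outgoing messages, kept separate from memories
    Wq, Wk, Wg: (d, d) hypothetical query, key, and gate projections
    """
    q = hidden @ Wq                        # queries from private memories
    k = messages @ Wk                      # keys from broadcast messages
    scores = q @ k.T / np.sqrt(q.shape[1])
    np.fill_diagonal(scores, -np.inf)      # an agent does not attend to itself
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    aggregated = attn @ messages           # attention-weighted incoming messages
    gate = 1.0 / (1.0 + np.exp(-(hidden @ Wg)))  # sigmoid gate on the update
    new_hidden = gate * aggregated + (1.0 - gate) * hidden
    return new_hidden, attn
```

Because every agent runs the same parameter-shared update and attention weights are renormalized per receiver, the aggregation treats all agents symmetrically, which is one plausible reading of the "fair to every agent" property claimed in the abstract.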
Pages: 640-649 (10 pages)
Related Papers (50 total; items 41-50 shown):
  • [41] Lee, Eun Kyung; Viswanathan, Hariharasudhan; Pompili, Dario. RescueNet: Reinforcement-learning-based communication framework for emergency networking. COMPUTER NETWORKS, 2016, 98: 14-28.
  • [42] Zhang, Kaiqing; Yang, Zhuoran; Liu, Han; Zhang, Tong; Basar, Tamer. Finite-Sample Analysis for Decentralized Batch Multiagent Reinforcement Learning With Networked Agents. IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2021, 66(12): 5925-5940.
  • [43] Zhao, Yu; Shu, Qiaoyuan; Shi, Xi. Dual-level contrastive learning for unsupervised person re-identification. IMAGE AND VISION COMPUTING, 2023, 129.
  • [44] Wang, Cong; Cao, Xiaofeng; Guo, Lanzhe; Shi, Zenglin. DualMatch: Robust Semi-supervised Learning with Dual-Level Interaction. MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: RESEARCH TRACK, ECML PKDD 2023, PT V, 2023, 14173: 102-119.
  • [45] Zou, Xukai; Dai, Yuan-Shun; Ran, Xiang. Dual-Level Key Management for secure grid communication in dynamic and hierarchical groups. FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2007, 23(6): 776-786.
  • [46] Chen, Bin; Cao, Zehong; Bai, Quan. SATF: A Scalable Attentive Transfer Framework for Efficient Multiagent Reinforcement Learning. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024: 1-15.
  • [47] Tabas, Daniel; Zamzam, Ahmed S.; Zhang, Baosen. Interpreting Primal-Dual Algorithms for Constrained Multiagent Reinforcement Learning. LEARNING FOR DYNAMICS AND CONTROL CONFERENCE, VOL 211, 2023.
  • [48] Cao, Weihua; Chen, Gang; Chen, Xin; Wu, Min. Optimal tracking agent: a new framework of reinforcement learning for multiagent systems. CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2013, 25(14): 2002-2015.
  • [49] Jiang, Huiming; Zhu, Haifeng; Yuan, Jing; Zhao, Qian; Chen, Jin. A Dual-Level Adaptation Framework for Multichannel Cross-Condition Fault Diagnosis. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2024, 73.
  • [50] Nisioti, Eleni; Thomos, Nikolaos. Decentralized Reinforcement Learning Based MAC Optimization. 2018 IEEE 29TH ANNUAL INTERNATIONAL SYMPOSIUM ON PERSONAL, INDOOR AND MOBILE RADIO COMMUNICATIONS (PIMRC), 2018.