Multi-Agent Natural Actor-Critic Reinforcement Learning Algorithms

Cited by: 2
Authors
Trivedi, Prashant [1 ]
Hemachandra, Nandyala [1 ]
Institutions
[1] Indian Inst Technol, Ind Engn & Operat Res, Mumbai, Maharashtra, India
Keywords
Natural Gradients; Actor-Critic Methods; Networked Agents; Traffic Network Control; Stochastic Approximations; Function Approximations; Fisher Information Matrix; Non-Convex Optimization; Quasi Second-Order Methods; Local Optima Value Comparison; Algorithms for Better Local Minima; Optimization; Convergence
DOI
10.1007/s13235-022-00449-9
CLC Number
O1 [Mathematics]
Subject Classification Code
0701; 070101
Abstract
Multi-agent actor-critic algorithms are an important part of the Reinforcement Learning (RL) paradigm. We propose three fully decentralized multi-agent natural actor-critic (MAN) algorithms in this work. The objective is to collectively find a joint policy that maximizes the average long-term return of these agents. In the absence of a central controller, and to preserve privacy, agents communicate some information to their neighbors via a time-varying communication network. We prove convergence of all three MAN algorithms to a globally asymptotically stable set of the ODE corresponding to the actor update; all three use linear function approximations. We show that the Kullback-Leibler divergence between policies of successive iterates is proportional to the objective function's gradient. We observe that the minimum singular value of the Fisher information matrix is well within the reciprocal of the policy parameter dimension. Using this, we theoretically show that the optimal value of the deterministic variant of the MAN algorithm at each iterate dominates that of the standard gradient-based multi-agent actor-critic (MAAC) algorithm. To our knowledge, this is the first such result in multi-agent reinforcement learning (MARL). To illustrate the usefulness of our proposed algorithms, we implement them on a bi-lane traffic network to reduce the average network congestion. Two of the MAN algorithms reduce average congestion by almost 25%; the third is on par with the MAAC algorithm. We also consider a generic 15-agent MARL setting; the performance of the MAN algorithms is again as good as that of the MAAC algorithm.
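The core idea the abstract builds on is the natural-gradient actor update, which preconditions the policy gradient by the inverse Fisher information matrix, theta <- theta + lr * F^{-1} grad J. Below is a minimal single-agent sketch of that update for a tabular softmax policy; it illustrates the quasi-second-order step the paper's MAN algorithms decentralize, not the paper's own algorithms, and all function names here are my own.

```python
import numpy as np

def softmax_policy(theta):
    """Action probabilities of a tabular softmax policy pi(a) ∝ exp(theta_a)."""
    z = theta - theta.max()  # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

def fisher_matrix(theta):
    """Fisher information F = E_{a~pi}[∇log pi(a) ∇log pi(a)^T].
    For softmax, ∇_theta log pi(a) = e_a - p, giving F = diag(p) - p p^T."""
    p = softmax_policy(theta)
    return np.diag(p) - np.outer(p, p)

def natural_gradient_step(theta, q_values, lr=0.5, damping=1e-3):
    """One natural-gradient ascent step: theta <- theta + lr * F^{-1} ∇J."""
    n = len(theta)
    p = softmax_policy(theta)
    # Vanilla policy gradient: ∇J = sum_a pi(a) ∇log pi(a) Q(a)
    grad = sum(p[a] * (np.eye(n)[a] - p) * q_values[a] for a in range(n))
    # Damping keeps F invertible: the softmax Fisher is singular along the
    # all-ones direction (adding a constant to theta changes no probability).
    F = fisher_matrix(theta) + damping * np.eye(n)
    return theta + lr * np.linalg.solve(F, grad)
```

Iterating `natural_gradient_step` on fixed Q-values concentrates the policy on the highest-Q action; because the step is measured in KL geometry rather than parameter space, progress does not stall as the vanilla gradient shrinks near deterministic policies.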
Pages: 25-55 (31 pages)
Related Papers
50 records in total
  • [21] Actor-Attention-Critic for Multi-Agent Reinforcement Learning
    Iqbal, Shariq
    Sha, Fei
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [22] Natural actor-critic algorithms
    Bhatnagar, Shalabh
    Sutton, Richard S.
    Ghavamzadeh, Mohammad
    Lee, Mark
    [J]. AUTOMATICA, 2009, 45 (11) : 2471 - 2482
  • [23] Multi-agent off-policy actor-critic algorithm for distributed multi-task reinforcement learning
    Stankovic, Milos S.
    Beko, Marko
    Ilic, Nemanja
    Stankovic, Srdjan S.
    [J]. EUROPEAN JOURNAL OF CONTROL, 2023, 74
  • [24] Bi-Level Actor-Critic for Multi-Agent Coordination
    Zhang, Haifeng
    Chen, Weizhe
    Huang, Zeren
    Li, Minne
    Yang, Yaodong
    Zhang, Weinan
    Wang, Jun
    [J]. THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 7325 - 7332
  • [25] Context-Aware Bayesian Network Actor-Critic Methods for Cooperative Multi-Agent Reinforcement Learning
    Chen, Dingyang
    Zhang, Qi
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202, 2023, 202
  • [26] UAV Assisted Cooperative Caching on Network Edge Using Multi-Agent Actor-Critic Reinforcement Learning
    Araf, Sadman
    Saha, Adittya Soukarjya
    Kazi, Sadia Hamid
    Tran, Nguyen H. H.
    Alam, Md. Golam Rabiul
    [J]. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72 (02) : 2322 - 2337
  • [27] Divergence-Regularized Multi-Agent Actor-Critic
    Su, Kefan
    Lu, Zongqing
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [28] AHAC: Actor Hierarchical Attention Critic for Multi-Agent Reinforcement Learning
    Wang, Yajie
    Shi, Dianxi
    Xue, Chao
    Jiang, Hao
    Wang, Gongju
    Gong, Peng
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2020, : 3013 - 3020
  • [29] Multi-agent Gradient-Based Off-Policy Actor-Critic Algorithm for Distributed Reinforcement Learning
    Ren, Jineng
    [J]. INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS, 2024, 17 (01)
  • [30] Bias in Natural Actor-Critic Algorithms
    Thomas, Philip S.
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 32 (CYCLE 1), 2014, 32