Multi-agent consensus under communication failure using Actor-Critic Reinforcement Learning

Cited by: 0
Authors
Kandath, Harikumar [1 ]
Senthilnath, J. [2 ]
Sundaram, Suresh [1 ]
Affiliations
[1] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore
[2] ASTAR, Inst Infocomm Res, Singapore 138632, Singapore
Keywords
Actor; Consensus; Critic; Neural network; Reinforcement learning; Multiagent systems
DOI
Not available
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This paper addresses the problem of achieving multi-agent consensus under sudden total communication failure. The agents are assumed to move along the periphery of a circle. The proposed solution uses the actor-critic reinforcement learning method to achieve consensus when there is no communication between the agents. A performance index is defined that takes into account the difference in angular position between neighbouring agents. The actions taken by each agent while achieving consensus under full communication are learned by an actor neural network, while the critic neural network learns to predict the performance index. The proposed solution is validated by a numerical simulation with five agents moving along the periphery of a circle.
Pages: 1461-1465
Page count: 5
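
To make the mechanism described in the abstract concrete, the following sketch (a loose illustration under stated assumptions, not the authors' implementation) trains a per-agent actor and critic on a directed ring of agents: the critic learns to predict a performance index built from the angular difference to the neighbouring agent, and the actor learns a gain that reduces that index. The agent count, the exact form of the performance index, the ring topology, the learning rates, and the use of linear function approximators in place of the paper's neural networks are all assumptions made for the example.

# Minimal actor-critic consensus sketch (assumed details, not the paper's method).
import numpy as np

N_AGENTS = 5          # assumed number of agents
ALPHA_ACTOR = 0.005   # assumed actor step size
ALPHA_CRITIC = 0.01   # assumed critic step size
SIGMA = 0.1           # exploration noise during the learning phase
STEPS = 2000

rng = np.random.default_rng(0)
# Start the agents inside a quarter arc so angle wrapping stays trivial.
theta = rng.uniform(0.0, 0.5 * np.pi, N_AGENTS)

w_actor = np.zeros(N_AGENTS)    # per-agent actor gain
w_critic = np.zeros(N_AGENTS)   # per-agent critic weight

def wrap(a):
    """Map angle differences into (-pi, pi]."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def neighbour_error(th):
    """Angular difference to the next agent around a directed ring."""
    return wrap(np.roll(th, -1) - th)

# Learning phase: neighbour information is available (full communication).
for _ in range(STEPS):
    err = neighbour_error(theta)
    noise = SIGMA * rng.standard_normal(N_AGENTS)
    action = w_actor * err + noise                    # exploratory policy
    theta_next = (theta + action) % (2.0 * np.pi)

    j = neighbour_error(theta_next) ** 2              # performance index (assumed form)
    j_pred = w_critic * err ** 2                      # critic's prediction

    # Critic: regress the prediction toward the observed index.
    w_critic += ALPHA_CRITIC * (j - j_pred) * err ** 2
    # Actor: policy-gradient step that lowers the index, with the critic as baseline.
    w_actor -= ALPHA_ACTOR * (j - j_pred) * noise * err
    w_actor = np.clip(w_actor, 0.0, 1.0)              # keep gains in a stable range

    theta = theta_next

# After learning: run the deterministic actor without exploration, a simplified
# stand-in for using the trained actor once communication is lost.
for _ in range(200):
    theta = (theta + w_actor * neighbour_error(theta)) % (2.0 * np.pi)

print("learned gains:", np.round(w_actor, 2))
print("residual neighbour differences:", np.round(neighbour_error(theta), 3))

In the paper both the actor and the critic are neural networks, and the actor trained under full communication is what each agent falls back on when communication fails; the sketch only mirrors that structure at the level of a single learned gain per agent.
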
Related Papers
50 items in total
  • [1] A multi-agent reinforcement learning using Actor-Critic methods
    Li, Chun-Gui
    Wang, Meng
    Yuan, Qing-Neng
    PROCEEDINGS OF 2008 INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND CYBERNETICS, VOLS 1-7, 2008, : 878 - 882
  • [2] Multi-Agent Actor-Critic Multitask Reinforcement Learning based on GTD(1) with Consensus
    Stankovic, Milos S.
    Beko, Marko
    Ilic, Nemanja
    Stankovic, Srdjan S.
    2022 IEEE 61ST CONFERENCE ON DECISION AND CONTROL (CDC), 2022, : 4591 - 4596
  • [3] Anomaly Detection Under Controlled Sensing Using Actor-Critic Reinforcement Learning
    Joseph, Geethu
    Gursoy, M. Cenk
    Varshney, Pramod K.
    PROCEEDINGS OF THE 21ST IEEE INTERNATIONAL WORKSHOP ON SIGNAL PROCESSING ADVANCES IN WIRELESS COMMUNICATIONS (IEEE SPAWC2020), 2020,
  • [4] A Communication-Efficient Multi-Agent Actor-Critic Algorithm for Distributed Reinforcement Learning
    Lin, Yixuan
    Zhang, Kaiqing
    Yang, Zhuoran
    Wang, Zhaoran
    Basar, Tamer
    Sandhu, Romeil
    Liu, Ji
    2019 IEEE 58TH CONFERENCE ON DECISION AND CONTROL (CDC), 2019, : 5562 - 5567
  • [5] Actor-Critic Algorithms for Constrained Multi-agent Reinforcement Learning
    Diddigi, Raghuram Bharadwaj
    Reddy, D. Sai Koti
    Prabuchandran, K. J.
    Bhatnagar, Shalabh
    AAMAS '19: PROCEEDINGS OF THE 18TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS, 2019, : 1931 - 1933
  • [6] Multi-Agent Natural Actor-Critic Reinforcement Learning Algorithms
    Trivedi, Prashant
    Hemachandra, Nandyala
    DYNAMIC GAMES AND APPLICATIONS, 2023, 13 : 25 - 55
  • [7] Shared Experience Actor-Critic for Multi-Agent Reinforcement Learning
    Christianos, Filippos
    Schafer, Lukas
    Albrecht, Stefano V.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [8] Dynamic Pricing Based on Demand Response Using Actor-Critic Agent Reinforcement Learning
    Ismail, Ahmed
    Baysal, Mustafa
    ENERGIES, 2023, 16 (14)
  • [9] Multi-Agent Natural Actor-Critic Reinforcement Learning Algorithms
    Trivedi, Prashant
    Hemachandra, Nandyala
    DYNAMIC GAMES AND APPLICATIONS, 2023, 13 (01) : 25 - 55
  • [10] Distributed Multi-Agent Reinforcement Learning by Actor-Critic Method
    Heredia, Paulo C.
    Mou, Shaoshuai
    IFAC PAPERSONLINE, 2019, 52 (20): : 363 - 368