The Implementation of Asynchronous Advantage Actor-Critic with Stigmergy in Network-assisted Multi-agent System

Cited: 0
Authors
Chen, Kun [1 ]
Li, Rongpeng [1 ]
Zhao, Zhifeng [2 ]
Zhang, Honggang [1 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Peoples R China
[2] Zhejiang Lab, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China; National Key R&D Program of China;
Keywords
multi-agent system; stigmergy mechanism; digital pheromones; deep reinforcement learning; KHEPERA IV robots;
DOI
10.1109/wcsp49889.2020.9299839
CLC Number
TP3 [computing technology, computer technology];
Discipline Code
0812;
Abstract
A multi-agent system (MAS) must mobilize multiple simple agents to complete complex tasks. However, it is difficult to coherently coordinate distributed agents using only limited local information. In this paper, we propose a decentralized collaboration method named "stigmergy" for network-assisted MAS, which exploits digital pheromones (DP) as an indirect medium of communication and applies deep reinforcement learning (DRL) on top. Correspondingly, we implement an experimental platform on which KHEPERA IV robots form specific target shapes in a decentralized manner. Experimental results demonstrate the effectiveness and efficiency of the proposed method. Our platform can be conveniently extended to investigate the impact of network factors (e.g., latency and data rate) on the level of collective intelligence.
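The record does not include the paper's implementation details, but the stigmergy mechanism it describes can be sketched minimally: agents deposit digital pheromones into a shared medium, the medium evaporates and diffuses over time, and each agent senses only its local pheromone value (which could then feed a DRL policy). The grid shape, evaporation rate, and diffusion rate below are illustrative assumptions, not values from the paper.

```python
import numpy as np

class PheromoneGrid:
    """Minimal digital-pheromone medium (illustrative, not the paper's
    implementation): agents deposit scalar pheromone at grid cells; the
    field evaporates and diffuses each step, acting as an indirect
    communication channel between agents."""

    def __init__(self, shape=(20, 20), evaporation=0.1, diffusion=0.05):
        self.field = np.zeros(shape)
        self.evaporation = evaporation  # fraction lost per step
        self.diffusion = diffusion      # fraction shared with 4-neighbours

    def deposit(self, pos, amount=1.0):
        # An agent marks its current cell.
        self.field[pos] += amount

    def step(self):
        # Evaporation: exponential decay of the whole field.
        self.field *= (1.0 - self.evaporation)
        # Diffusion: each cell gives away a fraction of its pheromone,
        # split equally among its 4 neighbours (zero-padded borders).
        spread = self.diffusion * self.field
        padded = np.pad(spread, 1, mode="constant")
        incoming = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                    padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        self.field += incoming - spread

    def sense(self, pos):
        # Local observation an agent's policy could consume.
        return self.field[pos]

grid = PheromoneGrid()
grid.deposit((10, 10), amount=4.0)
grid.step()
print(grid.sense((10, 10)), grid.sense((9, 10)))
```

After one step the deposited pheromone has partly decayed and partly leaked to the four neighbouring cells, so a nearby agent senses a small nonzero value without any direct message exchange; this locality is what makes the coordination decentralized.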
Pages: 1082-1087
Page count: 6
Related Papers
50 records
  • [1] A New Advantage Actor-Critic Algorithm For Multi-Agent Environments
    Paczolay, Gabor
    Harmati, Istvan
    [J]. 2020 23RD IEEE INTERNATIONAL SYMPOSIUM ON MEASUREMENT AND CONTROL IN ROBOTICS (ISMCR), 2020,
  • [2] Multi-Agent Actor-Critic with Hierarchical Graph Attention Network
    Ryu, Heechang
    Shin, Hayong
    Park, Jinkyoo
    [J]. THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 7236 - 7243
  • [3] Local Advantage Actor-Critic for Robust Multi-Agent Deep Reinforcement Learning
    Xiao, Yuchen
    Lyu, Xueguang
    Amato, Christopher
    [J]. 2021 INTERNATIONAL SYMPOSIUM ON MULTI-ROBOT AND MULTI-AGENT SYSTEMS (MRS), 2021, : 155 - 163
  • [4] Distributed Multi-Agent Approach for Achieving Energy Efficiency and Computational Offloading in MECNs Using Asynchronous Advantage Actor-Critic
    Khan, Israr
    Raza, Salman
    Khan, Razaullah
    Rehman, Waheed ur
    Rahman, G. M. Shafiqur
    Tao, Xiaofeng
    [J]. ELECTRONICS, 2023, 12 (22)
  • [5] Bi-Level Actor-Critic for Multi-Agent Coordination
    Zhang, Haifeng
    Chen, Weizhe
    Huang, Zeren
    Li, Minne
    Yang, Yaodong
    Zhang, Weinan
    Wang, Jun
    [J]. THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 7325 - 7332
  • [6] Divergence-Regularized Multi-Agent Actor-Critic
    Su, Kefan
    Lu, Zongqing
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [7] UAV Assisted Cooperative Caching on Network Edge Using Multi-Agent Actor-Critic Reinforcement Learning
    Araf, Sadman
    Saha, Adittya Soukarjya
    Kazi, Sadia Hamid
    Tran, Nguyen H. H.
    Alam, Md. Golam Rabiul
    [J]. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72 (02) : 2322 - 2337
  • [8] Asynchronous Advantage Actor-Critic with Double Attention Mechanisms
    Ling, Xing-Hong
    Li, Jie
    Zhu, Fei
    Liu, Quan
    Fu, Yu-Chen
    [J]. Jisuanji Xuebao/Chinese Journal of Computers, 2020, 43 (01): : 93 - 106
  • [9] Actor-Critic Algorithms for Constrained Multi-agent Reinforcement Learning
    Diddigi, Raghuram Bharadwaj
    Reddy, D. Sai Koti
    Prabuchandran, K. J.
    Bhatnagar, Shalabh
    [J]. AAMAS '19: PROCEEDINGS OF THE 18TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS, 2019, : 1931 - 1933
  • [10] Multi-Agent Natural Actor-Critic Reinforcement Learning Algorithms
    Prashant Trivedi
    Nandyala Hemachandra
    [J]. Dynamic Games and Applications, 2023, 13 : 25 - 55