Co-evolution of synchronization and cooperation with multi-agent Q-learning

Cited by: 5
Authors
Zhu, Peican [1 ]
Cao, Zhaoheng [2 ]
Liu, Chen [3 ]
Chu, Chen [4 ]
Wang, Zhen [5 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Artificial Intelligence Opt & Elect iOPEN, Xian 710072, Peoples R China
[2] Northwestern Polytech Univ, Sch Comp Sci, Xian 710072, Peoples R China
[3] Northwestern Polytech Univ, Sch Ecol & Environm, Xian 710072, Peoples R China
[4] Yunnan Univ Finance & Econ, Sch Stat & Math, Kunming 650221, Peoples R China
[5] Northwestern Polytech Univ, Sch Cybersecur, Xian 710072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DILEMMA; REPUTATION; EVOLUTION; KURAMOTO; STRATEGY;
DOI
10.1063/5.0141824
CLC number
O29 [Applied Mathematics];
Discipline classification code
070104;
Abstract
Cooperation is a widespread phenomenon in human society and plays a significant role in achieving synchronization of various systems. However, there has been limited progress in studying the co-evolution of synchronization and cooperation. In this manuscript, we investigate how reinforcement learning affects the evolution of synchronization and cooperation: an agent's payoff depends not only on the cooperation dynamics but also on the synchronization dynamics. Each agent can either cooperate or defect; cooperation promotes synchronization among agents, whereas defection does not. We report that the dynamic feature, i.e., how frequently an agent switches actions during interactions, promotes synchronization. We also find that cooperation and synchronization are mutually reinforcing. Furthermore, we thoroughly analyze the potential reasons why the dynamic feature promotes synchronization, from both macro- and micro-perspectives. Additionally, we conduct experiments to illustrate the differences between the synchronization-promoting effects of cooperation and of the dynamic feature.
Pages: 8
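
The abstract does not give the model's equations or parameters, so the following is a minimal, illustrative sketch of one plausible reading of the setup it describes: Q-learning agents on a ring choose between cooperation and defection, only cooperating pairs exchange Kuramoto phase coupling, and each agent's reward mixes a game payoff with a local synchronization term. The ring topology, the payoff and reward forms, and every parameter value below are assumptions for illustration, not the authors' model.

```python
import numpy as np

# Sketch (assumed, not the paper's model): Q-learning agents on a ring play a
# prisoner's-dilemma-like game; cooperating pairs couple their Kuramoto phases,
# and rewards blend game payoff with local phase alignment.

rng = np.random.default_rng(0)

N = 50                                # number of agents (assumed)
K = 1.0                               # Kuramoto coupling strength (assumed)
dt = 0.1                              # integration step (assumed)
alpha, gamma, eps = 0.1, 0.9, 0.05    # learning rate, discount, exploration (assumed)
b = 1.5                               # temptation payoff for defection (assumed)

theta = rng.uniform(0, 2 * np.pi, N)  # oscillator phases
omega = rng.normal(0, 0.5, N)         # natural frequencies
Q = np.zeros((N, 2))                  # Q-table: action 0 = defect, 1 = cooperate
neighbors = [((i - 1) % N, (i + 1) % N) for i in range(N)]  # ring topology

def order_parameter(phases):
    """Kuramoto order parameter r in [0, 1]; r = 1 means full phase sync."""
    return abs(np.exp(1j * phases).mean())

for step in range(5000):
    # epsilon-greedy action selection for every agent
    greedy = Q.argmax(axis=1)
    explore = rng.random(N) < eps
    actions = np.where(explore, rng.integers(0, 2, N), greedy)

    # phase update: only mutually cooperating pairs exchange coupling
    dtheta = omega.copy()
    for i in range(N):
        for j in neighbors[i]:
            if actions[i] == 1 and actions[j] == 1:
                dtheta[i] += K * np.sin(theta[j] - theta[i])
    theta = (theta + dt * dtheta) % (2 * np.pi)

    # reward = game payoff + local synchronization bonus (assumed form)
    for i in range(N):
        payoff, sync = 0.0, 0.0
        for j in neighbors[i]:
            if actions[i] == 1 and actions[j] == 1:
                payoff += 1.0            # mutual cooperation
            elif actions[i] == 0 and actions[j] == 1:
                payoff += b              # defecting against a cooperator
            sync += np.cos(theta[j] - theta[i])
        reward = payoff + sync / len(neighbors[i])
        # stateless Q-learning update, bootstrapping on the agent's own max
        Q[i, actions[i]] += alpha * (reward + gamma * Q[i].max() - Q[i, actions[i]])

print(f"final order parameter r = {order_parameter(theta):.3f}, "
      f"cooperation rate = {actions.mean():.2f}")
```

In this toy setup, runs where cooperation takes hold tend to drive the order parameter r toward 1, loosely mirroring the mutual reinforcement of cooperation and synchronization that the abstract reports.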