Co-evolution of synchronization and cooperation with multi-agent Q-learning

Cited by: 5
Authors
Zhu, Peican [1 ]
Cao, Zhaoheng [2 ]
Liu, Chen [3 ]
Chu, Chen [4 ]
Wang, Zhen [5 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Artificial Intelligence Opt & Elect iOPEN, Xian 710072, Peoples R China
[2] Northwestern Polytech Univ, Sch Comp Sci, Xian 710072, Peoples R China
[3] Northwestern Polytech Univ, Sch Ecol & Environm, Xian 710072, Peoples R China
[4] Yunnan Univ Finance & Econ, Sch Stat & Math, Kunming 650221, Peoples R China
[5] Northwestern Polytech Univ, Sch Cybersecur, Xian 710072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DILEMMA; REPUTATION; EVOLUTION; KURAMOTO; STRATEGY;
DOI
10.1063/5.0141824
Chinese Library Classification
O29 [Applied Mathematics];
Subject Classification Code
070104;
Abstract
Cooperation is a widespread phenomenon in human society and plays a significant role in achieving synchronization in various systems. However, there has been limited progress in studying the co-evolution of synchronization and cooperation. In this manuscript, we investigate how reinforcement learning affects the evolution of synchronization and cooperation. Specifically, the payoff of an agent depends not only on the cooperation dynamics but also on the synchronization dynamics. Agents can either cooperate or defect: cooperation promotes synchronization among agents, while defection does not. We report that the dynamic feature, which measures how frequently an agent switches actions during interactions, promotes synchronization. We also find that cooperation and synchronization are mutually reinforcing. Furthermore, we thoroughly analyze the potential reasons why the dynamic feature promotes synchronization from both macroscopic and microscopic perspectives. Additionally, we conduct experiments to illustrate the differences between the synchronization-promoting effects of cooperation and of the dynamic feature.
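To make the setup described in the abstract concrete, below is a minimal Python sketch of this kind of co-evolution model: Kuramoto oscillators whose phases couple only across cooperator-cooperator links, with each agent choosing to cooperate or defect via tabular Q-learning on a reward that mixes a game payoff with local phase coherence. This is an illustration under assumptions, not the authors' exact model: the ring topology, the weak prisoner's dilemma payoffs, the reward mixing, the coupling rule, and all parameter values (K, alpha, gamma, eps, b) are hypothetical choices made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, dt = 50, 1.0, 0.1              # agents, coupling strength, time step (assumed)
alpha, gamma, eps = 0.1, 0.9, 0.05   # Q-learning rate, discount, exploration (assumed)
b = 1.2                              # defection temptation in a weak PD (assumed)

theta = rng.uniform(0.0, 2.0 * np.pi, N)   # oscillator phases
omega = rng.normal(0.0, 0.5, N)            # natural frequencies
action = rng.integers(0, 2, N)             # 1 = cooperate, 0 = defect
Q = np.zeros((N, 2, 2))                    # Q[agent, state, action]; state = last action
neigh = [[(i - 1) % N, (i + 1) % N] for i in range(N)]  # ring topology (assumed)

for step in range(5000):
    # Epsilon-greedy action selection; each agent's state is its previous action.
    state = action.copy()
    for i in range(N):
        if rng.random() < eps:
            action[i] = rng.integers(0, 2)
        else:
            action[i] = int(np.argmax(Q[i, state[i]]))

    # Kuramoto step: phases couple only across cooperator-cooperator links (assumed rule).
    dtheta = omega.copy()
    for i in range(N):
        for j in neigh[i]:
            if action[i] == 1 and action[j] == 1:
                dtheta[i] += K * np.sin(theta[j] - theta[i])
    theta = (theta + dt * dtheta) % (2.0 * np.pi)

    # Reward mixes a weak-PD game payoff with local phase coherence (assumed mixing).
    payoff = np.zeros(N)
    for i in range(N):
        for j in neigh[i]:
            if action[j] == 1:                        # neighbour cooperates
                payoff[i] += 1.0 if action[i] == 1 else b
            payoff[i] += np.cos(theta[j] - theta[i])  # sync bonus in [-1, 1]

    # Tabular Q-learning update for each agent.
    for i in range(N):
        s, a = state[i], action[i]
        Q[i, s, a] += alpha * (payoff[i] + gamma * Q[i, a].max() - Q[i, s, a])

r = np.abs(np.exp(1j * theta).mean())  # Kuramoto order parameter
print(f"order parameter r = {r:.3f}, cooperation level = {action.mean():.2f}")
```

The Kuramoto order parameter r = |mean(exp(i*theta))| approaches 1 under full phase locking, so tracking r alongside the mean cooperation level over a run is one way to observe the kind of co-evolution the abstract describes.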
Pages: 8
Related Articles
50 records in total
  • [1] Q-learning in Multi-Agent Cooperation
    Hwang, Kao-Shing
    Chen, Yu-Jen
    Lin, Tzung-Feng
    [J]. 2008 IEEE WORKSHOP ON ADVANCED ROBOTICS AND ITS SOCIAL IMPACTS, 2008, : 239 - 244
  • [2] Continuous Q-Learning for Multi-Agent Cooperation
    Hwang, Kao-Shing
    Jiang, Wei-Cheng
    Lin, Yu-Hong
    Lai, Li-Hsin
    [J]. CYBERNETICS AND SYSTEMS, 2012, 43 (03) : 227 - 256
  • [3] Q-learning with FCMAC in multi-agent cooperation
    Hwang, Kao-Shing
    Chen, Yu-Jen
    Lin, Tzung-Feng
    [J]. ADVANCES IN NEURAL NETWORKS - ISNN 2006, PT 1, 2006, 3971 : 599 - 606
  • [4] Continuous Action Generation of Q-Learning in Multi-Agent Cooperation
    Hwang, Kao-Shing
    Chen, Yu-Jen
    Jiang, Wei-Cheng
    Lin, Tzung-Feng
    [J]. ASIAN JOURNAL OF CONTROL, 2013, 15 (04) : 1011 - 1020
  • [5] Real-Valued Q-learning in Multi-agent Cooperation
    Hwang, Kao-Shing
    Lo, Chia-Yue
    Chen, Kim-Joan
    [J]. 2009 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC 2009), VOLS 1-9, 2009, : 395 - 400
  • [6] Modular Q-learning based multi-agent cooperation for robot soccer
    Park, KH
    Kim, YJ
    Kim, JH
    [J]. ROBOTICS AND AUTONOMOUS SYSTEMS, 2001, 35 (02) : 109 - 122
  • [7] Multi-Agent Advisor Q-Learning
    Subramanian, Sriram Ganapathi
    Taylor, Matthew E.
    Larson, Kate
    Crowley, Mark
    [J]. JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2022, 74 : 1 - 74
  • [8] Multi-Agent Advisor Q-Learning
    Subramanian, Sriram Ganapathi
    Taylor, Matthew E.
    Larson, Kate
    Crowley, Mark
    [J]. PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 6884 - 6889
  • [9] Multi-Agent Cooperation Q-Learning Algorithm Based on Constrained Markov Game
    Ge, Yangyang
    Zhu, Fei
    Huang, Wei
    Zhao, Peiyao
    Liu, Quan
    [J]. COMPUTER SCIENCE AND INFORMATION SYSTEMS, 2020, 17 (02) : 647 - 664