Regional Cooperative Multi-agent Q-learning Based on Potential Field

Cited: 3
Authors
Liu, Liang [1 ]
Li, Longshu [1 ]
Institutions
[1] Anhui Univ, Key Lab Intelligent Comp & Signal Proc, Hefei 230039, Peoples R China
Keywords
DOI
10.1109/ICNC.2008.173
CLC Classification
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
More and more Artificial Intelligence researchers have focused on reinforcement learning (RL)-based multi-agent systems (MAS). Multi-agent learning problems can in principle be solved by treating the joint actions of the agents as single actions and applying single-agent Q-learning. However, the number of joint actions is exponential in the number of agents, rendering this approach infeasible for most problems. In this paper we investigate a regional cooperative representation of the Q-function based on a potential field, considering joint actions only in those states in which coordination is actually required; in all other states single-agent Q-learning is applied. This offers a compact state-action value representation without compromising much in terms of solution quality. We have performed experiments in the RoboCup 2D simulation league, an ideal testing platform for multi-agent systems, and compared our algorithm to other multi-agent reinforcement learning algorithms, with promising results.
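The core idea in the abstract — joint-action Q-learning only in designated coordination states, independent single-agent Q-learning everywhere else — can be illustrated with a minimal toy sketch. This is not the authors' RoboCup implementation; the two-agent setup, the hard-coded `coordination_states` set, and the even split of the shared reward are all illustrative assumptions.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
ACTIONS = [0, 1]          # toy per-agent action set (assumption)

# Individual Q-tables for two agents, plus one joint table that is
# consulted only in the designated coordination states.
q_single = [defaultdict(float), defaultdict(float)]
q_joint = defaultdict(float)
coordination_states = {2, 3}   # hypothetical states requiring coordination

def select_actions(state):
    """Epsilon-greedy: joint selection in coordination states, independent otherwise."""
    if random.random() < EPS:
        return (random.choice(ACTIONS), random.choice(ACTIONS))
    if state in coordination_states:
        # Maximize over the (exponential) joint-action space, but only here.
        return max(((a0, a1) for a0 in ACTIONS for a1 in ACTIONS),
                   key=lambda ja: q_joint[(state, ja)])
    # Each agent maximizes its own Q-table independently.
    return tuple(max(ACTIONS, key=lambda a: q_single[i][(state, a)])
                 for i in range(2))

def update(state, actions, reward, next_state):
    """Q-learning backup against whichever table owns each state."""
    if next_state in coordination_states:
        next_v = max(q_joint[(next_state, (a0, a1))]
                     for a0 in ACTIONS for a1 in ACTIONS)
    else:
        # Value of an uncoordinated state: sum of the agents' individual maxima.
        next_v = sum(max(q_single[i][(next_state, a)] for a in ACTIONS)
                     for i in range(2))
    target = reward + GAMMA * next_v
    if state in coordination_states:
        q_joint[(state, actions)] += ALPHA * (target - q_joint[(state, actions)])
    else:
        # Toy choice: split the shared target evenly between the agents.
        for i in range(2):
            q_single[i][(state, actions[i])] += ALPHA * (
                target / 2 - q_single[i][(state, actions[i])])
```

The state-action table thus grows linearly with the number of agents except in the few coordination states, which is the compactness the abstract claims.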
Pages: 535 - 539
Page count: 5
Related Papers
50 items total
  • [31] Multi-Agent Q-Learning for Drone Base Stations
    Janji, Salim
    Kliks, Adrian
    2023 19TH INTERNATIONAL CONFERENCE ON WIRELESS AND MOBILE COMPUTING, NETWORKING AND COMMUNICATIONS, WIMOB, 2023, : 261 - 266
  • [32] Cooperative Output Regulation By Q-learning For Discrete Multi-agent Systems In Finite-time
    Wei, Wenjun
    Tang, Jingyuan
    JOURNAL OF APPLIED SCIENCE AND ENGINEERING, 2022, 26 (06): : 853 - 864
  • [33] Q-learning Algorithm Based Multi-Agent Coordinated Control Method for Microgrids
    Xi, Yuanyuan
    Chang, Liuchen
    Mao, Meiqin
    Jin, Peng
    Hatziargyriou, Nikos
    Xu, Haibo
    2015 9TH INTERNATIONAL CONFERENCE ON POWER ELECTRONICS AND ECCE ASIA (ICPE-ECCE ASIA), 2015, : 1497 - 1504
  • [34] Q-Learning based Protection Scheme for Microgrid using Multi-Agent System
    Satuyeva, Botazhan
    Sultankulov, Bekbol
    Nunna, H. S. V. S. Kumar
    Kalakova, Aidana
    Doolla, Suryanarayana
    2019 2ND INTERNATIONAL CONFERENCE ON SMART ENERGY SYSTEMS AND TECHNOLOGIES (SEST 2019), 2019,
  • [35] Consensus of discrete-time multi-agent system based on Q-learning
    Zhu Z.-B.
    Wang F.-Y.
    Yin Y.-H.
    Liu Z.-X.
    Chen Z.-Q.
    Kongzhi Lilun Yu Yingyong/Control Theory and Applications, 2021, 38 (07): : 997 - 1005
  • [36] Multi-Agent Cooperation Q-Learning Algorithm Based on Constrained Markov Game
    Ge, Yangyang
    Zhu, Fei
    Huang, Wei
    Zhao, Peiyao
    Liu, Quan
    COMPUTER SCIENCE AND INFORMATION SYSTEMS, 2020, 17 (02) : 647 - 664
  • [37] The acquisition of sociality by using Q-learning in a multi-agent environment
    Nagayuki, Yasuo
    PROCEEDINGS OF THE SIXTEENTH INTERNATIONAL SYMPOSIUM ON ARTIFICIAL LIFE AND ROBOTICS (AROB 16TH '11), 2011, : 820 - 823
  • [38] Q-Learning with Side Information in Multi-Agent Finite Games
    Sylvestre, Mathieu
    Pavel, Lacra
    2019 IEEE 58TH CONFERENCE ON DECISION AND CONTROL (CDC), 2019, : 5032 - 5037
  • [39] Multi-Agent Reward-Iteration Fuzzy Q-Learning
    Leng, Lixiong
    Li, Jingchen
    Zhu, Jinhui
    Hwang, Kao-Shing
    Shi, Haobin
    INTERNATIONAL JOURNAL OF FUZZY SYSTEMS, 2021, 23 (06) : 1669 - 1679
  • [40] Multi-Agent Q-Learning with Joint State Value Approximation
    Chen Gang
    Cao Weihua
    Chen Xin
    Wu Min
    2011 30TH CHINESE CONTROL CONFERENCE (CCC), 2011, : 4878 - 4882