Multi-agent algorithms for solving graphical games

Cited by: 0
Authors
Vickrey, D [1 ]
Koller, D [1 ]
Affiliation
[1] Stanford Univ, Dept Comp Sci, Stanford, CA 94305 USA
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Consider the problem of a group of agents trying to find a stable strategy profile for a joint interaction. A standard approach is to describe the situation as a single multi-player game and find an equilibrium strategy profile of that game. However, most algorithms for finding equilibria are computationally expensive; they are also centralized, requiring that all relevant payoff information be available to a single agent (or computer) that must determine the entire equilibrium profile. In this paper, we exploit two ideas to address these problems. We consider structured game representations, where the interaction between the agents is sparse, an assumption that holds in many real-world situations. We also consider the slightly relaxed task of finding an approximate equilibrium. We present two algorithms for finding approximate equilibria in these games, one based on a hill-climbing approach and one on constraint satisfaction. We show that these algorithms exploit the game structure to achieve faster computation. They are also inherently local, requiring only limited communication between directly interacting agents. They can thus be scaled to games involving large numbers of agents, provided the interaction between the agents is not too dense.
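The hill-climbing idea described in the abstract can be illustrated with a small sketch. The Python snippet below is a simplified, hypothetical illustration rather than the paper's algorithm: it runs greedy best-response updates over pure strategy profiles of a toy graphical game and stops once no agent can gain more than epsilon by deviating (an epsilon-equilibrium). All names here (GraphicalGame, regret, hill_climb) and the toy payoff function are invented for the example; the paper's own methods, including the constraint-satisfaction variant, are only summarized in the abstract above.

```python
# Minimal sketch (not the authors' exact algorithm): greedy best-response
# hill-climbing on pure strategy profiles of a graphical game, stopping
# once no agent can gain more than epsilon by deviating.

import random


class GraphicalGame:
    """Each agent's payoff depends only on its own action and its neighbors' actions."""

    def __init__(self, actions, neighbors, payoff):
        self.actions = actions      # dict: agent -> list of available actions
        self.neighbors = neighbors  # dict: agent -> list of neighboring agents
        self.payoff = payoff        # payoff(agent, own_action, {neighbor: action}) -> float

    def local_profile(self, agent, profile):
        """Restrict a full strategy profile to the agent's neighborhood."""
        return {n: profile[n] for n in self.neighbors[agent]}

    def regret(self, agent, profile):
        """Return (gain from the best unilateral deviation, the best action)."""
        local = self.local_profile(agent, profile)
        current = self.payoff(agent, profile[agent], local)
        best_action = max(self.actions[agent],
                          key=lambda a: self.payoff(agent, a, local))
        best = self.payoff(agent, best_action, local)
        return best - current, best_action


def hill_climb(game, epsilon=0.05, max_steps=10_000, seed=0):
    """Repeatedly let the highest-regret agent best-respond until an epsilon-equilibrium."""
    rng = random.Random(seed)
    profile = {ag: rng.choice(acts) for ag, acts in game.actions.items()}
    for _ in range(max_steps):
        regrets = {ag: game.regret(ag, profile) for ag in game.actions}
        agent, (gain, best_action) = max(regrets.items(), key=lambda kv: kv[1][0])
        if gain <= epsilon:           # no agent can improve by more than epsilon
            return profile
        profile[agent] = best_action  # only this agent's neighborhood is affected
    return profile                    # may not have converged within max_steps


# Toy example: three agents on a path, each rewarded for mismatching its neighbors.
actions = {a: [0, 1] for a in "ABC"}
neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
payoff = lambda agent, act, local: sum(act != v for v in local.values())
print(hill_climb(GraphicalGame(actions, neighbors, payoff)))
```

Because the game is graphical, each update only recomputes payoffs over the switching agent's neighborhood, which reflects the locality and sparsity argument made in the abstract; it does not reproduce the paper's treatment of mixed strategies or its convergence analysis.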
Pages: 345-351
Page count: 7