Network Learning in Quadratic Games From Best-Response Dynamics

Cited by: 0
Authors
Ding, Kemi [1 ,2 ]
Chen, Yijun [3 ]
Wang, Lei [4 ]
Ren, Xiaoqiang [5 ]
Shi, Guodong [3 ]
Affiliations
[1] Southern Univ Sci & Technol, Shenzhen Key Lab Control Theory & Intelligent Syst, Shenzhen 518055, Peoples R China
[2] Southern Univ Sci & Technol, Sch Syst Design & Intelligent Mfg, Shenzhen 518055, Peoples R China
[3] Univ Sydney, Sch Aerosp Mech & Mechatron Engn, Australian Ctr Robot, Sydney, NSW 2004, Australia
[4] Zhejiang Univ, Coll Control Sci & Engn, Hangzhou 310027, Peoples R China
[5] Shanghai Univ, Sch Mechatron Engn & Automat, Shanghai 200444, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Games; System identification; Nash equilibrium; Mathematical models; Social networking (online); Probabilistic logic; Organizations; Linear quadratic games; best-response dynamics; network learning; SOCIAL NETWORKS; SYSTEM-IDENTIFICATION; GRAPHICAL GAMES;
DOI
10.1109/TNET.2024.3404509
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
We investigate the capacity of an adversary to learn the underlying interaction network from repeated best-response actions in linear-quadratic games. The adversary strategically perturbs the decisions of a set of action-compromised players and observes the sequential decisions of a set of action-leaked players. The central question is whether such an adversary can fully reconstruct or effectively estimate the underlying interaction structure among the players. To begin with, we establish a series of results that characterize the learnability of the interaction graph from the adversary's perspective by drawing connections between this network learning problem in games and classical system identification theory. Subsequently, taking into account the stability and sparsity constraints inherent in the network interaction structure, we propose a stable and sparse system identification framework for learning the interaction graph from complete player action observations. Moreover, we present a stable and sparse subspace identification framework for learning the interaction graph when only partially observed player actions are available. Finally, we demonstrate the efficacy of the proposed learning frameworks through numerical examples.
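As a rough illustration of the setting described in the abstract, the sketch below simulates synchronous best-response dynamics driven by adversarial perturbations and then recovers the interaction matrix with an ordinary least-squares fit from complete action observations. The dynamics model x(t+1) = G x(t) + B u(t), the names G, B, and u, and the unconstrained least-squares step are illustrative assumptions only; the paper's frameworks additionally enforce stability and sparsity and handle partial observations via subspace identification.

# Minimal sketch (not the paper's exact algorithm): best-response dynamics in a
# linear-quadratic network game, with the interaction matrix estimated by plain
# least squares from complete action observations. Matrix names and the absence
# of stability/sparsity constraints are simplifying assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, m, T = 6, 2, 200                          # players, compromised players, rounds

# Sparse interaction matrix G, rescaled so the dynamics are stable.
G = rng.normal(size=(n, n)) * (rng.random((n, n)) < 0.3)
np.fill_diagonal(G, 0.0)
G *= 0.9 / max(np.abs(np.linalg.eigvals(G)).max(), 1e-9)

B = np.zeros((n, m))
B[:m, :] = np.eye(m)                         # adversarial input enters players 0..m-1

# Best-response dynamics: x(t+1) = G x(t) + B u(t), u being the adversary's probe.
X = np.zeros((n, T + 1))
U = rng.normal(size=(m, T))                  # persistently exciting probing signal
for t in range(T):
    X[:, t + 1] = G @ X[:, t] + B @ U[:, t]

# Least-squares estimate of [G, B] from complete action observations.
Z = np.vstack([X[:, :T], U])                 # regressors: past actions and inputs
Theta = X[:, 1:] @ np.linalg.pinv(Z)
G_hat, B_hat = Theta[:, :n], Theta[:, n:]
print("estimation error:", np.linalg.norm(G_hat - G))

Under these assumptions the estimate is essentially exact because the simulated dynamics are noise-free and fully observed; the subspace-identification case in the paper corresponds to observing only a subset of the rows of X.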
Pages: 3669-3684
Number of pages: 16
Related Papers
50 items in total
  • [1] Network Learning from Best-Response Dynamics in LQ Games
    Chen, Yijun
    Ding, Kemi
    Shi, Guodong
    2023 AMERICAN CONTROL CONFERENCE, ACC, 2023, : 1680 - 1685
  • [2] Best-response dynamics in directed network games 
    Bayer, Peter
    Kozics, Gyorgy
    Szoke, Nora Gabriella
    JOURNAL OF ECONOMIC THEORY, 2023, 213
  • [3] ON BEST-RESPONSE DYNAMICS IN POTENTIAL GAMES
    Swenson, Brian
    Murray, Ryan
    Kar, Soummya
    SIAM JOURNAL ON CONTROL AND OPTIMIZATION, 2018, 56 (04) : 2734 - 2767
  • [4] Best-Response Dynamics for Evolutionary Stochastic Games
    Murali, Divya
    Shaiju, A. J.
    INTERNATIONAL GAME THEORY REVIEW, 2023, 25 (04)
  • [5] Evolutionary games on the lattice: best-response dynamics
    Evilsizor, Stephen
    Lanchier, Nicolas
    ELECTRONIC JOURNAL OF PROBABILITY, 2014, 19
  • [6] Active Learning and Best-Response Dynamics
    Balcan, Maria-Florina
    Berlind, Christopher
    Blum, Avrim
    Cohen, Emma
    Patnaik, Kaushik
    Song, Le
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 27 (NIPS 2014), 2014, 27
  • [7] The Frequency of Convergent Games under Best-Response Dynamics
    Wiese, Samuel C.
    Heinrich, Torsten
    DYNAMIC GAMES AND APPLICATIONS, 2022, 12 (02) : 689 - 700
  • [8] Approximate Best-Response Dynamics in Random Interference Games
    Bistritz, Ilai
    Leshem, Amir
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2018, 63 (06) : 1549 - 1562
  • [9] Best-response potential games
    Voorneveld, M
    ECONOMICS LETTERS, 2000, 66 (03) : 289 - 295
  • [10] Convergence of Approximate Best-Response Dynamics in Interference Games
    Bistritz, Ilai
    Leshem, Amir
    2016 IEEE 55TH CONFERENCE ON DECISION AND CONTROL (CDC), 2016, : 4433 - 4438