On the convergence of policy gradient methods to Nash equilibria in general stochastic games

Cited by: 0
Authors
Giannou, Angeliki [1 ]
Lotidis, Kyriakos [2 ]
Mertikopoulos, Panayotis [3 ,4 ]
Vlatakis-Gkaragkounis, Emmanouil V. [5 ]
Affiliations
[1] Univ Wisconsin Madison, Madison, WI 53706 USA
[2] Stanford Univ, Stanford, CA USA
[3] Univ Grenoble Alpes, CNRS, INRIA, Grenoble INP,LIG, F-38000 Grenoble, France
[4] Criteo AI Lab, Paris, France
[5] Univ Calif Berkeley, Berkeley, CA USA
Keywords
LEVEL
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Learning in stochastic games is a notoriously difficult problem because, in addition to each other's strategic decisions, the players must also contend with the fact that the game itself evolves over time, possibly in a very complicated manner. Because of this, the convergence properties of popular learning algorithms - like policy gradient and its variants - are poorly understood, except in specific classes of games (such as potential games or two-player, zero-sum games). In view of this, we examine the long-run behavior of policy gradient methods with respect to Nash equilibrium policies that are second-order stationary (SOS), in a sense similar to the sufficiency conditions used in optimization. Our first result is that SOS policies are locally attracting with high probability, and we show that policy gradient trajectories with gradient estimates provided by the REINFORCE algorithm achieve an O(1/√n) distance-squared convergence rate if the method's step-size is chosen appropriately. Subsequently, specializing to the class of deterministic Nash policies, we show that this rate can be improved dramatically and, in fact, policy gradient methods converge within a finite number of iterations in that case.
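To make the setup concrete, the following is a minimal, hedged sketch of the kind of method the abstract discusses: a single agent running policy gradient with REINFORCE (score-function) gradient estimates and a decreasing step-size γ_n = 1/n. It is purely illustrative and not the authors' algorithm or analysis; the two-state, two-action transition table `P`, reward table `R`, horizon, and all hyperparameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy environment: 2 states, 2 actions (all tables invented).
n_states, n_actions = 2, 2
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.7, 0.3]]])  # P[s, a] = next-state distribution
R = np.array([[1.0, 0.0], [0.0, 1.0]])    # R[s, a] = instantaneous reward

def softmax(z):
    """Numerically stable softmax over action logits."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def reinforce_estimate(theta, horizon=20):
    """Sample one trajectory; return the REINFORCE gradient estimate and return."""
    s = 0
    traj = []
    total_return = 0.0
    for _ in range(horizon):
        probs = softmax(theta[s])
        a = rng.choice(n_actions, p=probs)
        traj.append((s, a))
        total_return += R[s, a]
        s = rng.choice(n_states, p=P[s, a])
    # Score-function estimator: return times the sum of score vectors
    # grad log pi(a|s) = e_a - softmax(theta[s]) for a softmax policy.
    grad = np.zeros_like(theta)
    for (s, a) in traj:
        score = -softmax(theta[s])
        score[a] += 1.0
        grad[s] += total_return * score
    return grad, total_return

theta = np.zeros((n_states, n_actions))  # softmax policy parameters
returns = []
for n in range(1, 2001):
    gamma_n = 1.0 / n                    # decreasing step-size schedule
    g, ret = reinforce_estimate(theta)
    theta += gamma_n * g                 # stochastic policy gradient ascent
    returns.append(ret)
```

The 1/n schedule here is one standard choice of square-summable step-size in stochastic approximation; the paper's rate statements concern the distance to SOS equilibrium policies under an appropriately tuned schedule, which this toy loop does not reproduce.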
Pages: 14