On the convergence of policy gradient methods to Nash equilibria in general stochastic games

Citations: 0
Authors
Giannou, Angeliki [1 ]
Lotidis, Kyriakos [2 ]
Mertikopoulos, Panayotis [3 ,4 ]
Vlatakis-Gkaragkounis, Emmanouil V. [5 ]
Affiliations
[1] Univ Wisconsin Madison, Madison, WI 53706 USA
[2] Stanford Univ, Stanford, CA USA
[3] Univ Grenoble Alpes, CNRS, INRIA, Grenoble INP,LIG, F-38000 Grenoble, France
[4] Criteo AI Lab, Paris, France
[5] Univ Calif Berkeley, Berkeley, CA USA
Keywords
LEVEL;
DOI
Not available
CLC number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning in stochastic games is a notoriously difficult problem because, in addition to each other's strategic decisions, the players must also contend with the fact that the game itself evolves over time, possibly in a very complicated manner. Because of this, the convergence properties of popular learning algorithms - like policy gradient and its variants - are poorly understood, except in specific classes of games (such as potential or two-player, zero-sum games). In view of this, we examine the long-run behavior of policy gradient methods with respect to Nash equilibrium policies that are second-order stationary (SOS) in a sense similar to the type of sufficiency conditions used in optimization. Our first result is that SOS policies are locally attracting with high probability, and we show that policy gradient trajectories with gradient estimates provided by the REINFORCE algorithm achieve an O(1/√n) distance-squared convergence rate if the method's step-size is chosen appropriately. Subsequently, specializing to the class of deterministic Nash policies, we show that this rate can be improved dramatically and, in fact, policy gradient methods converge within a finite number of iterations in that case.
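The abstract refers to policy gradient ascent driven by REINFORCE gradient estimates with a decreasing step-size. As an informal illustration only (not the paper's algorithm or setting), the sketch below runs REINFORCE with a softmax policy and a 1/√n step-size on a toy single-state, single-player problem with two actions; the reward values and step-size schedule are assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-state problem (a stylized stand-in for a stochastic game):
# two actions with mean rewards 0.2 and 0.8, so action 1 is optimal.
MEAN_REWARD = np.array([0.2, 0.8])

def softmax(theta):
    z = np.exp(theta - theta.max())  # shift for numerical stability
    return z / z.sum()

theta = np.zeros(2)  # policy parameters
for n in range(1, 5001):
    p = softmax(theta)
    a = rng.choice(2, p=p)                            # sample an action
    r = MEAN_REWARD[a] + 0.1 * rng.standard_normal()  # noisy reward signal
    # REINFORCE estimate: r * grad log pi(a); for softmax policies,
    # grad_theta log pi(a) = e_a - p.
    grad = r * (np.eye(2)[a] - p)
    theta += (1.0 / np.sqrt(n)) * grad                # decreasing step-size

final_policy = softmax(theta)
print(final_policy)  # probability mass concentrates on the better action
```

The stochastic-game analysis in the paper concerns multi-agent trajectories and SOS equilibria; this fragment only shows the mechanics of the REINFORCE estimator and a vanishing step-size schedule.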
Pages: 14