On the convergence of policy gradient methods to Nash equilibria in general stochastic games

Cited: 0
Authors
Giannou, Angeliki [1 ]
Lotidis, Kyriakos [2 ]
Mertikopoulos, Panayotis [3 ,4 ]
Vlatakis-Gkaragkounis, Emmanouil V. [5 ]
Affiliations
[1] University of Wisconsin-Madison, Madison, WI 53706, USA
[2] Stanford University, Stanford, CA, USA
[3] Univ. Grenoble Alpes, CNRS, Inria, Grenoble INP, LIG, F-38000 Grenoble, France
[4] Criteo AI Lab, Paris, France
[5] University of California, Berkeley, CA, USA
Source
Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022
Keywords
LEVEL
DOI
Not available
CLC Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Learning in stochastic games is a notoriously difficult problem because, in addition to each other's strategic decisions, the players must also contend with the fact that the game itself evolves over time, possibly in a very complicated manner. Because of this, the convergence properties of popular learning algorithms - like policy gradient and its variants - are poorly understood, except in specific classes of games (such as potential or two-player, zero-sum games). In view of this, we examine the long-run behavior of policy gradient methods with respect to Nash equilibrium policies that are second-order stationary (SOS) in a sense similar to the type of sufficiency conditions used in optimization. Our first result is that SOS policies are locally attracting with high probability, and we show that policy gradient trajectories with gradient estimates provided by the REINFORCE algorithm achieve an O(1/√n) distance-squared convergence rate if the method's step-size is chosen appropriately. Subsequently, specializing to the class of deterministic Nash policies, we show that this rate can be improved dramatically and, in fact, policy gradient methods converge within a finite number of iterations in that case.
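As a rough illustration of the method the abstract analyzes (and not the paper's own implementation), the sketch below runs tabular softmax policy gradient with plain REINFORCE return estimates on a toy single-agent MDP standing in for the stochastic-game setting; the two-state dynamics, reward table, and the 1/n step-size schedule are all hypothetical choices made for the example.

```python
import numpy as np

# Rough sketch of policy gradient with REINFORCE estimates on a toy,
# HYPOTHETICAL 2-state / 2-action MDP (single-agent stand-in for the
# stochastic-game setting; dynamics and rewards are made up for the demo).

rng = np.random.default_rng(0)
n_states, n_actions, horizon = 2, 2, 10

# Hypothetical transition kernel P[s, a] -> distribution over next states,
# and reward table R[s, a].
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.3, 0.7], [0.6, 0.4]]])
R = np.array([[1.0, 0.0],
              [0.0, 1.0]])

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

theta = np.zeros((n_states, n_actions))  # logits of a tabular softmax policy

for n in range(1, 5001):
    # Roll out one episode under the current policy.
    s, traj, ret = 0, [], 0.0
    for _ in range(horizon):
        probs = softmax(theta[s])
        a = rng.choice(n_actions, p=probs)
        traj.append((s, a))
        ret += R[s, a]
        s = rng.choice(n_states, p=P[s, a])

    # REINFORCE estimate: sum of grad log pi(a|s) weighted by the episode return.
    grad = np.zeros_like(theta)
    for s, a in traj:
        probs = softmax(theta[s])
        grad_log = -probs          # d log softmax / d theta for every action ...
        grad_log[a] += 1.0         # ... plus the indicator of the chosen action
        grad[s] += ret * grad_log

    # Decreasing step-size; the paper's rate holds for appropriately chosen
    # schedules, and gamma_n = 1/n here is just one illustrative choice.
    theta += (1.0 / n) * grad

print("state 0 policy:", np.round(softmax(theta[0]), 3))
print("state 1 policy:", np.round(softmax(theta[1]), 3))
```

The episode-return weighting above is the plain REINFORCE estimator; a baseline or per-step returns would reduce its variance, but the point of the sketch is only the shape of the stochastic update theta_{n+1} = theta_n + gamma_n * g_hat_n that the convergence analysis concerns.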
Pages: 14