Fast Convergence of Regularized Learning in Games

Cited: 0
Authors
Syrgkanis, Vasilis [1 ]
Agarwal, Alekh [1 ]
Luo, Haipeng [2 ]
Schapire, Robert E. [1 ]
Affiliations
[1] Microsoft Res, New York, NY 10011 USA
[2] Princeton Univ, Princeton, NJ 08544 USA
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
We show that natural classes of regularized learning algorithms with a form of recency bias achieve faster convergence rates to approximate efficiency and to coarse correlated equilibria in multiplayer normal-form games. When each player in a game uses an algorithm from our class, their individual regret decays at O(T^{-3/4}), while the sum of utilities converges to an approximate optimum at O(T^{-1}), an improvement upon the worst-case O(T^{-1/2}) rates. We show a black-box reduction for any algorithm in the class to achieve Õ(T^{-1/2}) rates against an adversary, while maintaining the faster rates against algorithms in the class. Our results extend those of Rakhlin and Sridharan [17] and Daskalakis et al. [4], who only analyzed two-player zero-sum games for specific algorithms.
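A well-known example of regularized learning with recency bias is Optimistic Hedge: exponential weights over the cumulative loss plus the most recent loss vector counted once more, as a prediction of the next round. The sketch below is illustrative only (the game, step size `eta`, horizon `T`, and the small initial perturbation are choices made here for the demo, not the paper's experiments):

```python
import numpy as np

def optimistic_hedge(A, T=2000, eta=0.1):
    """Both players play the zero-sum game with payoff matrix A
    (row player minimizes x^T A y, column player maximizes) using
    Optimistic Hedge: exponential weights on cumulative loss PLUS
    the last observed loss counted once more (the recency bias).
    Returns the time-averaged strategies of both players."""
    n, m = A.shape
    # Small asymmetric initialization so the dynamics are not
    # trivially stuck at the symmetric equilibrium from round one.
    Lx, Ly = np.array([1.0, 0.0, -1.0][:n]), np.zeros(m)
    lx, ly = np.zeros(n), np.zeros(m)      # most recent losses
    x_avg, y_avg = np.zeros(n), np.zeros(m)
    for _ in range(T):
        # Softmax of -eta * (cumulative + last loss), max-shifted
        # for numerical stability.
        zx = -eta * (Lx + lx); zx -= zx.max()
        zy = -eta * (Ly + ly); zy -= zy.max()
        x = np.exp(zx); x /= x.sum()
        y = np.exp(zy); y /= y.sum()
        x_avg += x; y_avg += y
        lx = A @ y           # row player's loss vector under y
        ly = -A.T @ x        # column player's loss vector under x
        Lx += lx; Ly += ly
    return x_avg / T, y_avg / T

# Rock-paper-scissors: the unique mixed equilibrium is uniform play.
A = np.array([[ 0.,  1., -1.],
              [-1.,  0.,  1.],
              [ 1., -1.,  0.]])
x_bar, y_bar = optimistic_hedge(A)
```

In this two-player zero-sum example the time-averaged strategies approach the unique equilibrium (uniform play), consistent with the fast-rate behavior the abstract describes; without the `lx`/`ly` recency term the code reduces to plain Hedge.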
Pages: 9