No-regret learning for repeated non-cooperative games with lossy bandits

Times Cited: 2
Authors
Liu, Wenting [1 ]
Lei, Jinlong [1 ,2 ]
Yi, Peng [1 ,2 ]
Hong, Yiguang [1 ,2 ]
Affiliations
[1] Tongji Univ, Coll Elect & Informat Engn, Dept Control Sci & Engn, Shanghai 201804, Peoples R China
[2] Tongji Univ, Shanghai Res Inst Intelligent Autonomous Syst, Shanghai 201210, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Online learning; No-regret learning; Repeated games; Lossy bandits; NASH EQUILIBRIUM SEEKING; ONLINE CONVEX-OPTIMIZATION; MIRROR DESCENT; INFORMATION; ALGORITHMS;
DOI
10.1016/j.automatica.2023.111455
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Discipline Code
0812;
Abstract
This paper considers no-regret learning for repeated continuous-kernel games with lossy bandit feedback. Since it is difficult to give an explicit model of the utility functions in dynamic environments, the players can only learn their actions from bandit feedback. Moreover, due to unreliable communication channels or privacy protection, the bandit feedback may be lost or dropped at random. Therefore, we study an asynchronous online learning strategy by which the players adaptively adjust their next actions to minimize the long-term regret. The paper provides a novel no-regret learning algorithm, called Online Gradient Descent with lossy bandits (OGD-lb). We first give the regret analysis for concave games with differentiable and Lipschitz utilities. Then we show that the action profile converges to a Nash equilibrium with probability 1 when the game is also strictly monotone. We further provide the $O(\sqrt{N}\, p_i^{-2} k^{-1/3})$ mean-squared convergence rate when the game is β-strongly monotone, where N denotes the number of players and p_i is the update probability. In addition, we extend the algorithm to the case when the loss probability of the bandit feedback is unknown, and prove its almost sure convergence to a Nash equilibrium for strictly monotone games. Finally, we take resource management in fog computing as an application example, and carry out numerical experiments to empirically demonstrate the algorithm's performance. © 2023 Elsevier Ltd. All rights reserved.
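For intuition, below is a minimal Python sketch of the mechanism the abstract describes: each player perturbs its action to build a standard one-point bandit gradient estimate, and simply skips its update in rounds where the feedback is randomly dropped. The toy quadratic game, the step-size and exploration schedules, the ball-shaped action set, and all names are illustrative assumptions, not the paper's exact OGD-lb construction.

import numpy as np

rng = np.random.default_rng(0)

N, d = 3, 2            # number of players, action dimension (toy values)
T = 20000              # number of rounds
p = np.full(N, 0.8)    # assumed per-player probability that feedback arrives

def cost(i, x):
    # Toy strongly monotone quadratic game (illustrative, not from the paper);
    # its unique Nash equilibrium is x_i = 0 for every player.
    return 0.5 * x[i] @ x[i] + x[i] @ x.mean(axis=0)

def project(v, radius=1.0):
    # Euclidean projection onto a ball, standing in for the feasible action set
    n = np.linalg.norm(v)
    return v if n <= radius else v * (radius / n)

x = rng.normal(size=(N, d)) * 0.1
for k in range(1, T + 1):
    eta = k ** (-0.75)     # step size (heuristic schedule)
    delta = k ** (-0.25)   # exploration radius (heuristic schedule)
    u = rng.normal(size=(N, d))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # unit perturbation directions
    x_play = np.array([project(x[i] + delta * u[i]) for i in range(N)])
    for i in range(N):
        if rng.random() < p[i]:                       # did the bandit feedback arrive?
            g = (d / delta) * cost(i, x_play) * u[i]  # one-point gradient estimate
            x[i] = project(x[i] - eta * g)            # projected gradient step
        # otherwise the player skips this round's update (lossy feedback)

print("approximate equilibrium actions:\n", x)

In this sketch the skip-on-loss rule is what makes the players' updates asynchronous; the paper's extension to an unknown loss probability would replace the fixed p with an online estimate.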
Pages: 13
Related Papers
50 related records in total (first 10 shown)
  • [1] No-regret learning for repeated concave games with lossy bandits
    Liu, Wenting
    Lei, Jinlong
    Yi, Peng
    2021 60TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2021: 936-941
  • [2] No-Regret Learning in Bayesian Games
    Hartline, Jason
    Syrgkanis, Vasilis
    Tardos, Eva
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 28 (NIPS 2015), 2015, 28
  • [3] Limits and limitations of no-regret learning in games
    Monnot, Barnabe
    Piliouras, Georgios
    KNOWLEDGE ENGINEERING REVIEW, 2017, 32
  • [4] No-Regret Learning in Dynamic Stackelberg Games
    Lauffer, Niklas
    Ghasemi, Mahsa
    Hashemi, Abolfazl
    Savas, Yagiz
    Topcu, Ufuk
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2024, 69(03): 1418-1431
  • [5] Doubly Optimal No-Regret Learning in Monotone Games
    Cai, Yang
    Zheng, Weiqiang
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202, 2023, 202
  • [6] No-Regret Learning in Unknown Games with Correlated Payoffs
    Sessa, Pier Giuseppe
    Bogunovic, Ilija
    Kamgarpour, Maryam
    Krause, Andreas
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [7] Near-Optimal No-Regret Learning in General Games
    Daskalakis, Constantinos
    Fishelson, Maxwell
    Golowich, Noah
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [8] Optimal No-Regret Learning in Repeated First-Price Auctions
    Han, Yanjun
    Weissman, Tsachy
    Zhou, Zhengyuan
    OPERATIONS RESEARCH, 2024
  • [9] Memory-Constrained No-Regret Learning in Adversarial Multi-Armed Bandits
    Xu, Xiao
    Zhao, Qing
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2021, 69: 2371-2382
  • [10] Risk-Averse No-Regret Learning in Online Convex Games
    Wang, Zifan
    Shen, Yi
    Zavlanos, Michael M.
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022