Shaping Large Population Agent Behaviors Through Entropy-Regularized Mean-Field Games

Cited: 0
Authors
Guan, Yue [1 ]
Zhou, Mi [2 ]
Pakniyat, Ali [3 ]
Tsiotras, Panagiotis [1 ]
Affiliations
[1] Georgia Inst Technol, Sch Aerosp Engn, Atlanta, GA 30332 USA
[2] Georgia Inst Technol, Sch Elect & Comp Engn, Atlanta, GA 30332 USA
[3] Univ Alabama, Dept Mech Engn, Tuscaloosa, AL USA
Keywords
DOI
Not available
CLC Classification
TP [automation technology, computer technology];
Subject Classification Code
0812;
Abstract
Mean-field games (MFGs) were introduced to efficiently analyze approximate Nash equilibria in large-population settings. In this work, we consider entropy-regularized mean-field games with a finite state-action space in a discrete-time setting. We show that entropy regularization provides the necessary regularity conditions that are lacking in standard finite mean-field games. These regularity conditions enable us to design fixed-point iteration algorithms that find the unique mean-field equilibrium (MFE). Furthermore, the reference policy used in the regularization provides an extra parameter through which one can control the behavior of the population. We first consider a stochastic game with a large population of N homogeneous agents. We establish conditions for the existence of a Nash equilibrium in the limiting case as N tends to infinity, and we demonstrate that the Nash equilibrium of the infinite-population case is also an ε-Nash equilibrium for the N-agent system, where the suboptimality ε is of order O(1/√N). Finally, we verify the theoretical guarantees on a resource allocation example and demonstrate the efficacy of using a reference policy to control the behavior of a large population.
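The fixed-point scheme described in the abstract alternates two steps: (i) given a candidate mean-field flow, compute the entropy-regularized (soft) best response, which is a softmax policy tilted toward the reference policy; (ii) propagate the population distribution forward under that policy, and repeat with damping until the flow stops changing. The Python sketch below only illustrates this idea under assumptions not taken from the paper (a mean-field-independent transition kernel P, a congestion-style reward, temperature tau, and a uniform reference policy pi_ref); it is not the authors' implementation.

import numpy as np

def soft_best_response(P, reward, mu, pi_ref, tau):
    # Backward pass: KL-regularized (soft) best response against a fixed
    # mean-field flow mu of shape (T, S).  The kernel P has shape (S, A, S)
    # and is taken to be mean-field independent only to keep the sketch short.
    T, S = mu.shape
    A = pi_ref.shape[1]
    V = np.zeros(S)                              # terminal value
    pi = np.zeros((T, S, A))
    for t in reversed(range(T)):
        Q = np.array([[reward(t, s, a, mu[t]) + P[s, a] @ V
                       for a in range(A)] for s in range(S)])
        m = Q.max(axis=1, keepdims=True)
        # Soft policy: pi(a|s) proportional to pi_ref(a|s) * exp(Q(s,a) / tau)
        w = pi_ref * np.exp((Q - m) / tau)
        pi[t] = w / w.sum(axis=1, keepdims=True)
        # Soft value: V(s) = tau * log sum_a pi_ref(a|s) * exp(Q(s,a) / tau)
        V = m[:, 0] + tau * np.log(w.sum(axis=1))
    return pi

def propagate(P, pi, mu0):
    # Forward pass: evolve the population distribution under the policy pi.
    T, S, A = pi.shape
    mu = np.zeros((T, S))
    mu[0] = mu0
    for t in range(T - 1):
        mu[t + 1] = np.einsum('s,sa,sap->p', mu[t], pi[t], P)
    return mu

def mfe_fixed_point(P, reward, mu0, pi_ref, tau, T,
                    iters=500, damping=0.5, tol=1e-9):
    # Damped fixed-point iteration on the mean-field flow.
    mu = np.tile(mu0, (T, 1))
    for _ in range(iters):
        pi = soft_best_response(P, reward, mu, pi_ref, tau)
        mu_new = propagate(P, pi, mu0)
        if np.abs(mu_new - mu).max() < tol:
            break
        mu = damping * mu + (1.0 - damping) * mu_new
    return pi, mu

# Toy instance (illustrative only): 3 states, 2 actions, a congestion-style
# reward that penalizes crowding, and a uniform reference policy.
rng = np.random.default_rng(0)
S, A, T = 3, 2, 10
P = rng.dirichlet(np.ones(S), size=(S, A))       # random (S, A, S) kernel
pi_ref = np.full((S, A), 1.0 / A)
reward = lambda t, s, a, mu_t: -mu_t[s]
mu0 = np.full(S, 1.0 / S)
pi, mu = mfe_fixed_point(P, reward, mu0, pi_ref, tau=0.5, T=T)
print(mu[-1])                                    # terminal population distribution

Loosely speaking, a sufficiently strong regularization (large tau) makes the soft best-response map well behaved enough for such an iteration to converge to a unique fixed point, which is the kind of regularity the abstract refers to; the uniform pi_ref used here can be replaced by any reference policy to bias the resulting population behavior.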
Pages: 4429 - 4435
Number of pages: 7
Related Papers
50 records in total
  • [1] Approximately Solving Mean Field Games via Entropy-Regularized Deep Reinforcement Learning
    Cui, Kai
    Koeppl, Heinz
    24TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS (AISTATS), 2021, 130
  • [2] Learning Regularized Monotone Graphon Mean-Field Games
    Zhang, Fengzhuo
    Tan, Vincent Y. F.
    Wang, Zhaoran
    Yang, Zhuoran
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [3] Q-Learning in Regularized Mean-field Games
    Anahtarci, Berkay
    Kariksiz, Can Deha
    Saldi, Naci
    DYNAMIC GAMES AND APPLICATIONS, 2023, 13 (01) : 89 - 117
  • [4] Learning Regularized Graphon Mean-Field Games with Unknown Graphons
    Zhang, Fengzhuo
    Tan, Vincent Y. F.
    Wang, Zhaoran
    Yang, Zhuoran
    JOURNAL OF MACHINE LEARNING RESEARCH, 2024, 25
  • [5] Dynamic Population Games: A Tractable Intersection of Mean-Field Games and Population Games
    Elokda, Ezzat
    Bolognani, Saverio
    Censi, Andrea
    Dorfler, Florian
    Frazzoli, Emilio
    IEEE CONTROL SYSTEMS LETTERS, 2024, 8 : 1072 - 1077
  • [6] Mean-field games with logistic population dynamics
    Gomes, Diogo Aguiar
    Ribeiro, Ricardo de Lima
    2013 IEEE 52ND ANNUAL CONFERENCE ON DECISION AND CONTROL (CDC), 2013, : 2513 - 2518
  • [7] Robust Incentive Stackelberg Games With a Large Population for Stochastic Mean-Field Systems
    Mukaidani, Hiroaki
    Irie, Shunpei
    Xu, Hua
    Zhuang, Weihua
    IEEE CONTROL SYSTEMS LETTERS, 2022, 6 : 1934 - 1939
  • [8] Mean-field approximation for large-population beauty-contest games
    Seraj, Raihan
    Le Ny, Jerome
    Mahajan, Aditya
    2021 60TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2021, : 5233 - 5238
  • [9] Convergence of Large Population Games to Mean Field Games with Interaction Through the Controls
    Lauriere, Mathieu
    Tangpi, Ludovic
    SIAM JOURNAL ON MATHEMATICAL ANALYSIS, 2022, 54 (03) : 3535 - 3574