Robustness and Sample Complexity of Model-Based MARL for General-Sum Markov Games

Cited by: 0
Authors
Jayakumar Subramanian
Amit Sinha
Aditya Mahajan
Institutions
[1] Adobe Inc.,Media and Data Science Research Lab, Digital Experience Cloud
[2] McGill University,Department of Electrical and Computer Engineering
Abstract
Multi-agent reinforcement learning (MARL) is often modeled using the framework of Markov games (also called stochastic games or dynamic games). Most of the existing literature on MARL concentrates on zero-sum Markov games, and those results do not carry over to general-sum Markov games. It is known that the best-response dynamics in general-sum Markov games are not a contraction. Therefore, different equilibria in general-sum Markov games can have different values. Moreover, the Q-function is not sufficient to completely characterize the equilibrium. Given these challenges, model-based learning is an attractive approach for MARL in general-sum Markov games. In this paper, we investigate the fundamental question of sample complexity for model-based MARL algorithms in general-sum Markov games. We show two results. We first use Hoeffding inequality-based bounds to show that $\tilde{\mathcal{O}}\bigl((1-\gamma)^{-4}\alpha^{-2}\bigr)$ samples per state–action pair are sufficient to obtain an $\alpha$-approximate Markov perfect equilibrium with high probability, where $\gamma$ is the discount factor and the $\tilde{\mathcal{O}}(\cdot)$ notation hides logarithmic terms. We then use Bernstein inequality-based bounds to show that $\tilde{\mathcal{O}}\bigl((1-\gamma)^{-1}\alpha^{-2}\bigr)$ samples are sufficient. To obtain these results, we study the robustness of Markov perfect equilibrium to model approximations. We show that the Markov perfect equilibrium of an approximate (or perturbed) game is always an approximate Markov perfect equilibrium of the original game and provide explicit bounds on the approximation error. We illustrate the results via a numerical example.
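As a rough illustration of the quantities in the abstract, the sketch below builds an empirical transition model from generative-model samples on a toy game. The constant `c` in the sample-count formula, the toy 3-state/2-action model, and the helper names are assumptions for illustration (the paper's bound also hides logarithmic factors), and the final step of solving the empirical game for a Markov perfect equilibrium is omitted.

```python
import numpy as np

def hoeffding_samples(gamma, alpha, c=1.0):
    """Samples per state-action pair suggested by the Hoeffding-style bound
    O((1-gamma)^-4 * alpha^-2); the constant c stands in for the hidden
    constants and logarithmic factors."""
    return int(np.ceil(c * (1.0 - gamma) ** -4 * alpha ** -2))

def empirical_model(rng, P_true, n):
    """Draw n next-state samples per (s, a) from a generative model and
    return the empirical transition kernel P_hat(s'|s, a). By the paper's
    robustness result, a Markov perfect equilibrium of the game with P_hat
    is an approximate Markov perfect equilibrium of the true game."""
    S, A, _ = P_true.shape
    P_hat = np.zeros_like(P_true)
    for s in range(S):
        for a in range(A):
            samples = rng.choice(S, size=n, p=P_true[s, a])
            P_hat[s, a] = np.bincount(samples, minlength=S) / n
    return P_hat

rng = np.random.default_rng(0)
P_true = rng.dirichlet(np.ones(3), size=(3, 2))  # toy 3-state, 2-action kernel
n = hoeffding_samples(gamma=0.9, alpha=0.5)      # roughly 40,000 per (s, a)
P_hat = empirical_model(rng, P_true, n)
print(n, np.max(np.abs(P_hat - P_true)))
```

With these toy parameters, the entrywise estimation error of `P_hat` is already well below the target accuracy; the paper's contribution is turning such model error into an explicit bound on the equilibrium approximation error.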
Pages: 56-88 (32 pages)
Related papers (50 in total)
  • [1] Robustness and Sample Complexity of Model-Based MARL for General-Sum Markov Games
    Subramanian, Jayakumar
    Sinha, Amit
    Mahajan, Aditya
    [J]. DYNAMIC GAMES AND APPLICATIONS, 2023, 13 (01) : 56 - 88
  • [2] Robustness of Markov perfect equilibrium to model approximations in general-sum dynamic games
    Subramanian, Jayakumar
    Sinha, Amit
    Mahajan, Aditya
    [J]. 2021 SEVENTH INDIAN CONTROL CONFERENCE (ICC), 2021, : 189 - 194
  • [3] On the complexity of computing Markov perfect equilibrium in general-sum stochastic games
    Deng, Xiaotie
    Li, Ningyuan
    Mguni, David
    Wang, Jun
    Yang, Yaodong
    [J]. NATIONAL SCIENCE REVIEW, 2023, 10 (01) : 288 - 301
  • [5] PAC Reinforcement Learning Algorithm for General-Sum Markov Games
    Zehfroosh, Ashkan
    Tanner, Herbert G.
    [J]. IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2023, 68 (05) : 2821 - 2831
  • [6] Provably Efficient Reinforcement Learning in Decentralized General-Sum Markov Games
    Mao, Weichao
    Basar, Tamer
    [J]. DYNAMIC GAMES AND APPLICATIONS, 2023, 13 (01) : 165 - 186
  • [8] Sample-Efficient Learning of Stackelberg Equilibria in General-Sum Games
    Bai, Yu
    Jin, Chi
    Wang, Huan
    Xiong, Caiming
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [9] Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity
    Zhang, Kaiqing
    Kakade, Sham M.
    Basar, Tamer
    Yang, Lin F.
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [10] Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity
    Zhang, Kaiqing
    Kakade, Sham M.
    Basar, Tamer
    Yang, Lin F.
    [J]. JOURNAL OF MACHINE LEARNING RESEARCH, 2023, 24