42 records in total
- [1] Independent Natural Policy Gradient Always Converges in Markov Potential Games. International Conference on Artificial Intelligence and Statistics, Vol. 151, 2022.
- [2] Linear Convergence of Independent Natural Policy Gradient in Games With Entropy Regularization. IEEE Control Systems Letters, 2024, 8: 1217-1222.
- [3] Independent Natural Policy Gradient Methods for Potential Games: Finite-time Global Convergence with Entropy Regularization. 2022 IEEE 61st Conference on Decision and Control (CDC), 2022: 2833-2838.
- [4] Independent Policy Gradient for Large-Scale Markov Potential Games: Sharper Rates, Function Approximation, and Game-Agnostic Convergence. International Conference on Machine Learning, Vol. 162, 2022.
- [5] Policy Gradient Play with Networked Agents in Markov Potential Games. Learning for Dynamics and Control Conference, Vol. 211, 2023.
- [6] On the Global Convergence Rates of Decentralized Softmax Gradient Play in Markov Potential Games. Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022.
- [8] Policy Gradient Method for Team Markov Games. Intelligent Data Engineering and Automated Learning (IDEAL 2004), Proceedings, 2004, 3177: 733-739.
- [9] Provable Policy Gradient Methods for Average-Reward Markov Potential Games. International Conference on Artificial Intelligence and Statistics, Vol. 238, 2024.
- [10] Fast Convergence in Semianonymous Potential Games. IEEE Transactions on Control of Network Systems, 2017, 4(2): 246-258.