Deep Reinforcement Learning for Nash Equilibrium of Differential Games

Cited: 2
Authors
Li, Zhenyu [1 ,2 ]
Luo, Yazhong [1 ]
Affiliations
[1] Natl Univ Def Technol, Coll Aerosp Sci & Engn, Changsha 410073, Peoples R China
[2] Beijing Inst Tracking & Telecommun Technol, Beijing 100094, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Games; Nash equilibrium; Differential games; Reinforcement learning; Heuristic algorithms; Mathematical models; Artificial neural networks; Deep reinforcement learning (DRL); differential games; spacecraft pursuit-evasion; symplectic policy gradient theorem; ALGORITHM; LEVEL;
DOI
10.1109/TNNLS.2024.3351631
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Nash equilibrium is a significant solution concept representing the optimal strategy in an uncooperative multiagent system. This study presents two deep reinforcement learning (DRL) algorithms for solving the Nash equilibrium of differential games. Both are built upon the distributed distributional deep deterministic policy gradient (D4PG) algorithm, a one-sided learning method, which we extend into a two-sided adversarial learning method. The first is D4PG for games (D4P2G), which applies an adversarial play framework directly on top of D4PG; a simultaneous policy gradient descent (SPGD) method optimizes the policies of the two players with conflicting objectives. The second, and our main contribution, is the distributional deep deterministic symplectic policy gradient (D4SPG) algorithm. It introduces a minimax learning framework that combines the critics of the two players and a symplectic policy gradient adjustment method that yields a better policy gradient. Simulations show that both algorithms converge to the Nash equilibrium in most cases, but D4SPG learns the Nash equilibrium more accurately and efficiently, especially in Hamiltonian games. Moreover, it can handle games with complex dynamics, which is challenging for traditional methods.
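To make the two gradient schemes named in the abstract concrete, the sketch below contrasts plain simultaneous gradient descent with a symplectically adjusted step on a toy zero-sum bilinear game. It is a minimal illustration under our own assumptions, not the paper's D4P2G or D4SPG implementation: the function names and the adjustment weight `lam` are hypothetical, and the adjustment follows the general symplectic gradient adjustment idea (Balduzzi et al., 2018) rather than the paper's specific policy gradient method.

```python
# Minimal sketch (not the authors' code): simultaneous gradient descent vs. a
# symplectic gradient adjustment on the zero-sum bilinear game V(x, y) = x * y.
# Player x minimizes V, player y maximizes V; the Nash equilibrium is (0, 0).
import numpy as np

def game_gradient(x, y):
    """Simultaneous-gradient field xi = (dV/dx, -dV/dy) for V(x, y) = x * y."""
    return np.array([y, -x])

def spgd_step(x, y, lr=0.05):
    """Plain simultaneous gradient descent: both players update at once.
    On this purely rotational game, Euler steps spiral away from (0, 0)."""
    gx, gy = game_gradient(x, y)
    return x - lr * gx, y - lr * gy

def symplectic_step(x, y, lr=0.05, lam=1.0):
    """Step with a symplectic adjustment; `lam` is an illustrative weight,
    not a parameter from the paper. The antisymmetric part A of the game
    Jacobian drives the rotation; adding lam * A^T @ xi tilts the field
    toward the equilibrium instead of around it."""
    xi = game_gradient(x, y)
    # Jacobian of xi here is J = [[0, 1], [-1, 0]]; its antisymmetric part
    # A = (J - J^T) / 2 equals J itself for this game.
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    xi_adj = xi + lam * A.T @ xi
    return x - lr * xi_adj[0], y - lr * xi_adj[1]

if __name__ == "__main__":
    for step_fn, name in [(spgd_step, "SPGD"), (symplectic_step, "symplectic")]:
        x, y = 1.0, 1.0
        for _ in range(200):
            x, y = step_fn(x, y)
        # SPGD drifts outward; the adjusted dynamics contract toward (0, 0).
        print(f"{name}: distance to Nash equilibrium = {np.hypot(x, y):.5f}")
```

On rotational (Hamiltonian) games like this one, simultaneous descent spirals away from the equilibrium; that is exactly the failure mode a symplectic adjustment is designed to damp, which is consistent with the abstract's claim that D4SPG is most advantageous in Hamiltonian games.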
Pages: 1-15
Number of pages: 15