Intelligent decision making and target assignment of multi-aircraft air combat based on the LSTM–PPO algorithm

Cited: 0
Authors
Ding Y. [1 ]
Kuang M. [1 ]
Zhu J. [2 ]
Zhu J. [2 ]
Qiao Z. [2 ]
Affiliations
[1] Xinjiang University, College of Computer Science and Technology, Urumqi
[2] Tsinghua University, Department of Precision Instrument, Beijing
Keywords
dynamic target assignment; intelligent decision; multi-aircraft air combat; proximal policy optimization; threat assessment;
DOI
10.13374/j.issn2095-9389.2023.10.13.003
Abstract
With the rapid development of intelligent, information-based air battlefields, intelligent air combat has increasingly become key to deciding the outcome of an engagement. Conventional multi-aircraft air combat suffers from inefficient decision-making, difficulty in meeting the demands of complex air-combat environments, and unreasonable target allocation. To address these problems, we introduce a long short-term memory–proximal policy optimization (LSTM–PPO) algorithm. A long short-term memory network extracts features from the state and perceives the situation; the agent trains a residual policy network and a value network on the normalized, feature-fused state information, selects the optimal action for the current situation through the proximal policy optimization strategy, and embeds a reward function containing expert knowledge during training to overcome sparse rewards. Meanwhile, a target allocation algorithm based on threat-value calculation is presented. Using angle, speed, and height threat values as the basis for allocation, the ID of the highest-threat target aircraft on the battlefield is computed in real time, and target allocation is performed whenever the policy network outputs an attack action. To verify the effectiveness of the algorithm, we carried out 4v4 multi-aircraft air combat experiments in a digital twin simulation environment built by our research group. The red team consists of reinforcement learning agents based on the LSTM–PPO algorithm, whereas the blue team is a finite state machine built on an expert knowledge base. After more than 1200 rounds of aerial confrontation, the algorithm converged and the red team's win rate reached 82%. Furthermore, we assessed four other mainstream reinforcement learning algorithms in 4v4 air combat experiments under the same conditions. The results show that the deep Q-network (DQN) and soft actor-critic (SAC) algorithms struggle with high-dimensional continuous action spaces and multi-agent collaboration. The multi-agent deep deterministic policy gradient (MADDPG) algorithm employs a multi-agent strategy and cooperative training, so its win rate is significantly higher than that of DQN and SAC. The multi-agent proximal policy optimization (MAPPO) algorithm has a relatively high failure rate and, in some cases, is not stable enough against the enemy aircraft's strategies. The LSTM–PPO algorithm achieves a significantly higher win rate than the other mainstream reinforcement learning algorithms in multi-aircraft collaborative air combat, confirming its effectiveness in handling high-dimensional continuous action spaces and multi-aircraft collaborative operations. © 2024 Science Press. All rights reserved.
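
The pipeline described in the abstract (LSTM situation-feature extraction, residual feature fusion, separate policy and value heads trained with the PPO clipped objective) can be illustrated with a minimal sketch. The PyTorch framework, layer sizes, discrete action head, and placement of the residual connection below are assumptions for illustration, not the authors' reported architecture; only the PPO clipped surrogate is the standard published form.

```python
# Minimal LSTM-PPO actor-critic sketch (PyTorch assumed; dimensions and the
# residual block are illustrative, not the paper's exact architecture).
import torch
import torch.nn as nn


class LSTMPPONet(nn.Module):
    def __init__(self, state_dim=32, hidden_dim=128, action_dim=8):
        super().__init__()
        # LSTM extracts temporal features from the normalized state sequence
        self.lstm = nn.LSTM(state_dim, hidden_dim, batch_first=True)
        # Residual block fuses the situation features
        self.fuse = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        self.actor = nn.Linear(hidden_dim, action_dim)  # policy head
        self.critic = nn.Linear(hidden_dim, 1)          # value head

    def forward(self, states, hidden=None):
        # states: (batch, seq_len, state_dim), already normalized
        feats, hidden = self.lstm(states, hidden)
        feat = feats[:, -1]              # situation feature at the last step
        feat = feat + self.fuse(feat)    # residual feature fusion
        return self.actor(feat), self.critic(feat), hidden


def ppo_clip_loss(logp_new, logp_old, advantage, eps=0.2):
    # Standard PPO clipped surrogate objective (returned as a loss to minimize)
    ratio = torch.exp(logp_new - logp_old)
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    return -torch.min(ratio * advantage, clipped * advantage).mean()
```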
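The threat-based target assignment can likewise be sketched. The specific angle, speed, and height threat formulas and the weights below are common forms from the air-combat literature, assumed here for illustration; the paper defines its own threat-value calculations.

```python
# Hedged sketch of threat-based target assignment: combine angle, speed, and
# height threat values and return the ID of the highest-threat enemy aircraft.
# Formulas and weights are illustrative assumptions, not the paper's exact
# definitions.
import math


def _norm(v):
    return math.sqrt(sum(x * x for x in v)) + 1e-9


def angle_threat(own, enemy):
    # Higher when the enemy's velocity points along the line of sight to us:
    # nose-on -> 1, tail-on -> 0
    los = [o - e for o, e in zip(own["pos"], enemy["pos"])]
    cos = sum(v * l for v, l in zip(enemy["vel"], los)) / (
        _norm(enemy["vel"]) * _norm(los))
    return (1.0 + cos) / 2.0


def speed_threat(own, enemy):
    # Faster enemies are more threatening (clipped speed ratio)
    return min(_norm(enemy["vel"]) / _norm(own["vel"]), 1.0)


def height_threat(own, enemy):
    # An enemy above us holds an energy advantage (logistic in altitude gap)
    dh = enemy["pos"][2] - own["pos"][2]
    return 1.0 / (1.0 + math.exp(-dh / 1000.0))


def assign_target(own, enemies, w=(0.4, 0.3, 0.3)):
    """Return the ID of the highest-threat enemy; invoked whenever the
    policy network outputs an attack action."""
    def total(e):
        return (w[0] * angle_threat(own, e)
                + w[1] * speed_threat(own, e)
                + w[2] * height_threat(own, e))
    return max(enemies, key=total)["id"]
```

Because the highest-threat ID is recomputed from the live battlefield state on every call, the assignment stays dynamic: the attacked target can change between consecutive attack actions as the engagement geometry evolves.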
Pages: 1179–1186
Page count: 7