An autonomous differential evolution based on reinforcement learning for cooperative countermeasures of unmanned aerial vehicles

Times Cited: 0
Authors
Cao, Zijian [1 ]
Xu, Kai [1 ]
Jia, Haowen [1 ]
Fu, Yanfang [1 ]
Foh, Chuan Heng [2 ]
Tian, Feng [3 ]
Affiliations
[1] Xian Technol Univ, Sch Comp Sci & Engn, Xian 710021, Peoples R China
[2] Univ Surrey, Inst Commun Syst ICS, Guildford, England
[3] Kunshan Duke Univ, Kunshan 215316, Peoples R China
Keywords
Differential evolution; Reinforcement learning; Q-learning; Cooperative countermeasures; Unmanned aerial vehicles; PARTICLE SWARM OPTIMIZATION; MUTATION STRATEGIES; ALGORITHM;
DOI
10.1016/j.asoc.2024.112605
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, reinforcement learning has been used to improve differential evolution algorithms owing to its outstanding performance in strategy selection. However, most existing improved algorithms treat the entire population as a single reinforcement learning agent, applying the same decision to all individuals regardless of their different evolutionary states. This neglects the differences among individuals within the population during evolution and reduces the likelihood that individuals evolve in promising directions. Therefore, this paper proposes an Autonomous Differential Evolution (AuDE) algorithm guided by the cumulative performance of individuals. In AuDE, at the individual level, the rate of increase in each individual's cumulative reward is used to guide the selection of an appropriate search strategy. This ensures that every individual accumulates experience from its own evolutionary search process rather than relying on the experience of other individuals or of the population as a whole, which may not match its own characteristics. Additionally, at the global level, a population backtracking method with stagnation detection is proposed; it fully exploits the learned cumulative experience to strengthen the global search capability of the entire population. To verify the effectiveness and advantages of AuDE, 15 functions from CEC2015, 28 functions from CEC2017, and a real-world optimization problem on cooperative countermeasures of unmanned aerial vehicles were used to evaluate its performance against state-of-the-art DE variants. The experimental results indicate that the overall performance of AuDE is superior to that of the compared algorithms.
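To make the per-individual learning idea concrete, below is a minimal Python sketch, an illustration only and not the authors' exact AuDE: each individual keeps its own cumulative-reward table over mutation strategies and selects a strategy epsilon-greedily from that table (a simplification of the abstract's "rate of increase in cumulative reward"); the population backtracking mechanism is omitted. All function names, parameters, and the reward definition are assumptions made for illustration.

# Illustrative sketch of per-individual, reward-guided strategy selection in DE.
# NOT the authors' AuDE: greedy on cumulative reward instead of its increase rate,
# and without the population backtracking / stagnation detection component.
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def rand1(pop, i, F, rng):
    # DE/rand/1 mutation
    a, b, c = rng.choice([j for j in range(len(pop)) if j != i], 3, replace=False)
    return pop[a] + F * (pop[b] - pop[c])

def current_to_best1(pop, i, F, rng, best):
    # DE/current-to-best/1 mutation
    a, b = rng.choice([j for j in range(len(pop)) if j != i], 2, replace=False)
    return pop[i] + F * (best - pop[i]) + F * (pop[a] - pop[b])

def reward_guided_de(fobj=sphere, dim=10, pop_size=30, gens=200,
                     F=0.5, CR=0.9, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, (pop_size, dim))
    fit = np.array([fobj(x) for x in pop])
    reward = np.zeros((pop_size, 2))          # one cumulative-reward row per individual
    for _ in range(gens):
        best = pop[np.argmin(fit)]
        for i in range(pop_size):
            # Epsilon-greedy choice based on the individual's own accumulated reward.
            s = rng.integers(2) if rng.random() < eps else int(np.argmax(reward[i]))
            v = rand1(pop, i, F, rng) if s == 0 else current_to_best1(pop, i, F, rng, best)
            # Binomial crossover with one guaranteed mutant component.
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            u = np.where(mask, v, pop[i])
            fu = fobj(u)
            if fu < fit[i]:
                # Reward the chosen strategy in proportion to the relative improvement.
                reward[i, s] += (fit[i] - fu) / (abs(fit[i]) + 1e-12)
                pop[i], fit[i] = u, fu
    return pop[np.argmin(fit)], fit.min()

if __name__ == "__main__":
    x_best, f_best = reward_guided_de()
    print("best fitness:", f_best)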
Pages: 27
Related Papers (50 in total)
  • [41] Optimal Cooperative Thermalling of Unmanned Aerial Vehicles
    Klesh, Andrew T.
    Kabamba, Pierre T.
    Girard, Anouck R.
    OPTIMIZATION AND COOPERATIVE CONTROL STRATEGIES, 2009, 381 : 355 - 369
  • [42] Cooperative control of a group of unmanned aerial vehicles
    Jia, QL
    Yan, JG
    Wang, XM
    ISTM/2005: 6TH INTERNATIONAL SYMPOSIUM ON TEST AND MEASUREMENT, VOLS 1-9, CONFERENCE PROCEEDINGS, 2005, : 7872 - 7875
  • [43] A Long-Term Target Search Method for Unmanned Aerial Vehicles Based on Reinforcement Learning
    Wei, Dexing
    Zhang, Lun
    Yang, Mei
    Deng, Hanqiang
    Huang, Jian
    DRONES, 2024, 8 (10)
  • [44] Envelope Protection for Autonomous Unmanned Aerial Vehicles
    Yavrucuk, Ilkay
    Prasad, J. V. R.
    Unnikrishnan, Suraj
    JOURNAL OF GUIDANCE CONTROL AND DYNAMICS, 2009, 32 (01) : 248 - 261
  • [45] Autonomous mission management for unmanned aerial vehicles
    Barbier, M
    Chanthery, E
    AEROSPACE SCIENCE AND TECHNOLOGY, 2004, 8 (04) : 359 - 368
  • [46] A Reinforcement Learning-Based Fire Warning and Suppression System Using Unmanned Aerial Vehicles
    Panahi, Fereidoun H.
    Panahi, Farzad H.
    Ohtsuki, Tomoaki
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [47] Research on the Method of Trajectory Planning for Unmanned Aerial Vehicles in Complex Terrains Based on Reinforcement Learning
    Wang, Ruichang
    Hu, Weijun
    Ma, Xianlong
    INTELLIGENT ROBOTICS AND APPLICATIONS, ICIRA 2024, PT II, 2025, 15202 : 287 - 295
  • [48] Autonomous Control of Combat Unmanned Aerial Vehicles to Evade Surface-to-Air Missiles Using Deep Reinforcement Learning
    Lee, Gyeong Taek
    Kim, Chang Ouk
    IEEE ACCESS, 2020, 8 : 226724 - 226736
  • [49] Biologically eagle-eye-based autonomous aerial refueling for unmanned aerial vehicles
    Duan, Haibin
    Zhang, Qifu
    Deng, Yimin
    Zhang, Xiangyin
    Yi Qi Yi Biao Xue Bao/Chinese Journal of Scientific Instrument, 2014, 35 (07): 1450 - 1457
  • [50] Autonomous Navigation for Unmanned Aerial Vehicles Based on Chaotic Bionics Theory
    Yu, Xiao-lei
    Sun, Yong-rong
    Liu, Jian-ye
    Chen, Bing-wen
    JOURNAL OF BIONIC ENGINEERING, 2009, 6 (03) : 270 - 279