An autonomous differential evolution based on reinforcement learning for cooperative countermeasures of unmanned aerial vehicles

Cited: 0
Authors
Cao, Zijian [1 ]
Xu, Kai [1 ]
Jia, Haowen [1 ]
Fu, Yanfang [1 ]
Foh, Chuan Heng [2 ]
Tian, Feng [3 ]
Affiliations
[1] Xian Technol Univ, Sch Comp Sci & Engn, Xian 710021, Peoples R China
[2] Univ Surrey, Inst Commun Syst ICS, Guildford, England
[3] Kunshan Duke Univ, Kunshan 215316, Peoples R China
Keywords
Differential evolution; Reinforcement learning; Q-learning; Cooperative countermeasures; Unmanned aerial vehicles; PARTICLE SWARM OPTIMIZATION; MUTATION STRATEGIES; ALGORITHM;
DOI
10.1016/j.asoc.2024.112605
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, reinforcement learning has been used to improve differential evolution algorithms due to its outstanding performance in strategy selection. However, most existing improved algorithms treat the entire population as a single reinforcement learning agent, applying the same decision to individuals regardless of their different evolutionary states. This approach neglects the differences among individuals within the population during evolution, reducing the likelihood of individuals evolving in promising directions. Therefore, this paper proposes an Autonomous Differential Evolution (AuDE) algorithm guided by the cumulative performance of individuals. In AuDE, at the individual level, the rate of increase in each individual's cumulative reward is used to guide the selection of appropriate search strategies. This ensures that all individuals accumulate experience from their own evolutionary search process, rather than relying on the experiences of others or the population, which may not align with their unique characteristics. Additionally, at the global level, a population backtracking method with stagnation detection is proposed. This method fully utilizes the learned cumulative experience information to enhance the global search ability of AuDE, thereby strengthening the search capability of the entire population. To verify the effectiveness and advantages of AuDE, 15 functions from CEC2015, 28 functions from CEC2017, and a real-world optimization problem on cooperative countermeasures of unmanned aerial vehicles were used to evaluate its performance compared with state-of-the-art DE variants. The experimental results indicate that the overall performance of AuDE is superior to other compared algorithms.
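The abstract describes the mechanism only at a high level. The minimal Python sketch below illustrates the core idea of individual-level strategy selection: each population member keeps its own Q-values over a pool of DE mutation strategies and updates them from its own fitness improvement, rather than sharing a single population-level agent. The strategy pool, reward definition, and all hyper-parameters (F, CR, alpha, gamma, eps) are illustrative assumptions, not the authors' settings, and the population backtracking method with stagnation detection mentioned in the abstract is omitted.

# Minimal sketch (not the authors' code): per-individual Q-learning over DE
# mutation strategies. Strategy set, reward form, and hyper-parameters are
# illustrative assumptions, not taken from the paper.
import numpy as np

STRATEGIES = ["rand/1", "best/1", "current-to-best/1"]   # assumed strategy pool

def mutate(pop, i, best, F, strategy, rng):
    # Classic DE mutation operators; donor indices exclude the target i.
    idx = rng.choice([j for j in range(len(pop)) if j != i], size=3, replace=False)
    r1, r2, r3 = pop[idx[0]], pop[idx[1]], pop[idx[2]]
    if strategy == "rand/1":
        return r1 + F * (r2 - r3)
    if strategy == "best/1":
        return best + F * (r1 - r2)
    return pop[i] + F * (best - pop[i]) + F * (r1 - r2)   # current-to-best/1

def aude_sketch(fobj, dim=10, np_size=30, iters=200, F=0.5, CR=0.9,
                alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-100.0, 100.0, (np_size, dim))
    fit = np.array([fobj(x) for x in pop])
    # One Q-row per individual: every member accumulates its own experience.
    Q = np.zeros((np_size, len(STRATEGIES)))
    for _ in range(iters):
        best = pop[np.argmin(fit)]
        for i in range(np_size):
            # Epsilon-greedy choice from the individual's own Q-values.
            a = rng.integers(len(STRATEGIES)) if rng.random() < eps else int(np.argmax(Q[i]))
            v = mutate(pop, i, best, F, STRATEGIES[a], rng)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True          # keep at least one mutant gene
            trial = np.where(cross, v, pop[i])
            f_trial = fobj(trial)
            # Reward: relative fitness improvement of this individual (assumed form).
            reward = max(0.0, (fit[i] - f_trial) / (abs(fit[i]) + 1e-12))
            Q[i, a] += alpha * (reward + gamma * np.max(Q[i]) - Q[i, a])
            if f_trial < fit[i]:                     # greedy DE selection
                pop[i], fit[i] = trial, f_trial
    return pop[np.argmin(fit)], float(np.min(fit))

# Example usage: minimize the sphere function.
if __name__ == "__main__":
    x_best, f_best = aude_sketch(lambda x: float(np.sum(x * x)))
    print(f_best)

The sketch can be run against any black-box objective; in the paper, the CEC2015 and CEC2017 test functions and the UAV cooperative-countermeasure problem play that role.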
Pages: 27
Related Papers
50 records in total
  • [1] Autonomous Obstacle Avoidance Algorithm for Unmanned Aerial Vehicles Based on Deep Reinforcement Learning
    Gao, Yuan
    Ren, Ling
    Shi, Tianwei
    Xu, Teng
    Ding, Jianbang
    ENGINEERING LETTERS, 2024, 32 (03) : 650 - 660
  • [2] Formation cooperative trajectory tracking control for unmanned aerial vehicles via differential game and reinforcement learning
    Wang, Xiaoheng
    Xiao, Zhihe
    Ren, Ziming
    Dong, Chunzhu
    Tian, Xuan Dan
    TRANSACTIONS OF THE INSTITUTE OF MEASUREMENT AND CONTROL, 2024,
  • [3] Cooperative Landing on Mobile Platform for Multiple Unmanned Aerial Vehicles via Reinforcement Learning
    Xu, Yahao
    Li, Jingtai
    Wu, Bi
    Wu, Junqi
    Deng, Hongbin
    Hui, David
    JOURNAL OF AEROSPACE ENGINEERING, 2024, 37 (01)
  • [4] Automated Enemy Avoidance of Unmanned Aerial Vehicles Based on Reinforcement Learning
    Cheng, Qiao
    Wang, Xiangke
    Yang, Jian
    Shen, Lincheng
    APPLIED SCIENCES-BASEL, 2019, 9 (04):
  • [5] Vision-Based Autonomous Landing for Unmanned Aerial and Ground Vehicles Cooperative Systems
    Niu, Guanchong
    Yang, Qingkai
    Gao, Yunfan
    Pun, Man-On
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (03) : 6234 - 6241
  • [6] Cooperatively pursuing a target unmanned aerial vehicle by multiple unmanned aerial vehicles based on multiagent reinforcement learning
    Wang X.
    Xuan S.
    Ke L.
    Advanced Control for Applications: Engineering and Industrial Systems, 2020, 2 (02):
  • [7] Autonomous Target Detection and Localization Using Cooperative Unmanned Aerial Vehicles
    Yoon, Youngrock
    Gruber, Scott
    Krakow, Lucas
    Pack, Daniel
    OPTIMIZATION AND COOPERATIVE CONTROL STRATEGIES, 2009, 381 : 195 - 205
  • [8] A Survey of Cyberattack Countermeasures for Unmanned Aerial Vehicles
    Kong, Peng-Yong
    IEEE ACCESS, 2021, 9 : 148244 - 148263
  • [9] Deep Reinforcement Learning for Mapless Navigation of Unmanned Aerial Vehicles
    Grando, Ricardo B.
    de Jesus, Junior C.
    Drews-Jr, Paulo L. J.
    2020 XVIII LATIN AMERICAN ROBOTICS SYMPOSIUM, 2020 XII BRAZILIAN SYMPOSIUM ON ROBOTICS AND 2020 XI WORKSHOP OF ROBOTICS IN EDUCATION (LARS-SBR-WRE 2020), 2020, : 335 - 340
  • [10] Cooperative encirclement method for multiple unmanned ground vehicles based on reinforcement learning
    Su M.
    Wang Y.
    Pu R.
    Yu M.
    Gongcheng Kexue Xuebao/Chinese Journal of Engineering, 2024, 46 (07): : 1237 - 1250