An autonomous differential evolution based on reinforcement learning for cooperative countermeasures of unmanned aerial vehicles

Cited by: 0
Authors
Cao, Zijian [1 ]
Xu, Kai [1 ]
Jia, Haowen [1 ]
Fu, Yanfang [1 ]
Foh, Chuan Heng [2 ]
Tian, Feng [3 ]
Affiliations
[1] Xian Technol Univ, Sch Comp Sci & Engn, Xian 710021, Peoples R China
[2] Univ Surrey, Inst Commun Syst ICS, Guildford, England
[3] Kunshan Duke Univ, Kunshan 215316, Peoples R China
Keywords
Differential evolution; Reinforcement learning; Q-learning; Cooperative countermeasures; Unmanned aerial vehicles; PARTICLE SWARM OPTIMIZATION; MUTATION STRATEGIES; ALGORITHM;
DOI
10.1016/j.asoc.2024.112605
Chinese Library Classification (CLC) Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, reinforcement learning has been used to improve differential evolution algorithms owing to its outstanding performance in strategy selection. However, most existing improved algorithms treat the entire population as a single reinforcement learning agent, applying the same decision to all individuals regardless of their different evolutionary states. This neglects the differences among individuals within the population during evolution and reduces the likelihood of individuals evolving in promising directions. Therefore, this paper proposes an Autonomous Differential Evolution (AuDE) algorithm guided by the cumulative performance of individuals. In AuDE, at the individual level, the rate of increase in each individual's cumulative reward is used to guide the selection of an appropriate search strategy, so that every individual accumulates experience from its own evolutionary search process rather than relying on the experience of other individuals or of the population as a whole, which may not match its own characteristics. Additionally, at the global level, a population backtracking method with stagnation detection is proposed, which fully exploits the learned cumulative experience to strengthen the global search capability of the entire population. To verify the effectiveness and advantages of AuDE, 15 functions from CEC2015, 28 functions from CEC2017, and a real-world optimization problem on cooperative countermeasures of unmanned aerial vehicles were used to evaluate its performance against state-of-the-art DE variants. The experimental results indicate that the overall performance of AuDE is superior to that of the compared algorithms.
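The abstract describes two mechanisms: per-individual strategy selection driven by the growth of each individual's cumulative reward, and a stagnation-triggered population backtracking step. This record does not give the paper's exact update rules, so the following is only a minimal illustrative sketch of how such a scheme could be wired into a basic DE loop: each individual keeps its own Q-values over a small pool of mutation strategies, is rewarded by its own fitness improvement, and the population reverts to its best recorded state after a fixed number of stagnant generations. All names and settings here (the sphere test function, STRATEGIES, STALL_LIMIT, the learning-rate constants) are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only (not the paper's code): per-individual Q-learning over
# DE mutation strategies plus best-population backtracking on stagnation.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                                    # toy minimisation objective
    return float(np.sum(x * x))

DIM, NP, F, CR = 10, 20, 0.5, 0.9                 # problem size and DE constants
STRATEGIES = ("rand/1", "best/1", "current-to-best/1")
EPS, ALPHA, GAMMA = 0.1, 0.3, 0.9                 # epsilon-greedy / Q-update settings (assumed)
STALL_LIMIT = 30                                  # stagnant generations before backtracking (assumed)

pop = rng.uniform(-5.0, 5.0, (NP, DIM))
fit = np.array([sphere(x) for x in pop])
Q = np.zeros((NP, len(STRATEGIES)))               # one row of Q-values per individual
best_pop, best_fit = pop.copy(), fit.copy()       # snapshot used by the backtracking step
stall = 0

def mutate(i, s):
    """Produce a mutant vector for individual i with strategy index s."""
    a, b, c = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
    gbest = pop[np.argmin(fit)]
    if s == 0:                                    # DE/rand/1
        return pop[a] + F * (pop[b] - pop[c])
    if s == 1:                                    # DE/best/1
        return gbest + F * (pop[a] - pop[b])
    return pop[i] + F * (gbest - pop[i]) + F * (pop[a] - pop[b])   # DE/current-to-best/1

for gen in range(300):
    improved = False
    for i in range(NP):
        # each individual chooses its own strategy from its own Q-values
        s = int(rng.integers(len(STRATEGIES))) if rng.random() < EPS else int(np.argmax(Q[i]))
        v = mutate(i, s)
        mask = rng.random(DIM) < CR               # binomial crossover
        mask[rng.integers(DIM)] = True
        u = np.where(mask, v, pop[i])
        fu = sphere(u)
        reward = max(float(fit[i] - fu), 0.0)     # reward = individual's own improvement
        Q[i, s] += ALPHA * (reward + GAMMA * np.max(Q[i]) - Q[i, s])
        if fu < fit[i]:                           # greedy DE selection
            pop[i], fit[i] = u, fu
            improved = True
    if fit.min() < best_fit.min():                # keep the best population seen so far
        best_pop, best_fit = pop.copy(), fit.copy()
    stall = 0 if improved else stall + 1
    if stall >= STALL_LIMIT:                      # stagnation detected: backtrack
        pop, fit = best_pop.copy(), best_fit.copy()
        stall = 0

print("best fitness:", fit.min())
```

The stateless Q-update and the simple improvement-based reward are deliberate simplifications; AuDE's actual reward definition, stagnation criterion, and backtracking rule are specified in the paper itself.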
Pages: 27
Related Papers
50 records in total
  • [31] Cooperative Global Path Planning for Multiple Unmanned Aerial Vehicles Based on Improved Fireworks Algorithm Using Differential Evolution Operation
    Zhang, Xiangyin
    Zhang, Xiangsen
    Miao, Yang
    INTERNATIONAL JOURNAL OF AERONAUTICAL AND SPACE SCIENCES, 2023, 24 (05) : 1346 - 1362
  • [32] Autonomous cooperative wall building by a team of Unmanned Aerial Vehicles in the MBZIRC 2020 competition
    Baca, Tomas
    Penicka, Robert
    Stepan, Petr
    Petrlik, Matej
    Spurny, Vojtech
    Hert, Daniel
    Saska, Martin
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2023, 167
  • [33] Autonomous Cooperative Guidance Strategies for Unmanned Aerial Vehicles During On-Board Emergency
    Suresh, Manickam
    Swar, Sufal Chandra
    Shyam, Srinivasan
    JOURNAL OF AEROSPACE INFORMATION SYSTEMS, 2022, 20 (02) : 102 - 113
  • [34] Experiment Platform Design of Unmanned Aerial Vehicles and Research on Cooperative Autonomous Search Approach
    Luo, Huadong
    Ling, Haifeng
    Zhang, Guangna
    Zhu, Tao
    2018 10TH INTERNATIONAL CONFERENCE ON COMMUNICATIONS, CIRCUITS AND SYSTEMS (ICCCAS 2018), 2018, : 237 - 240
  • [35] Deep Reinforcement Learning Based Computation Offloading in Heterogeneous MEC Assisted by Ground Vehicles and Unmanned Aerial Vehicles
    He, Hang
    Ren, Tao
    Cui, Meng
    Liu, Dong
    Niu, Jianwei
    WIRELESS ALGORITHMS, SYSTEMS, AND APPLICATIONS, PT III, 2022, 13473 : 481 - 494
  • [36] Autonomous target following by unmanned aerial vehicles
    Rafi, Fahd
    Khan, Saad
    Shafiq, Khurram
    Shah, Mubarak
    UNMANNED SYSTEMS TECHNOLOGY VIII, PTS 1 AND 2, 2006, 6230
  • [37] Towards Using Reinforcement Learning for Autonomous Docking of Unmanned Surface Vehicles
    Holen, Martin
    Ruud, Else-Line Malene
    Warakagoda, Narada Dilp
    Goodwin, Morten
    Engelstad, Paal
    Knausgard, Kristian Muri
    ENGINEERING APPLICATIONS OF NEURAL NETWORKS, EAAAI/EANN 2022, 2022, 1600 : 461 - 474
  • [38] Collaborative Path Planning based on MAXQ Hierarchical Reinforcement Learning for Manned/Unmanned Aerial Vehicles
    Yan, Yongjie
    Wang, Hongjie
    Chen, Xianfeng
    PROCEEDINGS OF THE 39TH CHINESE CONTROL CONFERENCE, 2020, : 4837 - 4842
  • [39] Online Learning-based Robust Visual Tracking for Autonomous Landing of Unmanned Aerial Vehicles
    Fu, Changhong
    Carrio, Adrian
    Olivares-Mendez, Miguel A.
    Campoy, Pascual
    2014 INTERNATIONAL CONFERENCE ON UNMANNED AIRCRAFT SYSTEMS (ICUAS), 2014, : 649 - 655
  • [40] Reinforcement Learning Based Assistive Collision Avoidance for Fixed-Wing Unmanned Aerial Vehicles
    d'Apolito, Francesco
    Sulzbachner, Christoph
    2023 IEEE/AIAA 42ND DIGITAL AVIONICS SYSTEMS CONFERENCE, DASC, 2023