Learning to select the recombination operator for derivative-free optimization

Cited: 0
Authors
Zhang, Haotian [1 ]
Sun, Jianyong [1 ]
Bäck, Thomas [2]
Xu, Zongben [1 ]
Affiliations
[1] Xi An Jiao Tong Univ, Sch Math & Stat, Xian 710049, Peoples R China
[2] Leiden Univ, Leiden Inst Adv Comp Sci, NL-2333 CA Leiden, Netherlands
Funding
National Natural Science Foundation of China
Keywords
evolutionary algorithm; differential evolution; adaptive operator selection; reinforcement learning; deep learning; DIFFERENTIAL EVOLUTION ALGORITHM; ADAPTATION; PARAMETERS; ENSEMBLE; STRATEGY
DOI
10.1007/s11425-023-2252-9
Chinese Library Classification (CLC)
O29 [Applied Mathematics]
Discipline classification code
070104
Abstract
Extensive studies on adaptively selecting recombination operators, namely adaptive operator selection (AOS), during the search process of an evolutionary algorithm (EA) have shown that AOS is promising for improving an EA's performance. A variety of heuristic mechanisms for AOS have been proposed in recent decades, usually comprising two main components: feature extraction and policy setting. Feature extraction refers to extracting relevant features from the information collected during the search process. Policy setting refers to defining a strategy (or policy) for selecting an operator from a pool of operators based on the extracted features. Both components are designed by hand in existing studies, which may not adapt efficiently to different optimization problems. In this paper, a generalized framework is proposed for learning the components of AOS for one of the main streams of EAs, namely differential evolution (DE). In the framework, feature extraction is parameterized as a deep neural network (DNN), while a Dirichlet distribution is taken as the policy. A reinforcement learning method, namely policy gradient, is used to train the DNN. As case studies, the proposed framework is applied to two DEs, the classic DE and a recently proposed DE, resulting in two new algorithms named PG-DE and PG-MPEDE, respectively. Experiments on the IEEE Congress on Evolutionary Computation (CEC) 2018 test suite show that the proposed new algorithms perform significantly better than their counterparts. Finally, we prove theoretically that the considered classic methods are special cases of the proposed framework.
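The framework described in the abstract lends itself to a compact illustration. The following is a minimal, hypothetical PyTorch sketch of the idea, not the authors' PG-DE/PG-MPEDE code: a small network plays the role of the feature-extraction DNN and outputs the concentration parameters of a Dirichlet policy over the operator pool, and a REINFORCE-style policy-gradient step updates it from a reward such as the fitness improvement of a generation. All names, network sizes, and the reward definition are assumptions made for illustration.

```python
# Minimal sketch (not the authors' implementation) of learned operator
# selection: a DNN maps search-state features to the concentration parameters
# of a Dirichlet policy over a pool of DE recombination operators, trained
# with a REINFORCE-style policy gradient. All names and sizes are illustrative.
import torch
import torch.nn as nn
from torch.distributions import Categorical, Dirichlet


class OperatorPolicy(nn.Module):
    """Maps a feature vector describing the current search state to Dirichlet
    concentration parameters, one per candidate recombination operator."""

    def __init__(self, feature_dim: int, num_operators: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_operators),
            nn.Softplus(),  # concentration parameters must be positive
        )

    def forward(self, features: torch.Tensor) -> Dirichlet:
        # A small offset keeps the concentrations away from zero.
        return Dirichlet(self.net(features) + 1e-3)


def select_operators(policy: OperatorPolicy, features: torch.Tensor, pop_size: int):
    """Sample operator-selection probabilities from the Dirichlet policy, then
    assign one operator index to each individual of the DE population."""
    dist = policy(features)
    probs = dist.sample()                      # a point on the probability simplex
    operator_ids = Categorical(probs).sample((pop_size,))
    return operator_ids, dist.log_prob(probs)


def policy_gradient_step(optimizer, log_prob: torch.Tensor, reward: float):
    """One REINFORCE update: raise the log-probability of the sampled selection
    probabilities in proportion to the reward (e.g., fitness improvement)."""
    loss = -log_prob * reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()


# Toy usage with made-up sizes: 10 state features, 4 operators, population of 50.
policy = OperatorPolicy(feature_dim=10, num_operators=4)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
features = torch.randn(10)
operator_ids, log_prob = select_operators(policy, features, pop_size=50)
policy_gradient_step(optimizer, log_prob, reward=0.2)
```

A Dirichlet policy is convenient in this setting because each of its samples already lies on the probability simplex, so a single draw can directly serve as the operator-selection probabilities for the whole population.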
Pages: 1457-1480
Number of pages: 24