Adaptive Structural Hyper-Parameter Configuration by Q-Learning

Cited: 0
Authors
Zhang, Haotian [1 ]
Sun, Jianyong [1 ]
Xu, Zongben [1 ]
Affiliations
[1] Xi An Jiao Tong Univ, Sch Math & Stat, Natl Engn Lab Big Data Analyt, Xian, Peoples R China
Funding
National Natural Science Foundation of China; US National Science Foundation
Keywords
Reinforcement learning; evolutionary algorithm; hyper-parameter tuning; Q-learning; EVOLUTION STRATEGY; ADAPTATION;
DOI
None available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Tuning hyper-parameters for evolutionary algorithms is an important issue in computational intelligence. The performance of an evolutionary algorithm depends not only on the design of its operation strategies, but also on its hyper-parameters. Hyper-parameters can be categorized along two dimensions: structural/numerical and time-invariant/time-variant. In existing studies, structural hyper-parameters are usually either tuned in advance when time-invariant, or adjusted with hand-crafted schedules when time-variant. In this paper, we make the first attempt to model the tuning of structural hyper-parameters as a reinforcement learning problem, and apply Q-learning to tune the structural hyper-parameter that controls computational resource allocation in the CEC 2018 winner algorithm. Experimental results show that the tuned algorithm performs favorably against the original winner algorithm on the CEC 2018 test functions.
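The tabular Q-learning update underlying the approach described in the abstract can be sketched as follows. This is a minimal, generic illustration of the Q-learning rule applied to choosing among a few discrete configuration options; the toy reward and transition dynamics are placeholders and are not the paper's actual environment or algorithm.

```python
import random

def q_learning_sketch(n_states=3, n_actions=4, episodes=500,
                      alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning on a toy MDP.

    States/actions stand in for algorithm configurations and
    hyper-parameter choices; the reward here is a hypothetical
    placeholder, not a real evolutionary-algorithm signal.
    """
    rng = random.Random(seed)
    # Q-table initialized to zero: Q[state][action]
    Q = [[0.0] * n_actions for _ in range(n_states)]
    state = 0
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            action = rng.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        # toy dynamics: reward 1 when the action "matches" the state
        reward = 1.0 if action == state % n_actions else 0.0
        next_state = rng.randrange(n_states)
        # Q-learning temporal-difference update:
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(Q[next_state])
        Q[state][action] += alpha * (reward + gamma * best_next
                                     - Q[state][action])
        state = next_state
    return Q
```

In the paper's setting, the state would encode the search status of the evolutionary algorithm, the action would be the structural hyper-parameter value (the resource-allocation choice), and the reward would reflect the resulting improvement in fitness; none of those specifics appear above.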
Pages: 8
Related Papers (50 total)
  • [21] Adaptive moving average Q-learning
    Tan, Tao
    Xie, Hong
    Xia, Yunni
    Shi, Xiaoyu
    Shang, Mingsheng
    KNOWLEDGE AND INFORMATION SYSTEMS, 2024, 66 (12) : 7389 - 7417
  • [22] Hyper-parameter Optimization for Latent Spaces
    Veloso, Bruno
    Caroprese, Luciano
    Konig, Matthias
    Teixeira, Sonia
    Manco, Giuseppe
    Hoos, Holger H.
    Gama, Joao
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2021: RESEARCH TRACK, PT III, 2021, 12977 : 249 - 264
  • [23] A new hyper-parameter optimization method for machine learning in fault classification
    Ye, Xingchen
    Gao, Liang
    Li, Xinyu
    Wen, Long
    APPLIED INTELLIGENCE, 2023, 53 (11) : 14182 - 14200
  • [24] Learning sparse linear dynamic networks in a hyper-parameter free setting
    Venkitaraman, Arun
    Hjalmarsson, Hakan
    Wahlberg, Bo
    IFAC PAPERSONLINE, 2020, 53 (02): : 82 - 86
  • [25] Continuous Hyper-parameter Configuration for Particle Swarm Optimization via Auto-tuning
    Rojas-Delgado, Jairo
    Milian Nunez, Vladimir
    Trujillo-Rasua, Rafael
    Bello, Rafael
    PROGRESS IN PATTERN RECOGNITION, IMAGE ANALYSIS, COMPUTER VISION, AND APPLICATIONS (CIARP 2019), 2019, 11896 : 458 - 468
  • [26] Hyper-Parameter Tuning for Graph Kernels via Multiple Kernel Learning
    Massimo, Carlo M.
    Navarin, Nicolo
    Sperduti, Alessandro
    NEURAL INFORMATION PROCESSING, ICONIP 2016, PT II, 2016, 9948 : 214 - 223
  • [27] TRANSITIONAL ANNEALED ADAPTIVE SLICE SAMPLING FOR GAUSSIAN PROCESS HYPER-PARAMETER ESTIMATION
    Garbuno-Inigo, A.
    DiazDelaO, F. A.
    Zuev, K. M.
    INTERNATIONAL JOURNAL FOR UNCERTAINTY QUANTIFICATION, 2016, 6 (04) : 341 - 359
  • [28] Derivative-Free Optimization with Adaptive Experience for Efficient Hyper-Parameter Tuning
    Hu, Yi-Qi
    Liu, Zelin
    Yang, Hua
    Yu, Yang
    Liu, Yunfeng
    ECAI 2020: 24TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, 325 : 1207 - 1214
  • [29] Parameter specification for fuzzy clustering by Q-learning
    Oh, CH
    Ikeda, E
    Honda, K
    Ichihashi, H
    IJCNN 2000: PROCEEDINGS OF THE IEEE-INNS-ENNS INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOL IV, 2000, : 9 - 12