Efficient hyperparameter optimization through model-based reinforcement learning

Cited by: 44
Authors
Wu, Jia [1]
Chen, SenPeng [1]
Liu, XiYuan [1]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Software Engn, Chengdu, Peoples R China
Keywords
Hyperparameter optimization; Machine learning; Reinforcement learning
DOI
10.1016/j.neucom.2020.06.064
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Hyperparameter tuning is critical to the performance of machine learning algorithms. However, a noticeable limitation is the high computational cost of evaluating an algorithm on complex models or large datasets, which makes the tuning process highly inefficient. In this paper, we propose a novel model-based method for efficient hyperparameter optimization. We first frame the optimization process as a reinforcement learning problem and employ an agent to tune hyperparameters sequentially. In addition, a model that learns to evaluate the algorithm is used to speed up training. However, model inaccuracy is exacerbated by long-term use and can cause the policy's performance to collapse. We therefore propose a novel method that controls model use by measuring the model's impact on the policy and limiting it to a proper range, so that the horizon of model use can be adjusted dynamically. We apply the proposed method to tune the hyperparameters of extreme gradient boosting and convolutional neural networks on 101 tasks. The experimental results verify that, compared with other state-of-the-art methods, the proposed method achieves the highest accuracy on 86.1% of the tasks, and its average runtime ranking is significantly lower than that of all other methods thanks to the predictive model. (C) 2020 Elsevier B.V. All rights reserved.
Pages: 381-393
Page count: 13
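As a rough illustration of the idea described in the abstract (not the authors' implementation), the sketch below tunes a single hyperparameter with an epsilon-greedy agent, uses a random-forest surrogate as the learned evaluation model, and extends the horizon of consecutive model-based evaluations only while the surrogate's measured error stays within a tolerance. All identifiers (real_eval, MAX_MODEL_ERR, the logistic-regression target on the digits dataset, the budget of 5, etc.) are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

def real_eval(log_c):
    """Expensive ground-truth evaluation: train the target model, return validation accuracy."""
    clf = LogisticRegression(C=10.0 ** log_c, max_iter=500)
    clf.fit(X_tr, y_tr)
    return clf.score(X_va, y_va)

surrogate = RandomForestRegressor(n_estimators=50, random_state=0)  # learned evaluation model
hist_h, hist_r = [], []         # observed (hyperparameter, accuracy) pairs
MAX_MODEL_ERR = 0.02            # tolerance bounding how long the surrogate is trusted
model_budget = 0                # remaining consecutive model-based (cheap) evaluations
center_h = None                 # proposal centre; may be moved by model-predicted results
best_h, best_r = None, -np.inf  # incumbent, updated from real evaluations only

for step in range(30):
    # Agent step: epsilon-greedy proposal of log10(C) around the current centre.
    if center_h is None or rng.random() < 0.3:
        h = rng.uniform(-4.0, 4.0)
    else:
        h = float(np.clip(center_h + rng.normal(scale=0.5), -4.0, 4.0))

    if model_budget > 0:
        # Cheap evaluation through the learned model; it only steers the search.
        r = float(surrogate.predict([[h]])[0])
        model_budget -= 1
        if r > best_r:
            center_h = h
    else:
        # Real evaluation; measure the surrogate's error *before* refitting it, and
        # extend the model-use horizon only while that error stays within tolerance.
        r = real_eval(h)
        if len(hist_h) >= 5:
            err = abs(float(surrogate.predict([[h]])[0]) - r)
            model_budget = 5 if err < MAX_MODEL_ERR else 0
        hist_h.append([h])
        hist_r.append(r)
        surrogate.fit(hist_h, hist_r)
        if r > best_r:
            best_h, best_r, center_h = h, r, h

print(f"best log10(C) ~ {best_h:.2f}, validation accuracy ~ {best_r:.3f}")
```

The error check before each refit is only a crude stand-in for the paper's measure of the model's impact on the policy; a faithful implementation would train an RL policy on model-generated rollouts rather than just biasing an epsilon-greedy proposal.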
Related papers (50 records)
  • [1] On the Importance of Hyperparameter Optimization for Model-based Reinforcement Learning
    Zhang, Baohe
    Rajan, Raghu
    Pineda, Luis
    Lambert, Nathan
    Biedenkapp, Andre
    Chua, Kurtland
    Hutter, Frank
    Calandra, Roberto
    [J]. 24TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS (AISTATS), 2021, 130
  • [2] Deep Reinforcement Learning with Model-based Acceleration for Hyperparameter Optimization
    Chen, SenPeng
    Wu, Jia
    Chen, XiuYun
    [J]. 2019 IEEE 31ST INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2019), 2019, : 170 - 177
  • [3] Efficient hyperparameters optimization through model-based reinforcement learning with experience exploiting and meta-learning
    Liu, Xiyuan
    Wu, Jia
    Chen, Senpeng
    [J]. SOFT COMPUTING, 2023, 27 (13) : 8661 - 8678
  • [4] Efficient state synchronisation in model-based testing through reinforcement learning
    Turker, Uraz Cengiz
    Hierons, Robert M.
    Mousavi, Mohammad Reza
    Tyukin, Ivan Y.
    [J]. 2021 36TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING ASE 2021, 2021, : 368 - 380
  • [5] An Efficient Approach to Model-Based Hierarchical Reinforcement Learning
    Li, Zhuoru
    Narayan, Akshay
    Leong, Tze-Yun
    [J]. THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 3583 - 3589
  • [6] Efficient reinforcement learning: Model-based acrobot control
    Boone, G
    [J]. 1997 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION - PROCEEDINGS, VOLS 1-4, 1997, : 229 - 234
  • [7] A context-based meta-reinforcement learning approach to efficient hyperparameter optimization
    Liu, Xiyuan
    Wu, Jia
    Chen, Senpeng
    [J]. NEUROCOMPUTING, 2022, 478 : 89 - 103
  • [8] Efficient Model-Based Concave Utility Reinforcement Learning through Greedy Mirror Descent
    Moreno, Bianca Marin
    Bregere, Margaux
    Gaillard, Pierre
    Oudjane, Nadia
    [J]. INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 238, 2024, 238
  • [9] Model-Based Reinforcement Learning for Quantized Federated Learning Performance Optimization
    Yang, Nuocheng
    Wang, Sihua
    Chen, Mingzhe
    Brinton, Christopher G.
    Yin, Changchuan
    Saad, Walid
    Cui, Shuguang
    [J]. 2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 5063 - 5068