Meta Learning for Hyperparameter Optimization in Dialogue System

Cited by: 22
Authors
Chien, Jen-Tzung [1 ]
Lieow, Wei Xiang [1 ]
Affiliation
[1] Natl Chiao Tung Univ, Dept Elect & Comp Engn, Hsinchu, Taiwan
Source
Proceedings of Interspeech 2019
Keywords
dialogue system; meta learning; Bayesian optimization; recurrent neural network
DOI
10.21437/Interspeech.2019-1383
Chinese Library Classification (CLC)
R36 [Pathology]; R76 [Otorhinolaryngology];
Discipline Classification Code
100104; 100213;
Abstract
The performance of a dialogue system based on deep reinforcement learning (DRL) depends strongly on the hyperparameters chosen for the DRL algorithm. Traditionally, a Gaussian process (GP) provides a probabilistic approach to Bayesian optimization for sequential search, which is beneficial for selecting optimal hyperparameters. However, the computation required by a GP grows rapidly as the dimension of the hyperparameter space and the number of search points increase. This paper presents a meta learning approach to multifidelity Bayesian optimization in which a two-level recurrent neural network (RNN) is developed for sequential learning and optimization. The search space is first explored by the first-level RNN with cheap, low-fidelity evaluations over a global region of hyperparameters. The optimization is then exploited by the second-level RNN with high-fidelity evaluations on successively smaller regions. Experiments on hyperparameter optimization for a dialogue system based on the deep Q-network show the effectiveness and efficiency of the proposed multifidelity Bayesian optimization.
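
To make the coarse-to-fine idea in the abstract concrete, below is a minimal Python sketch of a two-fidelity hyperparameter search for a DQN-based dialogue policy. It is purely illustrative: the function train_dqn_dialogue, the hyperparameter ranges, and the fidelity levels (500 vs. 5000 training dialogues) are hypothetical placeholders, and a simple random coarse-then-local search stands in for both the paper's two-level RNN controller and a GP-based Bayesian optimizer.

import random

def train_dqn_dialogue(learning_rate, discount, n_dialogues):
    # Hypothetical stand-in for training the DQN dialogue policy for
    # n_dialogues episodes and returning a validation success rate.
    # Replace with a real training-and-evaluation run.
    return 1.0 - 100.0 * abs(learning_rate - 1e-3) - abs(discount - 0.95)

def coarse_to_fine_search(n_coarse=20, n_fine_per_point=3, top_k=3):
    # Level 1: cheap, low-fidelity exploration over a wide hyperparameter region.
    coarse = []
    for _ in range(n_coarse):
        lr = 10 ** random.uniform(-5, -2)        # learning rate, log-uniform
        gamma = random.uniform(0.80, 0.99)       # discount factor
        score = train_dqn_dialogue(lr, gamma, n_dialogues=500)    # low fidelity
        coarse.append((score, lr, gamma))
    coarse.sort(reverse=True)                    # best low-fidelity scores first

    # Level 2: expensive, high-fidelity exploitation around the best coarse points.
    best = None
    for score, lr, gamma in coarse[:top_k]:
        for _ in range(n_fine_per_point):
            lr2 = lr * 10 ** random.uniform(-0.3, 0.3)                 # local perturbation
            gamma2 = min(0.999, max(0.5, gamma + random.uniform(-0.02, 0.02)))
            s = train_dqn_dialogue(lr2, gamma2, n_dialogues=5000)      # high fidelity
            if best is None or s > best[0]:
                best = (s, lr2, gamma2)
    return best   # (score, learning_rate, discount) of the best high-fidelity trial

if __name__ == "__main__":
    print(coarse_to_fine_search())

The sketch mirrors the two-level structure described in the abstract: cheap low-fidelity runs narrow a global region, and only the surviving candidates receive expensive high-fidelity refinement.
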
Pages: 839-843
Number of pages: 5