On Provably Robust Meta-Bayesian Optimization

Cited: 0
Authors
Dai, Zhongxiang [1 ]
Chen, Yizhou [1 ]
Yu, Haibin [2 ]
Low, Bryan Kian Hsiang [1 ]
Jaillet, Patrick [3 ]
Affiliations
[1] Natl Univ Singapore, Dept Comp Sci, Singapore, Singapore
[2] Tencent, Dept Data Platform, Shenzhen, Peoples R China
[3] MIT, Dept Elect Engn & Comp Sci, Cambridge, MA 02139 USA
Funding
National Research Foundation, Singapore
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Bayesian optimization (BO) has become popular for sequential optimization of black-box functions. When BO is used to optimize a target function, we often have access to previous evaluations of potentially related functions. This raises the question of whether we can leverage these previous experiences to accelerate the current BO task through meta-learning (meta-BO), while ensuring robustness against potentially harmful dissimilar tasks that could sabotage the convergence of BO. This paper introduces two scalable and provably robust meta-BO algorithms: robust meta-Gaussian process-upper confidence bound (RM-GP-UCB) and RM-GP-Thompson sampling (RM-GP-TS). We prove that both algorithms are asymptotically no-regret even when some or all previous tasks are dissimilar to the current task, and show that RM-GP-UCB enjoys better theoretical robustness than RM-GP-TS. We also exploit the theoretical guarantees to optimize the weights assigned to individual previous tasks through regret minimization via online learning, which diminishes the impact of dissimilar tasks and hence further enhances the robustness. Empirical evaluations show that (a) RM-GP-UCB performs effectively and consistently across various applications, and (b) RM-GP-TS, despite being less robust than RM-GP-UCB both in theory and in practice, performs competitively in some scenarios with less dissimilar tasks and is more computationally efficient.
Pages: 475-485 (11 pages)
Related Papers (50 records in total; items [31]-[40] shown below)
  • [31] Meta-Learning Priors for Safe Bayesian Optimization
    Rothfuss, Jonas
    Koenig, Christopher
    Rupenyan, Alisa
    Krause, Andreas
    CONFERENCE ON ROBOT LEARNING, VOL 205, 2022: 237-265
  • [32] Bias-Robust Bayesian Optimization via Dueling Bandits
    Kirschner, Johannes
    Krause, Andreas
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021
  • [33] Bayesian Optimization for Distributionally Robust Chance-constrained Problem
    Inatsu, Yu
    Takeno, Shion
    Karasuyama, Masayuki
    Takeuchi, Ichiro
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [34] Fast Provably Robust Decision Trees and Boosting
    Guo, Jun-Qi
    Teng, Ming-Zhuo
    Gao, Wei
    Zhou, Zhi-Hua
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [35] Ergodicity Breaking Provably Robust to Arbitrary Perturbations
    Stephen, David T.
    Hart, Oliver
    Nandkishore, Rahul M.
    PHYSICAL REVIEW LETTERS, 2024, 132 (04)
  • [36] Provably Adversarially Robust Nearest Prototype Classifiers
    Voracek, Vaclav
    Hein, Matthias
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [37] Provably Efficient Exploration in Policy Optimization
    Cai, Qi
    Yang, Zhuoran
    Jin, Chi
    Wang, Zhaoran
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020
  • [38] Inference meta models: Towards robust information fusion with Bayesian networks
    Pavlin, Gregor
    Nunnink, Jan
    9TH INTERNATIONAL CONFERENCE ON INFORMATION FUSION, VOLS 1-4, 2006: 1810-1817
  • [39] OPTIMIZATION AMONG PROVABLY EQUIVALENT PROGRAMS
    YOUNG, P
    JOURNAL OF THE ACM, 1977, 24 (04): 693-700
  • [40] Provably Faster Algorithms for Bilevel Optimization
    Yang, Junjie
    Ji, Kaiyi
    Liang, Yingbin
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34