Multi-Fidelity Bayesian Optimization With Across-Task Transferable Max-Value Entropy Search

Citations: 0
Authors
Zhang, Yunchuan [1 ]
Park, Sangwoo [1 ]
Simeone, Osvaldo [1 ]
Affiliations
[1] King's College London, Centre for Intelligent Information Processing Systems (CIIPS), Department of Engineering, King's Communications, Learning & Information Processing (KCLIP) Lab, London WC2R 2LS, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
Optimization; Costs; Closed box; Linear programming; Bayes methods; Entropy; Resource management; Multitasking; Information theory; Vectors; Bayesian optimization; multi-fidelity simulation; entropy search; knowledge transfer; INFORMATION; BOUNDS; OUTPUT;
DOI
10.1109/TSP.2025.3528252
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
In many applications, ranging from logistics to engineering, a designer is faced with a sequence of optimization tasks whose objectives are black-box functions that are costly to evaluate. Furthermore, higher-fidelity evaluations of the optimization objectives typically entail a larger cost. Existing multi-fidelity black-box optimization strategies select candidate solutions and fidelity levels with the goal of maximizing the information about the optimal value or the optimal solution for the current task. Assuming that successive optimization tasks are related, this paper introduces a novel information-theoretic acquisition function that balances the need to acquire information about the current task against the goal of collecting information transferable to future tasks. The proposed method transfers distributions over the parameters of a Gaussian process surrogate model across tasks by implementing particle-based variational Bayesian updates. Theoretical insights based on an analysis of the expected regret substantiate the benefits of acquiring transferable knowledge across tasks. Furthermore, experimental results on synthetic and real-world examples reveal that the proposed acquisition strategy, which caters to future tasks, can significantly improve optimization efficiency once a sufficient number of tasks has been processed.
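The acquisition function described in the abstract builds on max-value entropy search (MES), which scores a candidate by the mutual information between its observation and the objective's maximum value. As a rough, self-contained illustration only, the sketch below implements plain single-fidelity MES with a toy RBF Gaussian process, not the paper's multi-fidelity, across-task transferable variant; the objective `f`, the length scale, and the grid are illustrative assumptions.

```python
import numpy as np
from scipy.special import erf

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel on 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Standard GP regression posterior mean and covariance on test grid Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    cov = rbf(Xs, Xs) - v.T @ v
    return mu, cov

def norm_pdf(z):
    return np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / np.sqrt(2.0)))

def mes(mu, var, fstars):
    # MES acquisition (Wang & Jegelka, 2017):
    # alpha(x) = E_{f*}[ g*phi(g)/(2*Phi(g)) - log Phi(g) ],  g = (f* - mu)/sigma.
    s = np.sqrt(np.maximum(var, 1e-12))
    g = (fstars[:, None] - mu[None, :]) / s[None, :]
    cdf = np.clip(norm_cdf(g), 1e-12, 1.0)
    return np.mean(g * norm_pdf(g) / (2.0 * cdf) - np.log(cdf), axis=0)

rng = np.random.default_rng(0)
f = lambda x: np.sin(3.0 * x)            # hypothetical black-box objective
X = np.array([0.1, 0.5, 0.9])            # observed inputs
y = f(X)
Xs = np.linspace(0.0, 1.0, 200)          # candidate grid
mu, cov = gp_posterior(X, y, Xs)

# Approximate the distribution of the maximum f* by the maxima of
# joint posterior samples drawn on the grid.
samples = rng.multivariate_normal(mu, cov + 1e-8 * np.eye(len(Xs)), size=16)
fstars = samples.max(axis=1)

acq = mes(mu, np.diag(cov), fstars)
x_next = Xs[np.argmax(acq)]              # next point to evaluate
```

The paper's method replaces this single-task score with one that also values information transferable to future tasks, and additionally selects a fidelity level per query; those ingredients are beyond this sketch.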
Pages: 418-432
Page count: 15