Reinforcement learning-trained optimisers and Bayesian optimisation for online particle accelerator tuning

Cited by: 7
Authors
Kaiser, Jan [1 ]
Xu, Chenran [2 ]
Eichler, Annika [1 ,3 ]
Garcia, Andrea Santamaria [2 ]
Stein, Oliver [1 ]
Bruendermann, Erik [2 ]
Kuropka, Willi [1 ]
Dinter, Hannes [1 ]
Mayet, Frank [1 ]
Vinatier, Thomas [1 ]
Burkart, Florian [1 ]
Schlarb, Holger [1 ]
Affiliations
[1] Deutsch Elektronen Synchrotron DESY, Hamburg, Germany
[2] Karlsruhe Inst Technol KIT, Karlsruhe, Germany
[3] Hamburg Univ Technol, D-21073 Hamburg, Germany
Source
SCIENTIFIC REPORTS, 2024, Vol. 14, No. 1
Keywords
NEURAL-NETWORKS; DEEP;
DOI
10.1038/s41598-024-66263-y
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
Online tuning of particle accelerators is a complex optimisation problem that continues to require manual intervention by experienced human operators. Autonomous tuning is a rapidly expanding field of research, where learning-based methods like Bayesian optimisation (BO) hold great promise in improving plant performance and reducing tuning times. At the same time, reinforcement learning (RL) is a capable method of learning intelligent controllers, and recent work shows that RL can also be used to train domain-specialised optimisers in so-called reinforcement learning-trained optimisation (RLO). In parallel efforts, both algorithms have found successful adoption in particle accelerator tuning. Here we present a comparative case study, assessing the performance of both algorithms while providing a nuanced analysis of the merits and the practical challenges involved in deploying them to real-world facilities. Our results will help practitioners choose a suitable learning-based tuning algorithm for their tuning tasks, accelerating the adoption of autonomous tuning algorithms, ultimately improving the availability of particle accelerators and pushing their operational limits.
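To make the Bayesian optimisation approach mentioned in the abstract concrete, the following minimal, NumPy-only sketch runs a generic BO tuning loop against a toy objective. Everything here is illustrative: measure_beam is a hypothetical stand-in for a real beam-quality measurement, the [-1, 1] setting ranges, kernel hyperparameters, and upper-confidence-bound acquisition are arbitrary choices, and none of it reflects the authors' actual implementation or machine interface.

import numpy as np

rng = np.random.default_rng(0)

def measure_beam(x):
    # Hypothetical stand-in for a noisy beam-quality measurement (to be maximised);
    # NOT the objective or machine interface used in the paper.
    return -np.sum((x - 0.3) ** 2) + 0.01 * rng.normal()

def rbf_kernel(A, B, length=0.2, var=1.0):
    # Squared-exponential covariance between two sets of actuator settings.
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return var * np.exp(-0.5 * d2 / length**2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # Exact Gaussian-process posterior mean and variance at candidate points Xs.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf_kernel(X, Xs)
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf_kernel(Xs, Xs)) - np.sum(v**2, axis=0)
    return mu, np.maximum(var, 1e-12)

dim = 2                               # e.g. two quadrupole strengths (illustrative)
X = rng.uniform(-1.0, 1.0, (3, dim))  # a few initial random settings
y = np.array([measure_beam(x) for x in X])

for step in range(20):
    cand = rng.uniform(-1.0, 1.0, (500, dim))   # random candidate settings
    mu, var = gp_posterior(X, y, cand)
    ucb = mu + 2.0 * np.sqrt(var)               # upper-confidence-bound acquisition
    x_next = cand[np.argmax(ucb)]               # most promising setting to try next
    X = np.vstack([X, x_next])
    y = np.append(y, measure_beam(x_next))      # measure the machine, update the data

print("best setting:", X[np.argmax(y)], "objective:", float(y.max()))

The specific kernel, acquisition function, and candidate sampling would be chosen per task; the sketch only shows the loop structure common to BO tuning: propose settings, measure the machine, update the surrogate model.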
Pages: 15