Reinforcement learning-trained optimisers and Bayesian optimisation for online particle accelerator tuning

Cited by: 7
Authors
Kaiser, Jan [1 ]
Xu, Chenran [2 ]
Eichler, Annika [1 ,3 ]
Garcia, Andrea Santamaria [2 ]
Stein, Oliver [1 ]
Bruendermann, Erik [2 ]
Kuropka, Willi [1 ]
Dinter, Hannes [1 ]
Mayet, Frank [1 ]
Vinatier, Thomas [1 ]
Burkart, Florian [1 ]
Schlarb, Holger [1 ]
Affiliations
[1] Deutsch Elektronen Synchrotron DESY, Hamburg, Germany
[2] Karlsruhe Inst Technol KIT, Karlsruhe, Germany
[3] Hamburg Univ Technol, D-21073 Hamburg, Germany
Source
SCIENTIFIC REPORTS | 2024, Vol. 14, Iss. 1
Keywords
NEURAL-NETWORKS; DEEP
DOI
10.1038/s41598-024-66263-y
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
Online tuning of particle accelerators is a complex optimisation problem that continues to require manual intervention by experienced human operators. Autonomous tuning is a rapidly expanding field of research, where learning-based methods like Bayesian optimisation (BO) hold great promise in improving plant performance and reducing tuning times. At the same time, reinforcement learning (RL) is a capable method of learning intelligent controllers, and recent work shows that RL can also be used to train domain-specialised optimisers in so-called reinforcement learning-trained optimisation (RLO). In parallel efforts, both algorithms have found successful adoption in particle accelerator tuning. Here we present a comparative case study, assessing the performance of both algorithms while providing a nuanced analysis of the merits and the practical challenges involved in deploying them to real-world facilities. Our results will help practitioners choose a suitable learning-based tuning algorithm for their tuning tasks, accelerating the adoption of autonomous tuning algorithms and ultimately improving the availability of particle accelerators while pushing their operational limits.
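
To make the BO side of the comparison concrete, below is a minimal sketch of a Bayesian-optimisation tuning loop. It is an illustration only, not the authors' implementation: it assumes a single tuning knob (a quadrupole strength k), a stand-in measurement function, and a toy Gaussian-process surrogate with an RBF kernel and an expected-improvement acquisition. All names (quad_response, gp_posterior, expected_improvement) are hypothetical.

import numpy as np
from scipy.stats import norm

# Hypothetical stand-in for a machine measurement, e.g. the negated beam
# size observed at quadrupole strength k (the loop below maximises it).
def quad_response(k):
    return -((k - 0.3) ** 2) + 0.01 * np.random.randn()

# Squared-exponential (RBF) kernel between two 1-D arrays of settings.
def rbf(a, b, length=0.2):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

# Gaussian-process posterior mean and standard deviation at x_test,
# conditioned on noisy observations (x_train, y_train).
def gp_posterior(x_train, y_train, x_test, noise=1e-4):
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    mu = Ks.T @ np.linalg.solve(K, y_train)
    var = np.clip(1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0), 1e-12, None)
    return mu, np.sqrt(var)

# Expected improvement (for maximisation) over the best observation so far.
def expected_improvement(mu, sigma, best):
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
candidates = np.linspace(-1.0, 1.0, 201)      # allowed knob settings
x = rng.uniform(-1.0, 1.0, size=3)            # a few random initial settings
y = np.array([quad_response(k) for k in x])

for _ in range(20):                           # measurement budget
    mu, sigma = gp_posterior(x, y, candidates)
    k_next = candidates[np.argmax(expected_improvement(mu, sigma, y.max()))]
    x = np.append(x, k_next)                  # measure the most promising setting
    y = np.append(y, quad_response(k_next))

print(f"best setting: {x[np.argmax(y)]:.3f}, best objective: {y.max():.4f}")

Real accelerator deployments tune several actuators at once and add noise models and safety constraints; this toy loop only conveys the surrogate-plus-acquisition pattern that the abstract refers to.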
Pages: 15