Article

Application of reinforcement learning in the LHC tune feedback

Journal

FRONTIERS IN PHYSICS
Volume 10

Publisher

FRONTIERS MEDIA SA
DOI: 10.3389/fphy.2022.929064

Keywords

LHC; beam-based controller; tune feedback; reinforcement learning; CERN

Funding

  1. Malta Council for Science and Technology
  2. Foundation for Science and Technology


This study addresses a beam-based control problem in the CERN Large Hadron Collider, the tune feedback, using reinforcement learning. Results from a simulation environment show that the performance of the trained RL agents can surpass that of the classical controller-based approach.
The Beam-Based Feedback System (BBFS) was primarily responsible for correcting the beam energy, orbit, and tune in the CERN Large Hadron Collider (LHC). A major code renovation of the BBFS was planned and carried out during the LHC Long Shutdown 2 (LS2). This work is an exploratory study to solve a beam-based control problem, the tune feedback (QFB), utilising state-of-the-art Reinforcement Learning (RL). A simulation environment was created to mimic the operation of the QFB. A series of RL agents were trained, and the best-performing agents were then subjected to a set of well-designed tests. The original feedback controller used in the QFB was reimplemented to compare the performance of the classical approach against that of the selected RL agents in the test scenarios. Results from the simulated environment show that RL agent performance can exceed the controller-based paradigm.
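To make the setup concrete, the sketch below shows a deliberately simplified stand-in for the control loop the abstract describes: an environment whose state is the tune error and a classical proportional feedback baseline acting on it. All names (`ToyTuneEnv`, `proportional_controller`), the dynamics, and the gain are illustrative assumptions; the real QFB and the paper's simulation environment are far richer.

```python
import random

class ToyTuneEnv:
    """Toy stand-in for a tune-feedback loop (illustrative only).

    The state is the tune error (measured tune minus reference); an
    action nudges the corrector setting, shifting the tune by that
    amount, while a small random drift models slow perturbations.
    """

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.error = 0.0

    def reset(self):
        # Start from a random initial tune error of up to +/-0.01.
        self.error = self.rng.uniform(-0.01, 0.01)
        return self.error

    def step(self, action):
        # Apply the correction, then add a small stochastic drift.
        drift = self.rng.gauss(0.0, 1e-4)
        self.error += action + drift
        # An RL agent would be rewarded for keeping the error small.
        reward = -abs(self.error)
        return self.error, reward

def proportional_controller(error, gain=0.5):
    """Classical feedback baseline: cancel a fraction of the error."""
    return -gain * error

# Run the closed loop with the classical baseline.
env = ToyTuneEnv(seed=1)
error = env.reset()
for _ in range(50):
    error, _ = env.step(proportional_controller(error))
```

An RL agent would replace `proportional_controller` with a learned policy mapping the observed error to a correction, trained on the reward signal; the paper's comparison amounts to running both policies in the same simulated environment and scoring how well each keeps the tune error near zero.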

