4.6 Article

Application of reinforcement learning in the LHC tune feedback

Journal

FRONTIERS IN PHYSICS
Volume 10

Publisher

FRONTIERS MEDIA SA
DOI: 10.3389/fphy.2022.929064

Keywords

LHC; beam-based controller; tune feedback; reinforcement learning; CERN

Funding

  1. Malta Council for Science and Technology
  2. Foundation for Science and Technology


This study explores a beam-based control problem in the CERN Large Hadron Collider and applies reinforcement learning to a key feedback system. Results from a simulation environment show that reinforcement-learning agents can surpass the classical approach.
The Beam-Based Feedback System (BBFS) was primarily responsible for correcting the beam energy, orbit and tune in the CERN Large Hadron Collider (LHC). A major code renovation of the BBFS was planned and carried out during the LHC Long Shutdown 2 (LS2). This work consists of an explorative study to solve a beam-based control problem, the tune feedback (QFB), utilising state-of-the-art Reinforcement Learning (RL). A simulation environment was created to mimic the operation of the QFB. A series of RL agents were trained, and the best-performing agents were then subjected to a set of well-designed tests. The original feedback controller used in the QFB was reimplemented to compare the performance of the classical approach to the performance of selected RL agents in the test scenarios. Results from the simulated environment show that the RL agent performance can exceed the controller-based paradigm.
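The abstract contrasts trained RL agents with a reimplementation of the original feedback controller inside a simulated QFB environment. As a rough illustration of that comparison setup, the sketch below pairs a toy tune-drift simulation with a PI-style controller baseline. The class and function names (`TuneFeedbackSim`, `pi_controller`), the dynamics, and the gains are illustrative assumptions, not the paper's actual environment or code.

```python
import numpy as np

class TuneFeedbackSim:
    """Toy stand-in for a tune-feedback simulation.

    State: the horizontal/vertical tunes, which drift each step.
    Action: a corrector trim added to the tunes.
    (Purely illustrative dynamics, not the paper's environment.)
    """
    def __init__(self, target=np.array([0.31, 0.32]), seed=0):
        self.target = target
        self.rng = np.random.default_rng(seed)
        # Start with a small random tune error in each plane.
        self.tune = target + self.rng.uniform(-0.01, 0.01, size=2)

    def step(self, trim):
        drift = self.rng.normal(0.0, 1e-4, size=2)  # slow stochastic drift
        self.tune = self.tune + trim + drift
        return self.target - self.tune              # observed tune error

def pi_controller(errors, kp=0.5, ki=0.1):
    """Classical PI step: proportional on the latest error,
    integral over the error history (gains chosen for illustration)."""
    errors = np.asarray(errors)
    return kp * errors[-1] + ki * errors.sum(axis=0)

# Run the PI baseline in the toy environment, as a classical reference
# against which an RL agent's policy could be evaluated.
env = TuneFeedbackSim()
errors = [env.target - env.tune]
for _ in range(50):
    trim = pi_controller(errors)
    errors.append(env.step(trim))

print("initial max |error|:", np.abs(errors[0]).max())
print("final   max |error|:", np.abs(errors[-1]).max())
```

An RL comparison would replace `pi_controller` with a learned policy mapping the error history to a trim, trained against many episodes of the simulation; the paper's actual tests use a far more faithful model of the LHC tune dynamics.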
