Article

Self-correcting quantum many-body control using reinforcement learning with tensor networks

Journal

Nature Machine Intelligence
Volume 5, Issue 7, Pages 780-791

Publisher

Nature Portfolio
DOI: 10.1038/s42256-023-00687-5

Abstract

Quantum many-body control is a central milestone en route to harnessing quantum technologies. However, the exponential growth of the Hilbert space dimension with the number of qubits makes it challenging to classically simulate quantum many-body systems and, consequently, to devise reliable and robust optimal control protocols. Here we present a framework for efficiently controlling quantum many-body systems based on reinforcement learning (RL). We tackle the quantum-control problem by leveraging matrix product states (1) for representing the many-body state and (2) as part of the trainable machine learning architecture for our RL agent. The framework is applied to prepare ground states of the quantum Ising chain, including states in the critical region. It allows us to control systems far larger than neural-network-only architectures permit, while retaining the advantages of deep learning algorithms, such as generalizability and trainable robustness to noise. In particular, we demonstrate that RL agents are capable of finding universal controls, of learning how to optimally steer previously unseen many-body states, and of adapting control protocols on the fly when the quantum dynamics is subject to stochastic perturbations. Furthermore, we map our RL framework to a hybrid quantum-classical algorithm that can be performed on noisy intermediate-scale quantum devices, and we test it in the presence of experimentally relevant sources of noise.

Optimal control of quantum many-body systems is needed to make use of quantum technologies, but it is challenging due to the exponentially large dimension of the Hilbert space as a function of the number of qubits. Metz and Bukov propose a framework combining matrix product states and reinforcement learning that allows control of a larger number of interacting quantum particles than is achievable with standard neural-network-based methods.
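For readers who want a concrete picture of how a matrix product state (MPS) can serve both as the representation of the many-body state and as the trainable part of an RL agent, the sketch below shows one minimal, hypothetical realization: the state is stored as a list of MPS tensors, each discrete control action is assigned a trainable "readout" MPS, and the policy is a softmax over the overlaps between the state and these readouts. This is not the authors' implementation; the function names (random_mps, mps_overlap, mps_policy) and the overlap-based policy head are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): an MPS-parametrized policy head.
# The many-body state |psi> is stored as a matrix product state, each discrete
# control action a gets a trainable "readout" MPS W_a, and the policy is a
# softmax over the overlaps <W_a|psi> obtained by tensor contraction.
import numpy as np


def random_mps(n_sites, phys_dim=2, bond_dim=4, rng=None):
    """Return a list of MPS tensors with shape (left_bond, phys_dim, right_bond)."""
    rng = np.random.default_rng(rng)
    tensors = []
    for i in range(n_sites):
        dl = 1 if i == 0 else bond_dim
        dr = 1 if i == n_sites - 1 else bond_dim
        tensors.append(rng.normal(size=(dl, phys_dim, dr)) / np.sqrt(bond_dim))
    return tensors


def mps_overlap(bra, ket):
    """Contract <bra|ket> site by site; both are lists of (Dl, d, Dr) tensors."""
    env = np.ones((1, 1))  # left environment, grown one site at a time
    for b_tensor, k_tensor in zip(bra, ket):
        # env[a, b] * conj(bra)[a, s, c] * ket[b, s, d] -> new env[c, d]
        env = np.einsum("ab,asc,bsd->cd", env, b_tensor.conj(), k_tensor)
    return env[0, 0]


def mps_policy(state_mps, readout_mps_list):
    """Softmax policy over actions, with one trainable readout MPS per action."""
    logits = np.array([mps_overlap(w, state_mps) for w in readout_mps_list])
    logits -= logits.max()  # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()


if __name__ == "__main__":
    n_sites, n_actions = 8, 4  # e.g. four bang-bang values of a control field
    psi = random_mps(n_sites, bond_dim=8, rng=0)  # current many-body state
    readouts = [random_mps(n_sites, bond_dim=4, rng=a + 1) for a in range(n_actions)]
    print("policy over actions:", mps_policy(psi, readouts))
```

The paper's actual agent, training objective and control protocol differ in detail; the point of the sketch is only that contracting an MPS-encoded state against trainable tensor networks yields a low-dimensional, differentiable output whose cost scales with the bond dimension and the chain length rather than with the full Hilbert space dimension.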

Authors

Friederike Metz, Marin Bukov
