Article

Meta-learning digitized-counterdiabatic quantum optimization

Journal

Quantum Science and Technology
Volume 8, Issue 4

Publisher

IOP Publishing Ltd
DOI: 10.1088/2058-9565/ace54a

Keywords

meta-learning; parameter concentration; shortcuts to adiabaticity (STA); counterdiabatic driving; QAOA; digitized-counterdiabatic quantum computing

Abstract

The use of variational quantum algorithms for optimization tasks has emerged as a crucial application for current noisy intermediate-scale quantum computers. However, these algorithms face significant difficulties in finding a suitable ansatz and appropriate initial parameters. In this paper, we employ meta-learning using recurrent neural networks to address these issues for the recently proposed digitized-counterdiabatic quantum approximate optimization algorithm (QAOA). By combining meta-learning and counterdiabaticity, we find suitable variational parameters and reduce the number of optimization iterations required. We demonstrate the effectiveness of our approach by applying it to the MaxCut problem and the Sherrington-Kirkpatrick model. Our method offers a short-depth circuit ansatz with optimal initial parameters, thus improving the performance of the state-of-the-art QAOA.
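To make the setting concrete: the QAOA that the paper builds on alternates a cost-phase layer with a mixer layer, and the quality of the result hinges on the variational angles that a meta-learner would propose. The sketch below is not the authors' method or code; it is a minimal, hypothetical p = 1 QAOA statevector simulation for MaxCut on a 3-node triangle graph, with a brute-force angle scan standing in for the meta-learned initializer. All names and the toy instance are illustrative assumptions.

```python
import math
import cmath

# Hypothetical toy instance: MaxCut on a 3-node triangle graph (not from the paper).
edges = [(0, 1), (1, 2), (0, 2)]
n_qubits = 3

def cut_value(z):
    """Number of edges cut by the bitstring encoded in integer z."""
    return sum(1 for u, v in edges if ((z >> u) & 1) != ((z >> v) & 1))

def qaoa_expectation(gamma, beta):
    """<C> after one QAOA layer (cost phase + RX mixer), simulated exactly."""
    dim = 1 << n_qubits
    amp = [1 / math.sqrt(dim)] * dim  # |+...+> initial state
    # Cost layer: phase e^{-i*gamma*C(z)} on each computational basis state.
    amp = [a * cmath.exp(-1j * gamma * cut_value(z)) for z, a in enumerate(amp)]
    # Mixer layer: RX(2*beta) applied to every qubit.
    c, s = math.cos(beta), math.sin(beta)
    for q in range(n_qubits):
        new = amp[:]
        for z in range(dim):
            if not (z >> q) & 1:
                z1 = z | (1 << q)
                a0, a1 = amp[z], amp[z1]
                new[z] = c * a0 - 1j * s * a1
                new[z1] = c * a1 - 1j * s * a0
        amp = new
    return sum(abs(a) ** 2 * cut_value(z) for z, a in enumerate(amp))

# Brute-force angle scan standing in for the meta-learned parameter initializer.
grid = [i * math.pi / 20 for i in range(20)]
best = max((qaoa_expectation(g, b), g, b) for g in grid for b in grid)
print(f"best <C> = {best[0]:.3f} at gamma = {best[1]:.2f}, beta = {best[2]:.2f}")
```

At gamma = beta = 0 the state stays uniform and <C> equals the average cut (1.5 for the triangle); good angles push it toward the maximum cut of 2. A meta-learner, as described in the abstract, replaces this exhaustive scan with a network that proposes near-optimal initial angles directly, shrinking the number of optimizer iterations.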
