Article

Quantum variational algorithms are swamped with traps

Journal

NATURE COMMUNICATIONS
Volume 13, Issue 1

Publisher

NATURE PORTFOLIO
DOI: 10.1038/s41467-022-35364-5

Funding

  1. National Science Foundation Graduate Research Fellowship Program [4000063445]
  2. MIT Energy Initiative fellowship

Abstract

One of the most important properties of classical neural networks is how surprisingly trainable they are, even though their training algorithms typically rely on optimizing complicated, nonconvex loss functions. Previous results have shown that, unlike classical neural networks, variational quantum models are often not trainable. The most studied phenomenon is the onset of barren plateaus in the training landscape of these quantum models, typically when the models are very deep. This focus on barren plateaus has made the phenomenon almost synonymous with the trainability of quantum models. Here, we show that barren plateaus are only part of the story. We prove that a wide class of variational quantum models, which are shallow and exhibit no barren plateaus, have only a superpolynomially small fraction of local minima within any constant energy from the global minimum, rendering these models untrainable if no good initial guess of the optimal parameters is known. We also study the trainability of variational quantum algorithms within a statistical query framework, and show that noisy optimization of a wide variety of quantum models is impossible with a sub-exponential number of queries. Finally, we numerically confirm our results on a variety of problem instances. Though we exclude a wide variety of quantum algorithms here, we give reason for optimism for certain classes of variational algorithms and discuss potential ways forward in showing the practical utility of such algorithms.
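The core claim, that a landscape can be free of barren plateaus yet still have almost no near-optimal local minima, can be illustrated numerically. The sketch below is not the paper's model: it uses a toy nonconvex cost (a sum of random-phase cosines over a few parameters, loosely mimicking a variational quantum loss) and measures what fraction of random gradient-descent initializations end up within a fixed energy of the estimated global minimum. All names and constants here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a variational quantum loss: a sum of random-phase
# cosine terms over a few parameters. This is NOT the paper's model,
# only an illustration of a rugged, nonconvex landscape.
d = 6        # number of variational parameters (illustrative choice)
terms = 40   # number of cosine terms (illustrative choice)
W = rng.normal(size=(terms, d))
phi = rng.uniform(0, 2 * np.pi, size=terms)

def loss(theta):
    # Average of cos(w_k . theta + phi_k) over all terms.
    return np.cos(W @ theta + phi).sum() / terms

def grad(theta):
    # Analytic gradient of the loss above.
    return -(np.sin(W @ theta + phi) @ W) / terms

# Estimate the global minimum by sampling the landscape widely.
samples = rng.uniform(-np.pi, np.pi, size=(20000, d))
f_min = min(loss(s) for s in samples)

def descend(theta, steps=500, lr=0.2):
    # Plain gradient descent from a given initialization.
    for _ in range(steps):
        theta = theta - lr * grad(theta)
    return loss(theta)

# Fraction of random initializations that converge to within a fixed
# energy window eps of the (estimated) global minimum.
eps = 0.05
finals = [descend(rng.uniform(-np.pi, np.pi, size=d)) for _ in range(200)]
frac_good = np.mean([f <= f_min + eps for f in finals])
print(f"fraction of runs within {eps} of global minimum: {frac_good:.2f}")
```

Varying `d` and `terms` changes how rugged the toy landscape is; the point of the exercise is only that gradients can be well-behaved everywhere (no plateau) while most descent runs still terminate in suboptimal local minima.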
