Article

ON THE ASYMPTOTIC LINEAR CONVERGENCE SPEED OF ANDERSON ACCELERATION, NESTEROV ACCELERATION, AND NONLINEAR GMRES

Journal

SIAM JOURNAL ON SCIENTIFIC COMPUTING
Volume 43, Issue 5, Pages S21-S46

Publisher

SIAM PUBLICATIONS
DOI: 10.1137/20M1347139

Keywords

Anderson acceleration; Nesterov acceleration; nonlinear GMRES; asymptotic convergence; canonical tensor decomposition; alternating least squares

Funding

  1. NSERC of Canada [RGPIN-2019-04155]

Abstract

This study examines nonlinear convergence acceleration methods for fixed-point iteration, such as Anderson acceleration and nonlinear GMRES, which often substantially improve asymptotic convergence behavior. Since a theoretical quantification of this improvement has been lacking, the improvement is analyzed under simplified conditions: optimized coefficients are derived for stationary variants, GMRES convergence bounds are applied to the linearized iteration, and the resulting estimates are validated numerically.
We consider nonlinear convergence acceleration methods for fixed-point iteration x(k+1) = q(x(k)), including Anderson acceleration (AA), nonlinear GMRES (NGMRES), and Nesterov-type acceleration (corresponding to AA with window size one). We focus on fixed-point methods that converge asymptotically linearly with convergence factor rho < 1 and that solve an underlying fully smooth and nonconvex optimization problem. It is often observed that AA and NGMRES substantially improve the asymptotic convergence behavior of the fixed-point iteration, but this improvement has not been quantified theoretically. We investigate this problem under simplified conditions. First, we consider stationary versions of AA and NGMRES, and determine coefficients that result in optimal asymptotic convergence factors, given knowledge of the spectrum of q'(x) at the fixed point x*. This allows us to understand and quantify the asymptotic convergence improvement that can be provided by nonlinear convergence acceleration, viewing x(k+1) = q(x(k)) as a nonlinear preconditioner for AA and NGMRES. Second, for the case of infinite window size, we consider linear asymptotic convergence bounds for GMRES applied to the fixed-point iteration linearized about x*. Since AA and NGMRES are equivalent to GMRES in the linear case, one may expect the GMRES convergence factors to be relevant for AA and NGMRES as x(k) -> x*. Our results are illustrated numerically for a class of test problems from canonical tensor decomposition, comparing steepest descent and alternating least squares as the fixed-point iterations that are accelerated by AA and NGMRES. Our numerical tests show that both approaches allow us to estimate asymptotic convergence speed for nonstationary AA and NGMRES with finite window size.
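For readers who want to experiment with the accelerated fixed-point iteration x(k+1) = q(x(k)) described above, the following minimal Python sketch implements the standard least-squares ("type II") form of Anderson acceleration with window size m. It is a generic illustration under our own naming and defaults (anderson_acceleration, m, max_iter, tol, and the linear test map at the end are assumptions for the example, not taken from the paper), and it is not the stationary or coefficient-optimized variants the paper analyzes.

```python
import numpy as np


def anderson_acceleration(q, x0, m=3, max_iter=200, tol=1e-10):
    """Anderson acceleration with window size m for x_{k+1} = q(x_k).

    Generic least-squares ("type II") formulation; a sketch only, not the
    stationary or coefficient-optimized variants analyzed in the paper.
    """
    x = np.asarray(x0, dtype=float)
    qx = q(x)
    X_hist, Q_hist = [x], [qx]               # recent iterates and their images under q
    for _ in range(max_iter):
        r = qx - x                           # residual r_k = q(x_k) - x_k
        if np.linalg.norm(r) < tol:
            break
        if len(X_hist) == 1:
            x_new = qx                       # first step: plain fixed-point update
        else:
            R = np.column_stack([qi - xi for xi, qi in zip(X_hist, Q_hist)])
            dR = np.diff(R, axis=1)          # successive differences of residuals
            dQ = np.diff(np.column_stack(Q_hist), axis=1)
            # coefficients gamma minimizing || r_k - dR @ gamma ||_2
            gamma, *_ = np.linalg.lstsq(dR, r, rcond=None)
            x_new = qx - dQ @ gamma          # accelerated iterate
        x, qx = x_new, q(x_new)
        X_hist.append(x)
        Q_hist.append(qx)
        X_hist, Q_hist = X_hist[-(m + 1):], Q_hist[-(m + 1):]  # keep at most m+1 entries
    return x


# Example: accelerate a linear contraction q(x) = A x + b whose spectral radius
# is close to 0.95, so the plain iteration converges linearly with factor rho < 1.
rng = np.random.default_rng(0)
n = 50
A = 0.9 * np.eye(n) + 0.05 * rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)
x_fix = anderson_acceleration(lambda x: A @ x + b, np.zeros(n), m=3)
print(np.linalg.norm(A @ x_fix + b - x_fix))  # fixed-point residual
```

With m = 1 this update corresponds to the Nesterov-type acceleration mentioned in the abstract (AA with window size one); in the linear case with an unbounded window, AA and NGMRES are equivalent to GMRES, which is why the paper's GMRES convergence bounds are expected to be relevant as x(k) -> x*.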
