Article

Lazy Queries Can Reduce Variance in Zeroth-Order Optimization

Journal

IEEE TRANSACTIONS ON SIGNAL PROCESSING
Volume 71, Pages 3695-3709

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TSP.2023.3316887

Keywords

Query efficiency; zeroth-order optimization; machine learning

Abstract

This paper proposes LAZO, a novel gradient estimation technique for zeroth-order (ZO) methods. By adaptively reusing old queries, LAZO constructs low-variance gradient estimates, saving queries per iteration and improving both regret and query complexity relative to existing ZO methods.
A major challenge in applying zeroth-order (ZO) methods is their high query complexity, especially when queries are costly. We propose a novel gradient estimation technique for ZO methods based on adaptive lazy queries, which we term LAZO. Unlike the classic one-point or two-point gradient estimation methods, LAZO uses two alternative tests to check the usefulness of old queries from previous iterations, and then adaptively reuses them to construct low-variance gradient estimates. We rigorously establish that, by judiciously reusing old queries, LAZO reduces the variance of the stochastic gradient estimates, so it not only saves queries per iteration but also matches the regret bound of the symmetric two-point method. We evaluate the numerical performance of LAZO and demonstrate its low-variance property and its gains in both regret and query complexity relative to several existing ZO methods. The idea of LAZO is general and can be applied to other variants of ZO methods.
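To make the abstract concrete, the sketch below illustrates the general idea in Python/NumPy. It is a hypothetical illustration, not the paper's LAZO algorithm: the function names (`two_point_grad`, `lazy_grad`), the cache structure, the threshold `tol`, and the reuse test (distance between the current point and the cached point) are assumptions made for exposition. The intended contrast is that the classic symmetric two-point estimator spends two fresh queries per iteration, while a lazy variant reuses a stored old query when it still looks useful and so often spends only one.

```python
import numpy as np

def two_point_grad(f, x, mu=1e-2, rng=None):
    """Classic symmetric two-point ZO estimate: 2 fresh queries per call."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u

def lazy_grad(f, x, cache, mu=1e-2, tol=1e-1, rng=None):
    """Hypothetical 'lazy' ZO estimate: reuse a stored old query when it still
    looks useful, spending only 1 fresh query; otherwise fall back to the
    2-query symmetric estimate. The reuse test below (distance between the
    current and cached points) is illustrative, not the paper's criterion."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    f_plus = f(x + mu * u)                      # one fresh query
    if cache is not None and np.linalg.norm(x - cache["x"]) <= tol:
        # Reuse the cached query value in place of a second fresh query.
        g = (f_plus - cache["f"]) / mu * u
        queries = 1
    else:
        f_minus = f(x - mu * u)                 # second fresh query
        g = (f_plus - f_minus) / (2.0 * mu) * u
        queries = 2
    new_cache = {"x": x.copy(), "f": f_plus}    # store the query for reuse
    return g, new_cache, queries

if __name__ == "__main__":
    # Toy usage on a quadratic; counts queries spent by the lazy scheme.
    f = lambda z: float(np.sum(z ** 2))
    rng = np.random.default_rng(0)
    x, cache, total_queries = np.ones(10), None, 0
    for _ in range(200):
        g, cache, q = lazy_grad(f, x, cache, rng=rng)
        total_queries += q
        x = x - 0.05 * g                        # plain ZO gradient step
    print(f"final f(x) = {f(x):.4f}, queries used = {total_queries}")
```

In this toy run the reuse branch fires whenever consecutive iterates are close, so the query count falls below the 2-per-iteration cost of the symmetric estimator; the actual LAZO criteria and its variance guarantees are developed in the paper.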

