Article

Quasi-Monte Carlo Quasi-Newton in Variational Bayes

Journal

JOURNAL OF MACHINE LEARNING RESEARCH
Volume 22

Publisher

MICROTOME PUBL

Keywords

Quasi-Monte Carlo; quasi-Newton; L-BFGS; numerical optimization; variational Bayes

Funding

  1. National Science Foundation [IIS-1837931]


This study demonstrates that randomized quasi-Monte Carlo (RQMC) sampling can improve stochastic optimization in machine learning, especially in ill-conditioned settings where second-order methods such as L-BFGS outperform first-order ones. Because RQMC sampling attains a lower root mean squared error than plain Monte Carlo, it yields correspondingly better optimization results.
Many machine learning problems optimize an objective that can only be measured with noise. The most common approach is first-order stochastic gradient descent using one or more Monte Carlo (MC) samples at each step. There are settings where ill-conditioning makes second-order methods such as limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) more effective. We study the use of randomized quasi-Monte Carlo (RQMC) sampling for such problems. Where MC sampling has a root mean squared error (RMSE) of O(n^{-1/2}), RQMC has an RMSE of o(n^{-1/2}) that can be close to O(n^{-3/2}) in favorable settings. We prove that improved sampling accuracy translates directly into improved optimization. In our empirical investigations for variational Bayes, using RQMC with a stochastic quasi-Newton method greatly speeds up the optimization, and sometimes finds a better parameter value than MC does.
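The RMSE gap between MC and RQMC described above can be checked numerically. The sketch below is an illustration, not code from the paper: it estimates the integral of x^2 over [0,1] (true value 1/3) with plain MC draws and with scrambled Sobol' points from SciPy's `scipy.stats.qmc` module, and compares the RMSE over repeated randomizations. The integrand and sample sizes are arbitrary choices for demonstration.

```python
# Illustrative comparison of MC vs. RQMC (scrambled Sobol') RMSE.
# Assumes SciPy >= 1.7 for the scipy.stats.qmc module.
import numpy as np
from scipy.stats import qmc


def rmse(f, true_value, n, reps=50, use_rqmc=False, seed=0):
    """RMSE of the sample-mean estimate of E[f(U)], U ~ Uniform(0,1)."""
    rng = np.random.default_rng(seed)
    errors = []
    for r in range(reps):
        if use_rqmc:
            # Fresh scrambling each replicate gives an unbiased randomization.
            x = qmc.Sobol(d=1, scramble=True, seed=seed + r).random(n).ravel()
        else:
            x = rng.random(n)
        errors.append(f(x).mean() - true_value)
    return float(np.sqrt(np.mean(np.square(errors))))


f = lambda x: x ** 2          # smooth integrand, favorable for RQMC
n = 1024                      # power of 2, natural for Sobol' points
rmse_mc = rmse(f, 1 / 3, n)
rmse_rqmc = rmse(f, 1 / 3, n, use_rqmc=True)
# For a smooth integrand, the RQMC RMSE is far below the MC RMSE at the
# same n, consistent with the O(n^{-1/2}) vs. near-O(n^{-3/2}) rates.
```

Scrambling (rather than unrandomized QMC) matters here: it keeps the estimator unbiased and allows the RMSE to be measured empirically across independent randomizations, which is the same reason the paper uses randomized QMC inside a stochastic optimizer.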
