Article

Quasi-Monte Carlo Quasi-Newton in Variational Bayes

Journal

Publisher

MICROTOME PUBL

Keywords

Quasi-Monte Carlo; quasi-Newton; L-BFGS; numerical optimization; variational Bayes

Funding

  1. National Science Foundation [IIS-1837931]

This study demonstrates that randomized quasi-Monte Carlo (RQMC) sampling can improve optimization in machine learning problems, especially in ill-conditioned settings where second-order methods such as L-BFGS outperform first-order stochastic gradient descent. When the integrand is smooth enough that RQMC attains a lower root mean squared error than plain Monte Carlo, that sampling gain carries over directly into better optimization results.
Many machine learning problems optimize an objective that must be measured with noise. The primary method is first-order stochastic gradient descent using one or more Monte Carlo (MC) samples at each step. There are settings where ill-conditioning makes second-order methods such as limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) more effective. We study the use of randomized quasi-Monte Carlo (RQMC) sampling for such problems. When MC sampling has a root mean squared error (RMSE) of O(n^{-1/2}), RQMC has an RMSE of o(n^{-1/2}) that can be close to O(n^{-3/2}) in favorable settings. We prove that this improved sampling accuracy translates directly into improved optimization. In our empirical investigations for variational Bayes, using RQMC with a stochastic quasi-Newton method greatly speeds up the optimization, and sometimes finds a better parameter value than MC does.
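The RMSE gap described above can be illustrated numerically. The sketch below (an illustration using SciPy's scrambled Sobol' generator, not the paper's implementation) compares plain MC against RQMC for estimating a Gaussian expectation E[f(Z)] with Z ~ N(0, I_d), the kind of integral that appears inside variational-Bayes gradient estimates; the integrand f is a hypothetical smooth example chosen so the true value is known in closed form.

```python
import numpy as np
from scipy.stats import norm, qmc

def f(z):
    # smooth integrand; for Z ~ N(0, I_d), E[exp(mean(Z))] = exp(1/(2d))
    return np.exp(z.sum(axis=1) / z.shape[1])

d, n, reps = 4, 256, 50          # n is a power of 2, as Sobol' points prefer
true_val = np.exp(1.0 / (2 * d))

# Plain MC: iid standard normal draws
rng = np.random.default_rng(0)
mc_est = np.array([f(rng.standard_normal((n, d))).mean() for _ in range(reps)])

# RQMC: scrambled Sobol' points in [0,1)^d, mapped to normals via the inverse CDF
rqmc_est = np.array([
    f(norm.ppf(qmc.Sobol(d, scramble=True, seed=s).random(n))).mean()
    for s in range(reps)
])

def rmse(est):
    return np.sqrt(np.mean((est - true_val) ** 2))

print(f"MC   RMSE: {rmse(mc_est):.2e}")
print(f"RQMC RMSE: {rmse(rqmc_est):.2e}")
```

For a smooth integrand like this, the RQMC RMSE is typically far below the MC RMSE at the same sample size n, consistent with the o(n^{-1/2}) versus O(n^{-1/2}) rates quoted in the abstract.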
