Article

Nesterov-aided stochastic gradient methods using Laplace approximation for Bayesian design optimization

Publisher

ELSEVIER SCIENCE SA
DOI: 10.1016/j.cma.2020.112909

Keywords

Optimal experimental design; Bayesian inference; Laplace approximation; Stochastic optimization; Accelerated gradient descent; Importance sampling

Funding

  1. King Abdullah University of Science and Technology (KAUST), Saudi Arabia, KAUST CRG3 Award [2281]
  2. King Abdullah University of Science and Technology (KAUST), Saudi Arabia, KAUST CRG4 Award [2584]
  3. CNPq (National Council for Scientific and Technological Development), Brazil
  4. CAPES (Coordination for the Improvement of Higher Education Personnel), Brazil

Abstract

Finding the best setup for experiments is the primary concern of Optimal Experimental Design (OED). Here, we focus on the Bayesian experimental design problem of finding the setup that maximizes the Shannon expected information gain. We use stochastic gradient descent and its accelerated counterpart, which employs Nesterov's method, to solve the optimization problem in OED. We adapt a restart technique, originally proposed for acceleration in deterministic optimization, to improve stochastic optimization methods. We combine these optimization methods with three estimators of the objective function: the double-loop Monte Carlo estimator (DLMC), the Monte Carlo estimator using the Laplace approximation for the posterior distribution (MCLA), and the double-loop Monte Carlo estimator with Laplace-based importance sampling (DLMCIS). Using stochastic gradient methods and Laplace-based estimators together allows us to use expensive and complex models, such as those that require solving partial differential equations (PDEs). From a theoretical viewpoint, we derive an explicit formula to compute the gradient estimator of the Monte Carlo methods, including MCLA and DLMCIS. From a computational standpoint, we study four examples: three based on analytical functions and one using the finite element method. The last example is an electrical impedance tomography experiment based on the complete electrode model. In these examples, the accelerated stochastic gradient descent method using MCLA converges to local maxima with up to five orders of magnitude fewer model evaluations than gradient descent with DLMC. (C) 2020 Elsevier B.V. All rights reserved.
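The accelerated scheme with restart mentioned in the abstract can be sketched in a minimal form. The sketch below applies Nesterov's accelerated gradient with a gradient-based restart heuristic to a deterministic, ill-conditioned quadratic toy problem; it is an illustrative assumption, not the authors' code, which instead maximizes stochastic Monte Carlo estimates of the expected information gain over expensive PDE models.

```python
import numpy as np

def nesterov_restart(grad, x0, step, n_iter=300):
    """Nesterov's accelerated gradient with adaptive restart:
    reset the momentum whenever the new step opposes the gradient
    at the extrapolated point (a standard restart criterion)."""
    x = y = np.asarray(x0, dtype=float)
    t = 1.0  # momentum parameter
    for _ in range(n_iter):
        g = grad(y)
        x_new = y - step * g  # gradient step from the extrapolated point
        if np.dot(g, x_new - x) > 0:
            # Restart: momentum is no longer pointing downhill.
            t = 1.0
            y = x_new
        else:
            # Standard Nesterov momentum update and extrapolation.
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)
            t = t_new
        x = x_new
    return x

# Toy stand-in for the (negated) design objective: an ill-conditioned
# quadratic, minimized from a distant starting point.
A = np.diag([1.0, 100.0])
x_star = nesterov_restart(lambda x: A @ x, x0=[5.0, 5.0], step=1.0 / 100.0)
```

In the paper's setting, `grad` would be replaced by a stochastic gradient of an MCLA or DLMCIS estimator of the expected information gain, and the step size would follow a decreasing schedule suited to stochastic optimization.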
