Article

Risk optimization using the Chernoff bound and stochastic gradient descent

Journal

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.ress.2022.108512

Keywords

Risk optimization; Chernoff bound; Stochastic gradient descent; Stochastic optimization

Funding

  1. Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) [001]
  2. Conselho Nacional de Desenvolvimento Científico e Tecnológico - CNPq [307133/2020-6]

Abstract

This paper proposes a stochastic gradient-based method for the solution of Risk Optimization (RO) problems. The approach approximates the evaluation of the probability of failure by an expectation computed with the aid of the Chernoff bound. The resulting approximate problem is then solved with a Stochastic Gradient Descent (SGD) algorithm. Computational efficiency comes from the fact that the Chernoff bound avoids not only the direct computation of the failure probabilities during the optimization process, but also the computation of their gradients with respect to the design variables. Finally, to ensure the quality of the failure probability approximation, a procedure is proposed to iteratively adjust the Chernoff bound parameters during the optimization. Three numerical examples are presented to validate the methodology; the proposed approach converged to the optimal solution in all cases.
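A minimal illustration of the idea (an assumed toy setting, not the paper's implementation): with design variable d and random vector X, the failure probability P_f(d) = P[g(d, X) <= 0] is the expectation of the failure indicator, and the Chernoff bound replaces that indicator by a smooth exponential, P_f(d) <= E[exp(-m * g(d, X))] for any m > 0. Because the surrogate is the expectation of a function that is differentiable in d, a risk-optimization cost of the form "initial cost + failure cost * P_f" can be minimized with plain stochastic gradient descent over mini-batches of X. The Python sketch below assumes a scalar capacity d, a Gaussian demand X, the limit state g(d, X) = d - X, and a fixed Chernoff parameter m; all names and parameter values are illustrative assumptions.

import math
import numpy as np

# Toy risk-optimization setting (assumed for illustration, not from the paper):
# scalar design variable d (capacity), random demand X ~ Normal(mu, sigma),
# limit-state function g(d, X) = d - X, failure when g <= 0.
rng = np.random.default_rng(0)
mu, sigma = 10.0, 1.0           # demand distribution
c_init, c_fail = 1.0, 50.0      # initial cost per unit of d, cost of failure
m = 1.0                         # Chernoff parameter (kept fixed in this sketch)

def grad_sample(d, x):
    # Surrogate cost per sample: c_init * d + c_fail * exp(-m * (d - x)),
    # where exp(-m * g) upper-bounds the failure indicator 1{g <= 0}.
    # Gradient with respect to d: c_init - c_fail * m * exp(-m * (d - x)).
    return c_init - c_fail * m * np.exp(-m * (d - x))

# Plain SGD with a decaying step size, using mini-batches of demand samples.
d = 12.0
for it in range(2000):
    x = rng.normal(mu, sigma, size=32)
    d -= (0.1 / math.sqrt(1.0 + it)) * grad_sample(d, x).mean()

# Compare the Chernoff surrogate (an upper bound) with the exact tail probability.
p_chernoff = np.exp(-m * (d - rng.normal(mu, sigma, size=200_000))).mean()
p_exact = 0.5 * math.erfc((d - mu) / (sigma * math.sqrt(2.0)))
print(f"d = {d:.3f}, Chernoff surrogate = {p_chernoff:.4f}, exact P_f = {p_exact:.2e}")

With a fixed m the exponential surrogate is a conservative (upper-bound) estimate of the true failure probability, as the final print statement shows; this looseness is what motivates the paper's procedure for iteratively adjusting the Chernoff bound parameters during the optimization.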
