Article

A stochastic alternating direction method of multipliers for non-smooth and non-convex optimization

Journal

INVERSE PROBLEMS
Volume 37, Issue 7, Pages: -

Publisher

IOP Publishing Ltd
DOI: 10.1088/1361-6420/ac0966

Keywords

non-convex optimization; stochastic ADMM; variance-reduced stochastic gradient

Funding

  1. National Key Research and Development Program [2017YFB0202902]
  2. National Natural Science Foundation of China [11771288, 12090024]
  3. Leverhulme Trust
  4. Newton Trust

Abstract

The alternating direction method of multipliers (ADMM) is a popular first-order method owing to its simplicity and efficiency. However, as with other proximal splitting methods, the performance of ADMM degrades significantly as the scale of the optimization problem grows. In this paper, we consider combining ADMM with a class of variance-reduced stochastic gradient estimators for solving large-scale non-convex and non-smooth optimization problems. Global convergence of the generated sequence is established under the additional assumption that the objective function satisfies the Kurdyka-Łojasiewicz property. Numerical experiments on graph-guided fused lasso and computed tomography are presented to demonstrate the performance of the proposed methods.
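
For orientation, the generic template behind such methods can be sketched as follows. The splitting form, the penalty parameter \beta, and the SVRG-style estimator below are illustrative assumptions, not necessarily the exact formulation adopted in the paper:

    \min_{x,y} \; f(x) + g(y) \quad \text{s.t.} \quad Ax + By = c, \qquad f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x),

with each f_i smooth (possibly non-convex) and g non-smooth. Deterministic ADMM alternately minimizes the augmented Lagrangian

    L_\beta(x, y, \lambda) = f(x) + g(y) + \langle \lambda, \, Ax + By - c \rangle + \frac{\beta}{2} \| Ax + By - c \|^2

over x and over y, followed by a dual ascent step on \lambda. A stochastic variant replaces the full gradient \nabla f(x_k) in the x-update with a variance-reduced estimator, for example the SVRG-type

    v_k = \frac{1}{|I_k|} \sum_{i \in I_k} \bigl( \nabla f_i(x_k) - \nabla f_i(\tilde{x}) \bigr) + \nabla f(\tilde{x}),

where I_k is a sampled mini-batch and \tilde{x} is a snapshot point refreshed periodically. Since \mathbb{E}[v_k] = \nabla f(x_k) and the estimator's variance shrinks as x_k and \tilde{x} approach one another, such estimators make convergence guarantees comparable to the deterministic method attainable at a per-iteration cost that does not scale with n.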
