Article

Penalty Dual Decomposition Method for Nonsmooth Nonconvex Optimization-Part I: Algorithms and Convergence Analysis

Journal

IEEE TRANSACTIONS ON SIGNAL PROCESSING
Volume 68, Issue -, Pages 4108-4122

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TSP.2020.3001906

Keywords

Signal processing algorithms; Optimization; Couplings; Relays; Convergence; Minimization; Signal processing; Penalty method; dual decomposition; BSUM; KKT; augmented Lagrangian; nonconvex optimization

Funding

  1. National Key Research and Development Project [2017YFE0119300]
  2. NSFC [61671411, 61731018, U1709219]
  3. National Science Foundation [CIF-1910385, CMMI-172775]
  4. Army Research Office [W911NF-19-1-0247]

Abstract

Many contemporary signal processing, machine learning and wireless communication applications can be formulated as nonconvex nonsmooth optimization problems. Often there is a lack of efficient algorithms for these problems, especially when the optimization variables are nonlinearly coupled in some nonconvex constraints. In this work, we propose an algorithm named penalty dual decomposition (PDD) for these difficult problems and discuss its various applications. The PDD is a double-loop iterative algorithm. Its inner iteration is used to inexactly solve a nonconvex nonsmooth augmented Lagrangian problem via block-coordinate-descent-type methods, while its outer iteration updates the dual variables and/or a penalty parameter. In Part I of this work, we describe the PDD algorithm and establish its convergence to KKT solutions. In Part II we evaluate the performance of PDD by customizing it to three applications arising from signal processing and wireless communications.
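The abstract describes PDD's double-loop structure: an inner loop that inexactly minimizes an augmented Lagrangian by block-coordinate (BSUM-type) updates, and an outer loop that either updates the dual variables or tightens the penalty parameter depending on how much the constraint violation has decreased. The sketch below illustrates that structure on a toy problem with a bilinear, nonconvexly coupled equality constraint. The toy objective, the closed-form block updates, the 0.9 sufficient-decrease factor, and the shrink factor are illustrative assumptions, not the paper's exact algorithm or parameter schedule.

```python
# Illustrative double-loop sketch in the spirit of PDD (assumed toy problem
# and parameter choices, not the paper's exact method):
#     minimize  x1^2 + x2^2   subject to  h(x) = x1*x2 - 1 = 0,
# where the equality constraint couples x1 and x2 nonconvexly.
# Augmented Lagrangian:  L(x) = f(x) + lam*h(x) + h(x)^2 / (2*rho).

def pdd_toy(x1=2.0, x2=2.0, lam=0.0, rho=1.0, shrink=0.6,
            outer_iters=60, inner_iters=30, tol=1e-8):
    h = lambda a, b: a * b - 1.0                 # coupling constraint
    prev_viol = abs(h(x1, x2))
    for _ in range(outer_iters):
        # Inner loop: block-coordinate descent on the augmented Lagrangian.
        # With the other block fixed, each subproblem here is a 1-D strongly
        # convex quadratic, so the block update has a closed form.
        for _ in range(inner_iters):
            x1 = (x2 / rho - lam * x2) / (2.0 + x2 * x2 / rho)
            x2 = (x1 / rho - lam * x1) / (2.0 + x1 * x1 / rho)
        viol = abs(h(x1, x2))
        if viol <= tol:                          # (approximately) feasible: stop
            break
        if viol <= 0.9 * prev_viol:              # enough progress on feasibility:
            lam += h(x1, x2) / rho               #   dual (multiplier) update
        else:                                    # otherwise tighten the penalty
            rho *= shrink                        #   (smaller rho = heavier penalty)
        prev_viol = viol
    return x1, x2, lam, rho


if __name__ == "__main__":
    x1, x2, lam, rho = pdd_toy()
    print(f"x = ({x1:.4f}, {x2:.4f}), lam = {lam:.4f}, "
          f"constraint residual = {x1 * x2 - 1:.2e}")
```

On this toy instance the iterates approach x1 = x2 = 1 with multiplier lam = -2, which satisfies the KKT conditions of the constrained problem. In the paper itself the inner augmented Lagrangian problem is nonconvex and nonsmooth and is only solved inexactly by BSUM-type block updates rather than in closed form.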

