Article

A subgradient-based neural network to constrained distributed convex optimization

Journal

NEURAL COMPUTING & APPLICATIONS
Volume 35, Issue 14, Pages 9961-9971

Publisher

SPRINGER LONDON LTD
DOI: 10.1007/s00521-022-07003-z

Keywords

Nonsmooth distributed optimization; Multi-agent network; Neural network; Convergence


With the development of artificial intelligence and big data, distributed optimization has shown great potential in machine learning. This paper proposes a novel neural network for cooperatively solving nonsmooth distributed optimization problems; its effectiveness and practicality are demonstrated through simulation results and a practical application.
As artificial intelligence and big data develop, distributed optimization shows great potential in machine learning research, particularly deep learning. An important instance of this class, the nonsmooth distributed optimization problem over an undirected multi-agent system with inequality and equality constraints, appears frequently in deep learning. To solve this problem cooperatively, a novel neural network with a lower-dimensional solution space is presented. It is demonstrated that the state solution of the proposed approach enters the feasible region, achieves consensus, and finally converges to the optimal solution set. Moreover, the proposed approach does not depend on the boundedness of the feasible region, which is a necessary assumption in some simplified neural networks. Finally, simulation results and a practical application are given to demonstrate its efficacy and practicality.
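The abstract describes continuous-time consensus and subgradient dynamics over an undirected multi-agent network. The sketch below is a minimal, generic illustration of that idea, not the paper's specific neural network: it assumes a five-agent ring graph, local nonsmooth costs f_i(x) = |x - a_i|, a simple nonnegativity constraint handled by projection, and Euler discretization; all names and parameters are illustrative assumptions.

import numpy as np

# Five agents on an undirected ring cooperatively minimize sum_i |x - a_i|
# subject to x >= 0, using consensus + subgradient dynamics (generic sketch).
N = 5
a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # local data; the minimizer is median(a) = 3

# Laplacian of the ring graph
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

def subgrad(x_i, a_i):
    # a subgradient of f_i(x) = |x - a_i|; np.sign(0) = 0 is valid at the kink
    return np.sign(x_i - a_i)

rng = np.random.default_rng(0)
x = rng.uniform(-5.0, 5.0, size=N)        # each agent's local state
dt = 0.01
for k in range(20000):
    alpha = 1.0 / (1.0 + 0.01 * k)        # diminishing subgradient weight
    g = np.array([subgrad(x[i], a[i]) for i in range(N)])
    # -L @ x drives agreement; the subgradient term drives optimality
    x = x + dt * (-L @ x - alpha * g)
    x = np.maximum(x, 0.0)                # projection onto the constraint x >= 0

print("agent states:", np.round(x, 3))    # states should cluster near median(a) = 3

Here the consensus term pushes the agents toward a common value while the diminishing subgradient term steers that value toward the minimizer; the projection step is only a stand-in for the general inequality and equality constraints discussed in the abstract.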

