Article

Penalty Method for Constrained Distributed Quaternion-Variable Optimization

Journal

IEEE TRANSACTIONS ON CYBERNETICS
Volume 51, Issue 11, Pages 5631-5636

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TCYB.2020.3031687

Keywords

Quaternions; Optimization; Convex functions; Machine learning; Neurodynamics; Image color analysis; Cost function; Distributed optimization; Neural network; Nonsmooth analysis; Penalty method

Funding

  1. Natural Science Foundation of Zhejiang Province of China [LR20F030001, D19A010003]
  2. National Natural Science Foundation of China [11671361, 61573096, 61973078, 61833005]
  3. National Training Programs of Innovation and Entrepreneurship [201610345020]
  4. Natural Science Foundation of Jiangsu Province of China [BK20170019]
  5. Jiangsu Provincial Key Laboratory of Networked Collective Intelligence [BM2017002]


This article discusses constrained optimization problems in the quaternion domain. It presents differences in the generalized gradient between the real and quaternion domains, proposes an algorithm that transforms the constrained optimization problem into an unconstrained one, and guarantees convergence through a Lyapunov-based technique and nonsmooth analysis. The algorithm has potential for solving distributed neurodynamic optimization problems, and its efficiency is demonstrated with a numerical example involving machine learning.
This article studies constrained optimization problems in the quaternion domain in a distributed fashion. We begin by presenting some differences in the generalized gradient between the real and quaternion domains. Then, an algorithm for the considered optimization problem is given, by which the constrained optimization problem is transformed into an unconstrained one. Using tools from the Lyapunov-based technique and nonsmooth analysis, the convergence of the devised algorithm is guaranteed. In addition, the designed algorithm can be implemented as a recurrent neural network, which gives it potential for solving distributed neurodynamic optimization problems. Finally, a numerical example involving machine learning illustrates the efficiency of the obtained results.
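The core idea the abstract describes — folding the constraints into the cost function so the constrained problem becomes an unconstrained one — can be sketched with a scalar, real-valued exact-penalty example. This is only an illustration of the general penalty technique, not the paper's quaternion-valued distributed algorithm; the objective `(x - 3)^2`, the constraint `x <= 1`, and the penalty weight `rho` are assumptions chosen for the demo. The penalized cost `f(x) + rho * max(0, g(x))` is nonsmooth at the constraint boundary, which is why the paper's analysis relies on nonsmooth (generalized-gradient) tools.

```python
def penalty_objective(x, rho):
    """Exact penalty: f(x) + rho * max(0, g(x)) for g(x) <= 0."""
    f = (x - 3.0) ** 2          # assumed objective: minimize (x - 3)^2
    g = x - 1.0                  # assumed constraint: x <= 1, i.e. g(x) <= 0
    return f + rho * max(0.0, g)

def subgradient(x, rho):
    """A subgradient of the nonsmooth penalized cost."""
    grad = 2.0 * (x - 3.0)
    if x > 1.0:                  # penalty term is active only when infeasible
        grad += rho
    return grad

def solve(rho=10.0, lr=1e-3, steps=10_000, x0=5.0):
    """Subgradient descent on the penalized (unconstrained) problem."""
    x = x0
    for _ in range(steps):
        x -= lr * subgradient(x, rho)
    return x
```

With `rho` larger than the objective's gradient magnitude at the constrained minimizer (here `|f'(1)| = 4`), the exact penalty makes the unconstrained minimizer coincide with the constrained one at `x = 1`; the iterate settles in a small neighborhood of the boundary.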

Authors

