Article

Restricted subgradient descend method for sparse signal learning

Journal

International Journal of Machine Learning and Cybernetics

Publisher

Springer Heidelberg
DOI: 10.1007/s13042-022-01551-5

Keywords

Subgradient descend; Sparse signal learning; Subspace projection; Norm

Funding

  1. National Natural Science Foundation of China [61703283]
  2. Laboratory for Artificial Intelligence in Design [RP3-3]
  3. Innovation and Technology Fund, Hong Kong SAR
  4. Guangdong Basic and Applied Basic Research Foundation [2021A1515011318, 2017A030310067]
  5. Shenzhen Municipal Science and Technology Innovation Council [JCYJ20190808113411274]
  6. Shenzhen Visual Object Detection and Recognition Key Laboratory Open Project [HITSZ20220287]
  7. Overseas High-Caliber Professional in Shenzhen [20190629729C]
  8. High-Level Professional in Shenzhen [20190716892H]
  9. Research Foundation for Postdoctoral Researchers Working in Shenzhen [707-0001300148, 707-0001310414]
  10. National Engineering Laboratory for Big Data System Computing Technology
  11. Guangdong Laboratory of Artificial-Intelligence and Cyber-Economics (SZ)
  12. Shenzhen Institute of Artificial Intelligence and Robotics for Society
  13. Scientific Research Foundation of Shenzhen University [2019049, 860-000002110328, 827-000526]

Abstract

This paper proposes a novel method, called restricted subgradient descend, for learning sparse signals. The method recovers high-quality sparse solutions exactly within a finite number of iteration steps.
Sparse signal learning is essentially a sparse-solution optimization problem and is especially applicable to signal recovery, e.g., image reconstruction. Such a problem can be solved by gradient or subgradient descent methods. However, conventional methods normally need to introduce an extra quadratic term, which yields a more complex objective function whose solution costs many iteration steps. To address this problem, this paper proposes a novel method, called restricted subgradient descend, to learn sparse signals. The idea rests on the fact that a subgradient of the 1-norm exists at every n-dimensional point, and that the function is even differentiable at points with no zero coordinate components. Thus, to decrease the 1-norm objective, the gradient or subgradient direction can be used to search for the next estimate, which enables the proposed method to learn a high-quality sparse solution with fast convergence. Specifically, two algorithms are proposed: the first uses only a restricted subspace projection scheme, while the refined one is based on an improved version of the pivot step of the simplex algorithm. It is proved that the refined algorithm learns the source sparse signal exactly in a finite number of iteration steps whenever the subgradient condition is satisfied. This theoretical result is verified by numerical simulations, which show good experimental results compared with other state-of-the-art sparse signal learning algorithms.
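For orientation, below is a minimal Python/NumPy sketch of a generic projected subgradient iteration for the basis-pursuit problem min ||x||_1 subject to Ax = b, the standard formulation behind the description above. It is an illustration under stated assumptions, not the authors' algorithm: the function name projected_subgradient_l1, the diminishing step size, and the pseudoinverse-based projection are choices of this sketch, whereas the paper replaces the plain projection with a restricted subspace projection and a refined simplex-style pivot step.

import numpy as np

def projected_subgradient_l1(A, b, n_iters=2000, step0=1.0):
    """Sketch: projected subgradient for min ||x||_1 s.t. Ax = b.

    Illustrative only -- not the paper's restricted subgradient descend
    method, which uses a restricted subspace projection / pivot step.
    """
    A_pinv = np.linalg.pinv(A)             # used to project onto {x : Ax = b}
    x = A_pinv @ b                         # minimum-norm feasible starting point
    best_x, best_val = x.copy(), np.abs(x).sum()
    for k in range(1, n_iters + 1):
        g = np.sign(x)                     # one valid subgradient of ||x||_1 (picks 0 at zeros)
        x = x - (step0 / np.sqrt(k)) * g   # classic diminishing step size
        x = x - A_pinv @ (A @ x - b)       # project back onto the affine constraint set
        val = np.abs(x).sum()
        if val < best_val:                 # subgradient steps need not decrease the objective
            best_x, best_val = x.copy(), val
    return best_x

# Tiny demo: recover a 3-sparse signal from 25 random Gaussian measurements.
rng = np.random.default_rng(0)
n, m, s = 50, 25, 3
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
b = A @ x_true
x_hat = projected_subgradient_l1(A, b)
print("recovery error:", np.linalg.norm(x_hat - x_true))

The sign vector used here is one valid subgradient of the 1-norm (any value in [-1, 1] may be chosen at zero coordinates), and the best iterate is tracked because plain subgradient steps do not monotonically decrease the objective; the paper's contribution is reaching the exact sparse solution in finitely many steps rather than only asymptotically.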
