Journal
NEUROCOMPUTING
Volume 239, Pages 165-180
Publisher
ELSEVIER
DOI: 10.1016/j.neucom.2017.02.014
Keywords
Sparse representation model; Analysis dictionary learning; Block coordinate descent framework; Incoherence; Proximal operator
Funding
- Grants-in-Aid for Scientific Research [16K05281, 16K00335] Funding Source: KAKEN
In this study, we propose two analysis dictionary learning algorithms for sparse representation with the analysis model. The problem is formulated with the ℓ1-norm regularizer and two penalty terms on the analysis dictionary: the −log det(ΩᵀΩ) term and a coherence penalty. We employ a block coordinate descent framework, so that the overall problem is decomposed into a set of univariate subproblems, each minimized with respect to a single vector variable. Each subproblem is still nonsmooth, but it can be solved by a proximal operator, yielding a closed-form solution directly and explicitly. In particular, the coherence penalty, which excludes excessively similar or repeated dictionary atoms, is handled at the same time as the dictionary update, thereby reducing the complexity. Furthermore, one proposed algorithm introduces a scheme that updates a group of atoms at once, which lowers the complexity further. According to our analysis and simulation study, the main advantages of the proposed algorithms over state-of-the-art algorithms are higher dictionary recovery ratios, especially in the low-cosparsity case, and shorter running times to reach stable values of the dictionary recovery ratio and the recovered cosparsity. In addition, one proposed algorithm performs well in image denoising and in noise cancellation. (C) 2017 Elsevier B.V. All rights reserved.
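For the ℓ1-norm regularizer mentioned in the abstract, the proximal operator has a well-known closed form: elementwise soft thresholding. The sketch below illustrates this standard operator only; the function name and example values are illustrative and not taken from the paper, whose full dictionary-update subproblems also involve the log-det and coherence penalties.

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam * ||.||_1 applied elementwise:
    prox(v) = sign(v) * max(|v| - lam, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# One proximal step: entries with magnitude <= lam are zeroed,
# the rest are shrunk toward zero by lam.
v = np.array([3.0, -0.5, 1.5])
z = soft_threshold(v, 1.0)  # approximately [2.0, 0.0, 0.5]
```

This elementwise closed form is what makes ℓ1-regularized subproblems cheap to solve exactly inside a block coordinate descent loop, rather than requiring an inner iterative solver.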