Article

Sparse Bayesian dictionary learning with a Gaussian hierarchical model

Journal

SIGNAL PROCESSING
Volume 130, Pages 93-104

Publisher

ELSEVIER SCIENCE BV
DOI: 10.1016/j.sigpro.2016.06.016

Keywords

Dictionary learning; Gaussian-inverse Gamma prior; Variational Bayesian; Gibbs sampling

Funding

  1. National Science Foundation of China [61428103, 61522104]
  2. National Science Foundation, Directorate for Engineering, Division of Electrical, Communications & Cyber Systems [ECCS-1408182]

Abstract

We consider a dictionary learning problem aimed at designing a dictionary such that the signals admit a sparse or approximately sparse representation over the learnt dictionary. The problem finds applications including image denoising and feature extraction. In this paper, we propose a new hierarchical Bayesian model for dictionary learning, in which a Gaussian-inverse Gamma hierarchical prior is used to promote the sparsity of the representation. Suitable non-informative priors are also placed on the dictionary and the noise variance such that they can be reliably estimated from the data. Based on the hierarchical model, a variational Bayesian method and a Gibbs sampling method are developed for Bayesian inference. The proposed methods have the advantage that they do not require knowledge of the noise variance a priori. Numerical results show that the proposed methods learn the dictionary more accurately than existing methods, particularly when the number of training signals is limited. (C) 2016 Elsevier B.V. All rights reserved.
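To illustrate the kind of model the abstract describes, the sketch below implements a generic sparse Bayesian dictionary learning loop in NumPy: each coefficient has a zero-mean Gaussian prior whose variance carries an inverse-Gamma hyperprior (equivalently, a Gamma prior on the precision), the coefficient posteriors and hyperparameters are updated with variational-Bayes-style (type-II) iterations, the noise precision is estimated from the residual rather than assumed known, and the dictionary is refit by regularized least squares. This is not the paper's exact algorithm; the function name, hyperparameters `a`, `b`, and the simplified noise update are illustrative assumptions.

```python
import numpy as np

def sbl_dictionary_learning(Y, K, n_iter=30, a=1e-6, b=1e-6, seed=0):
    """Sketch of sparse Bayesian dictionary learning (illustrative, not the paper's exact updates).

    Model: y_j = D x_j + noise, with x_jk ~ N(0, gamma_jk) and an
    inverse-Gamma hyperprior on each variance gamma_jk, i.e. a
    Gamma(a, b) prior on the precision alpha_jk.
    """
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    D = rng.standard_normal((n, K))
    D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
    alpha = np.ones((K, m))                 # per-coefficient precisions
    beta = 1.0                              # noise precision (estimated, not assumed known)
    X = np.zeros((K, m))
    for _ in range(n_iter):
        DtD = D.T @ D
        for j in range(m):
            # Gaussian posterior of the sparse code for signal j
            Sigma = np.linalg.inv(beta * DtD + np.diag(alpha[:, j]))
            mu = beta * Sigma @ D.T @ Y[:, j]
            X[:, j] = mu
            # hyperparameter update from Gaussian-inverse-Gamma conjugacy;
            # small posterior means drive alpha -> large, pruning coefficients
            alpha[:, j] = (a + 0.5) / (b + 0.5 * (mu**2 + np.diag(Sigma)))
        # noise precision from the residual (simplified: trace terms dropped)
        resid = Y - D @ X
        beta = (n * m) / (np.sum(resid**2) + 1e-12)
        # regularized least-squares dictionary update, then renormalize atoms
        D = Y @ X.T @ np.linalg.inv(X @ X.T + 1e-8 * np.eye(K))
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D, X
```

The automatic pruning comes from the alpha update: a coefficient whose posterior mean and variance are both small receives a large precision, which in turn shrinks it further at the next iteration, yielding sparse codes without a hand-tuned sparsity penalty.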
