Article

A Provably Convergent Scheme for Compressive Sensing Under Random Generative Priors

Journal

Journal of Fourier Analysis and Applications

Publisher

Springer Birkhäuser
DOI: 10.1007/s00041-021-09830-5

Keywords

Compressive sensing; Generative models; Convergence analysis; Gradient descent

Funding

  1. Fundamental Research Funds for the Central Universities [20720190060]
  2. National Natural Science Foundation of China [12001455]
  3. NSF CAREER Award [DMS-1848087]
  4. NSF [IIS-1816986, DMS-2022205]


Summary

Deep generative models offer low-dimensional parameterizations of image and signal manifolds on which recovery algorithms can be posed, with sample complexity that scales linearly in the input dimensionality. Under the assumption of a sufficiently expansive neural network generative model with Gaussian weights, a gradient descent algorithm is presented with recovery guarantees for compressive sensing under generative priors.

Abstract

Deep generative modeling has led to new, state-of-the-art approaches for enforcing structural priors in a variety of inverse problems. In contrast to priors given by sparsity, deep models can provide direct low-dimensional parameterizations of the manifold of images or signals belonging to a particular natural class, allowing recovery algorithms to be posed in a low-dimensional space. This dimensionality may even be lower than the sparsity level of the same signals when viewed in a fixed basis. What has not been known about these methods is whether there are computationally efficient algorithms whose sample complexity is optimal in the dimensionality of the representation given by the generative model. In this paper, we present such an algorithm and analysis. Under the assumption that the generative model is a neural network that is sufficiently expansive at each layer and has Gaussian weights, we provide a gradient descent scheme and prove that, for noisy compressive measurements of a signal in the range of the model, the algorithm converges to that signal, up to the noise level. The sample complexity scales linearly in the input dimensionality of the generative prior and thus cannot be improved except in constants and in factors of other variables. To the best of the authors' knowledge, this is the first recovery guarantee for compressive sensing under generative priors achieved by a computationally efficient algorithm.
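The scheme described in the abstract optimizes over the latent space of the generator. Below is a minimal NumPy sketch of that setup, under stated assumptions: it builds a two-layer expansive ReLU network with i.i.d. Gaussian weights, takes noisy Gaussian compressive measurements of a signal in the network's range, and runs gradient descent on a least-squares misfit. The dimensions, step size, weight scaling, and the sign-flip heuristic are illustrative choices, not the paper's exact algorithm or constants.

```python
import numpy as np

# Minimal sketch (not the paper's exact algorithm): recover a latent code x
# from noisy compressive measurements y = A G(x*) + eta, where G is an
# expansive two-layer ReLU network with i.i.d. Gaussian weights.
# Dimensions, step size, and iteration count are illustrative assumptions.

rng = np.random.default_rng(0)

k, n1, n, m = 10, 250, 600, 200                   # latent, hidden, signal, measurement dims
W1 = rng.standard_normal((n1, k)) / np.sqrt(n1)   # N(0, 1/n1) entries
W2 = rng.standard_normal((n, n1)) / np.sqrt(n)    # N(0, 1/n) entries
A = rng.standard_normal((m, n)) / np.sqrt(m)      # Gaussian measurement matrix

relu = lambda z: np.maximum(z, 0.0)

def G(x):
    """Expansive ReLU generator with Gaussian weights."""
    return relu(W2 @ relu(W1 @ x))

def f(x, y):
    """Least-squares measurement misfit f(x) = 0.5 * ||A G(x) - y||^2."""
    r = A @ G(x) - y
    return 0.5 * (r @ r)

def grad_f(x, y):
    """Gradient of f; the ReLU derivative is taken as the 0/1 indicator
    of the active set (a subgradient at 0)."""
    h1 = W1 @ x
    h2 = W2 @ relu(h1)
    g = A.T @ (A @ relu(h2) - y)
    g = W2.T @ ((h2 > 0) * g)      # backprop through the outer layer
    return W1.T @ ((h1 > 0) * g)   # backprop through the inner layer

x_star = rng.standard_normal(k)
y = A @ G(x_star) + 0.01 * rng.standard_normal(m)  # noisy measurements

x = 0.1 * rng.standard_normal(k)   # random initialization
step = 0.5
for _ in range(2000):
    # Sign-flip heuristic: the empirical risk landscape for such networks is
    # known to have a spurious basin near a negative multiple of x*, so flip
    # the sign whenever -x has lower misfit.
    if f(-x, y) < f(x, y):
        x = -x
    x -= step * grad_f(x, y)

print("relative recovery error:",
      np.linalg.norm(G(x) - G(x_star)) / np.linalg.norm(G(x_star)))
```

In the paper's setting the network must be sufficiently expansive at each layer (mimicked here by n1 >> k and n >> n1) and the number of measurements scales linearly in the latent dimension k; the toy dimensions above are chosen only to imitate that regime.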

Authors

Wen Huang, Paul Hand, Reinhard Heckel, Vladislav Voroninski
