Article

A Provably Convergent Scheme for Compressive Sensing Under Random Generative Priors

Journal

Publisher

SPRINGER BIRKHAUSER
DOI: 10.1007/s00041-021-09830-5

Keywords

Compressive sensing; Generative models; Convergence analysis; Gradient descent

Funding

  1. Fundamental Research Funds for the Central Universities [20720190060]
  2. National Natural Science Foundation of China [12001455]
  3. NSF CAREER Award [DMS-1848087]
  4. NSF [IIS-1816986, DMS-2022205]


Deep generative models offer low-dimensional parameterizations of image and signal manifolds for use in recovery algorithms, with sample complexity scaling linearly in the input dimensionality. Under the assumption of a sufficiently expansive neural-network generative model with Gaussian weights, a gradient descent algorithm is presented with recovery guarantees for compressive sensing under generative priors.
Deep generative modeling has led to new and state-of-the-art approaches for enforcing structural priors in a variety of inverse problems. In contrast to priors given by sparsity, deep models can provide direct low-dimensional parameterizations of the manifold of images or signals belonging to a particular natural class, allowing recovery algorithms to be posed in a low-dimensional space. This dimensionality may even be lower than the sparsity level of the same signals when viewed in a fixed basis. What is not known about these methods is whether there are computationally efficient algorithms whose sample complexity is optimal in the dimensionality of the representation given by the generative model. In this paper, we present such an algorithm and analysis. Under the assumption that the generative model is a neural network that is sufficiently expansive at each layer and has Gaussian weights, we provide a gradient descent scheme and prove that, for noisy compressive measurements of a signal in the range of the model, the algorithm converges to that signal, up to the noise level. The scaling of the sample complexity with respect to the input dimensionality of the generative prior is linear, and thus cannot be improved except for constants and factors of other variables. To the best of the authors' knowledge, this is the first recovery guarantee for compressive sensing under generative priors by a computationally efficient algorithm.
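The setting described in the abstract can be illustrated with a small numerical sketch: a two-layer expansive ReLU network G with Gaussian weights, a Gaussian measurement matrix A, and plain gradient descent on the least-squares loss f(z) = ½‖A G(z) − y‖². This is only a minimal illustration, not the paper's exact scheme (the paper's algorithm includes additional steps to escape a spurious stationary point); all dimensions and step sizes below are hypothetical choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not taken from the paper): latent dim k,
# hidden width n1, signal dim n, number of measurements m.
k, n1, n, m = 10, 50, 200, 60

# Expansive two-layer ReLU generator with i.i.d. Gaussian weights,
# G(z) = relu(W2 @ relu(W1 @ z)), and a Gaussian measurement matrix A.
W1 = rng.normal(size=(n1, k)) / np.sqrt(n1)
W2 = rng.normal(size=(n, n1)) / np.sqrt(n)
A = rng.normal(size=(m, n)) / np.sqrt(m)

def G(z):
    return np.maximum(W2 @ np.maximum(W1 @ z, 0.0), 0.0)

def grad(z, y):
    # Gradient of f(z) = 0.5 * ||A G(z) - y||^2 by the chain rule;
    # the ReLU derivative is the indicator of positive pre-activations.
    h1 = W1 @ z
    a1 = np.maximum(h1, 0.0)
    h2 = W2 @ a1
    a2 = np.maximum(h2, 0.0)
    r = A.T @ (A @ a2 - y)     # backprop through A
    r = (h2 > 0) * r           # through second ReLU
    r = W2.T @ r               # through W2
    r = (h1 > 0) * r           # through first ReLU
    return W1.T @ r            # through W1

# Ground-truth signal in the range of G; noiseless measurements here.
z_star = rng.normal(size=k)
y = A @ G(z_star)

# Plain gradient descent from a small random initialization.
z0 = 0.1 * rng.normal(size=k)
z = z0.copy()
for _ in range(2000):
    z = z - 0.1 * grad(z, y)
```

The measurement residual ‖A G(z) − y‖ decreases over the iterations; in the noisy case the paper's analysis guarantees convergence to the true signal up to the noise level, under the expansivity and Gaussian-weight assumptions stated above.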
