Article

Deep Learning for Image Super-Resolution: A Survey

Journal

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2020.2982166

Keywords

Deep learning; Degradation; Benchmark testing; Measurement; Image super-resolution; Convolutional neural networks (CNN); Generative adversarial nets (GAN)

Funding

  1. Guangdong special branch plans young talent with scientific and technological innovation [2016TQ03X445]
  2. Guangzhou science and technology planning project [201904010197]
  3. Natural Science Foundation of Guangdong Province, China [2016A030313437]


Image super-resolution (SR) is an important class of image processing techniques for enhancing the resolution of images and videos in computer vision. Recent years have witnessed remarkable progress in image super-resolution using deep learning techniques. This article aims to provide a comprehensive survey of recent advances in image super-resolution from a deep learning perspective. In general, we can roughly group the existing studies of SR techniques into three major categories: supervised SR, unsupervised SR, and domain-specific SR. In addition, we also cover other important issues, such as publicly available benchmark datasets and performance evaluation metrics. Finally, we conclude this survey by highlighting several future directions and open issues that should be further addressed by the community.
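As an illustration of the performance evaluation metrics the abstract mentions, peak signal-to-noise ratio (PSNR) is the most widely reported quality measure in SR benchmarks. The following is a minimal NumPy sketch (the function name and signature are my own, not from the paper):

```python
import numpy as np

def psnr(reference, reconstructed, max_value=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images (higher is better).

    `max_value` is the maximum possible pixel intensity (255 for 8-bit images).
    """
    ref = reference.astype(np.float64)
    rec = reconstructed.astype(np.float64)
    mse = np.mean((ref - rec) ** 2)  # mean squared error over all pixels
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)
```

Because PSNR depends only on pixel-wise mean squared error, it can disagree with perceived visual quality; this is one reason the survey also discusses perceptual metrics such as SSIM.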

