Article

Super-resolution guided knowledge distillation for low-resolution image classification

Journal

PATTERN RECOGNITION LETTERS
Volume 155, Pages 62-68

Publisher

ELSEVIER
DOI: 10.1016/j.patrec.2022.02.006

Keywords

Low-resolution image classification; Super-resolution; Knowledge distillation

Funding

  1. Beijing Natural Science Foundation [L211015, M22022]
  2. China Postdoctoral Science Foundation [2021M690339]
  3. National Natural Science Foundation of China [62106017, 61906013]
  4. Fundamental Research Funds for the Central Universities [2019JBZ104, 2019RC031, 2021RC266]


This paper proposes a Super-Resolution guided Knowledge Distillation (SRKD) framework to address the challenge of low-resolution image classification. By enhancing the features of low-resolution images and minimizing the difference between high-resolution and super-resolved image features, the method achieves significant improvements on low-resolution classification benchmarks.
With the development of deep convolutional neural networks, high-resolution image classification has achieved excellent results. However, low-resolution images are very common in natural scenes, such as images captured by a webcam or taken with the lens far from the target object. Low-resolution image classification is a difficult problem: low-resolution images are small and contain fewer discriminative features, which leads to a sharp decline in classification performance. To address this problem, this paper proposes a Super-Resolution guided Knowledge Distillation (SRKD) framework consisting of two sub-networks: a super-resolution sub-network that enhances the features of low-resolution images, and a knowledge distillation sub-network that minimizes the difference between the features of high-resolution images and the features of the images output by the super-resolution sub-network. Extensive experiments on the Pascal VOC 2007 and CUB-200-2011 datasets show that the proposed method yields a large improvement over the benchmark trained on high-resolution images. In particular, at very low resolutions, the proposed method improves mAP on the Pascal VOC 2007 test set by 30.4% and classification accuracy on the CUB-200-2011 test set by 60.37% compared with the benchmark model. (C) 2022 Elsevier B.V. All rights reserved.
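The core idea of the abstract above can be sketched as a combined training objective: a classification term on the super-resolved (student) branch plus a feature-matching term that pulls the super-resolved features toward the high-resolution (teacher) features. The sketch below is a minimal, hypothetical illustration in plain numpy; the function name `srkd_loss`, the weighting parameter `lam`, and the use of a mean-squared feature difference with cross-entropy are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def srkd_loss(f_hr, f_sr, logits_sr, labels, lam=1.0):
    """Hypothetical sketch of a super-resolution guided distillation loss.

    f_hr      -- features from the high-resolution (teacher) branch
    f_sr      -- features of the super-resolved low-resolution image (student)
    logits_sr -- classification logits from the student branch
    labels    -- integer class labels, shape (batch,)
    lam       -- assumed weight balancing distillation vs. classification
    """
    # Feature-matching (distillation) term: mean squared difference
    # between teacher and student features.
    distill = np.mean((f_hr - f_sr) ** 2)

    # Standard cross-entropy classification term on the student logits
    # (numerically stable log-softmax).
    shifted = logits_sr - logits_sr.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    ce = -np.mean(log_probs[np.arange(len(labels)), labels])

    return ce + lam * distill
```

When the student features match the teacher features exactly, the distillation term vanishes and only the classification loss remains; mismatched features increase the total loss, which is what drives the student toward the high-resolution representation during training.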

