Proceedings Paper

Space-Time Distillation for Video Super-Resolution

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR46437.2021.00215


Funding

  1. National Key R&D Program of China [2017YFA0700800]
  2. National Natural Science Foundation of China [61901433]
  3. USTC Research Funds of the Double First-Class Initiative [YD2100002003]


This paper proposes a knowledge distillation approach to transfer knowledge from complex VSR networks to compact ones, improving the performance of compact VSR networks. By utilizing spatial and temporal knowledge, the proposed method significantly reduces the performance gap between complex and compact models in VSR tasks.
Compact video super-resolution (VSR) networks can be easily deployed on resource-limited devices, e.g., smartphones and wearable devices, but have considerable performance gaps compared with complicated VSR networks that require a large amount of computing resources. In this paper, we aim to improve the performance of compact VSR networks without changing their original architectures, through a knowledge distillation approach that transfers knowledge from a complicated VSR network to a compact one. Specifically, we propose a space-time distillation (STD) scheme to exploit both spatial and temporal knowledge in the VSR task. For space distillation, we extract spatial attention maps that hint at the high-frequency video content from both networks, which are further used for transferring spatial modeling capabilities. For time distillation, we narrow the performance gap between compact models and complicated models by distilling the feature similarity of the temporal memory cells, which are encoded from the sequence of feature maps generated in the training clips using ConvLSTM. During the training process, STD can be easily incorporated into any network without changing the original network architecture. Experimental results on standard benchmarks demonstrate that, in resource-constrained situations, the proposed method notably improves the performance of existing VSR networks without increasing the inference time.
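The abstract describes two distillation signals: spatial attention maps that highlight high-frequency content, and the similarity of temporal memory cells across a training clip. A minimal NumPy sketch of both losses is given below. The exact attention-map formula is not stated in the abstract; the channel-wise sum of squared activations used here is a common choice in attention-based distillation and is an assumption, as are all function names.

```python
import numpy as np

def spatial_attention(feat):
    """Collapse a (C, H, W) feature map into an (H, W) attention map.

    Assumed formulation: channel-wise sum of squared activations,
    normalized to unit L2 norm (the paper's exact map may differ).
    """
    amap = (feat ** 2).sum(axis=0)
    return amap / (np.linalg.norm(amap) + 1e-12)

def space_distill_loss(student_feat, teacher_feat):
    """L2 distance between student and teacher spatial attention maps."""
    diff = spatial_attention(student_feat) - spatial_attention(teacher_feat)
    return float((diff ** 2).sum())

def time_distill_loss(student_cells, teacher_cells):
    """Match the pairwise feature similarity of temporal memory cells.

    Each input is (T, D): one flattened memory-cell vector per frame of
    the training clip (e.g., from a ConvLSTM). The loss compares the
    T x T cosine-similarity matrices of student and teacher.
    """
    def sim_matrix(cells):
        cells = cells / (np.linalg.norm(cells, axis=1, keepdims=True) + 1e-12)
        return cells @ cells.T
    return float(np.abs(sim_matrix(student_cells) - sim_matrix(teacher_cells)).mean())
```

Both losses are zero when student and teacher features agree, and can be added to the compact network's reconstruction loss during training without altering its architecture or inference cost.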

