3.8 Proceedings Paper

Light Field Super-Resolution with Zero-Shot Learning

Publisher

IEEE Computer Society
DOI: 10.1109/CVPR46437.2021.00988

Funding

  1. National Key R&D Program of China [2017YFA0700800]
  2. Natural Science Foundation of China [U19B2038]

Abstract

A zero-shot learning framework is proposed to address the domain gap in light field super-resolution caused by different acquisition conditions, with simple and efficient CNNs tackling three sub-tasks separately. The new method outperforms classic non-learning methods and shows better generalization to unseen light fields when the domain gap is large.

Deep learning provides a new avenue for light field super-resolution (SR). However, the domain gap caused by drastically different light field acquisition conditions poses a main obstacle in practice. To fill this gap, we propose a zero-shot learning framework for light field SR, which learns a mapping to super-resolve the reference view with examples extracted solely from the input low-resolution light field itself. Given highly limited training data under the zero-shot setting, however, we observe that it is difficult to train an end-to-end network successfully. Instead, we divide this challenging task into three sub-tasks, i.e., pre-upsampling, view alignment, and multi-view aggregation, and then conquer them separately with simple yet efficient CNNs. Moreover, the proposed framework can be readily extended to finetune the pre-trained model on a source dataset to better adapt to the target input, which further boosts the performance of light field SR in the wild. Experimental results validate that our method not only outperforms classic non-learning-based methods, but also generalizes better to unseen light fields than state-of-the-art deep-learning-based methods when the domain gap is large.
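
Since the abstract spells out a three-stage decomposition (pre-upsampling, view alignment, multi-view aggregation), the PyTorch sketch below illustrates how such a pipeline might be wired together. All names (SmallCNN, ZeroShotLFSR, flow_net, etc.), layer sizes, and the backward-warping alignment are illustrative assumptions made for this sketch, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallCNN(nn.Module):
    """A few plain conv layers, standing in for the 'simple yet efficient CNNs'."""
    def __init__(self, in_ch, out_ch, feat=32, depth=4):
        super().__init__()
        layers = [nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(feat, out_ch, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)


class ZeroShotLFSR(nn.Module):
    """Hypothetical three-stage pipeline: pre-upsampling, view alignment,
    multi-view aggregation. Input lf: (B, V, 1, h, w) grayscale views;
    output: super-resolved reference (central) view (B, 1, h*scale, w*scale)."""
    def __init__(self, num_views, scale=2):
        super().__init__()
        self.scale = scale
        self.upsample_net = SmallCNN(1, 1)           # stage 1: refine bicubic upsampling
        self.flow_net = SmallCNN(2, 2)               # stage 2: per-view offset field
        self.aggregate_net = SmallCNN(num_views, 1)  # stage 3: fuse aligned views

    def forward(self, lf):
        b, v, c, h, w = lf.shape
        ref_idx = v // 2
        # Stage 1: bicubic pre-upsampling of every view, refined by a small CNN.
        views = lf.reshape(b * v, c, h, w)
        up = F.interpolate(views, scale_factor=self.scale, mode="bicubic",
                           align_corners=False)
        up = up + self.upsample_net(up)
        up = up.reshape(b, v, c, h * self.scale, w * self.scale)
        ref = up[:, ref_idx]
        # Stage 2: estimate a dense offset for each side view and warp it
        # toward the reference view (backward warping).
        aligned = []
        for i in range(v):
            if i == ref_idx:
                aligned.append(ref)
            else:
                flow = self.flow_net(torch.cat([up[:, i], ref], dim=1))
                aligned.append(self._warp(up[:, i], flow))
        stack = torch.cat(aligned, dim=1)            # (B, V, H, W)
        # Stage 3: aggregate the aligned views into a residual on the reference view.
        return ref + self.aggregate_net(stack)

    @staticmethod
    def _warp(img, flow):
        # Backward-warp img with a per-pixel offset field via a normalized grid.
        b, _, hh, ww = img.shape
        ys, xs = torch.meshgrid(
            torch.arange(hh, device=img.device, dtype=img.dtype),
            torch.arange(ww, device=img.device, dtype=img.dtype),
            indexing="ij")
        base = torch.stack((xs, ys), dim=-1).unsqueeze(0)    # (1, H, W, 2) in pixels
        grid = base + flow.permute(0, 2, 3, 1)               # add predicted offsets
        gx = 2.0 * grid[..., 0] / max(ww - 1, 1) - 1.0       # normalize to [-1, 1]
        gy = 2.0 * grid[..., 1] / max(hh - 1, 1) - 1.0
        return F.grid_sample(img, torch.stack((gx, gy), dim=-1), align_corners=True)
```

Under the zero-shot setting described in the abstract, a model of this kind would be trained only on example pairs extracted from the input light field itself (for instance, by further downscaling it to form low- and high-resolution pairs), or, per the extension the abstract mentions, initialized from a model pre-trained on a source dataset and fine-tuned on the target input.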
