Article

Light Field Spatial Super-Resolution Using Deep Efficient Spatial-Angular Separable Convolution

Journal

IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 28, Issue 5, Pages 2319-2330

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIP.2018.2885236

Keywords

Light field; super-resolution; convolutional neural networks

Funding

  1. CityU Start-up Grant for New Faculty [7200537/CS]
  2. Hong Kong RGC Early Career Scheme Funds [9048123 (CityU 21211518)]
  3. Natural Science Foundation of China [61873142]

Abstract

Light field (LF) photography is an emerging paradigm for capturing more immersive representations of the real world. However, owing to the inherent trade-off between the angular and spatial dimensions, the spatial resolution of LF images captured by commercial micro-lens-based LF cameras is significantly constrained. In this paper, we propose effective and efficient end-to-end convolutional neural network models for spatially super-resolving LF images. Specifically, the proposed models have an hourglass shape, which allows feature extraction to be performed at the low-resolution level to save both computational and memory costs. To make full use of the 4D structure information of LF data in both the spatial and angular domains, we propose to use 4D convolution to characterize the relationships among pixels. Moreover, as an approximation of 4D convolution, we also propose spatial-angular separable (SAS) convolutions for more computation- and memory-efficient extraction of joint spatial-angular features. Extensive experimental results on 57 test LF images with various challenging natural scenes show significant advantages of the proposed models over state-of-the-art methods: an average PSNR gain of more than 3.0 dB and better visual quality are achieved, and our methods better preserve the LF structure of the super-resolved LF images, which is highly desirable for subsequent applications. In addition, the SAS convolution-based model achieves a three-fold speed-up with only a negligible decrease in reconstruction quality compared with the 4D convolution-based one. The source code of our method is available online.
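The key idea of the SAS convolution described in the abstract is to factor an expensive 4D convolution over the light field into a spatial 2D convolution (applied per sub-aperture image) followed by an angular 2D convolution (applied per spatial location). The following is a minimal PyTorch-style sketch of that idea, not the authors' released code; the tensor layout (batch, channels, U, V, X, Y), the class name SASConv, and the kernel sizes are illustrative assumptions.

```python
# Minimal sketch of a spatial-angular separable (SAS) convolution block.
# Assumed light-field layout: (batch, channels, U, V, X, Y),
# with (U, V) angular coordinates and (X, Y) spatial coordinates.
import torch
import torch.nn as nn


class SASConv(nn.Module):
    """Approximates a full 4D convolution with a spatial 2D convolution
    followed by an angular 2D convolution, which is far cheaper in both
    computation and memory."""

    def __init__(self, in_ch, out_ch, spatial_k=3, angular_k=3):
        super().__init__()
        self.spatial = nn.Conv2d(in_ch, out_ch, spatial_k, padding=spatial_k // 2)
        self.angular = nn.Conv2d(out_ch, out_ch, angular_k, padding=angular_k // 2)

    def forward(self, lf):
        b, c, u, v, x, y = lf.shape
        # Spatial convolution: fold the angular dimensions into the batch,
        # so each sub-aperture image is filtered independently.
        s = lf.permute(0, 2, 3, 1, 4, 5).reshape(b * u * v, c, x, y)
        s = self.spatial(s)
        c2 = s.shape[1]
        s = s.reshape(b, u, v, c2, x, y)
        # Angular convolution: fold the spatial dimensions into the batch,
        # so each macro-pixel (angular patch) is filtered independently.
        a = s.permute(0, 4, 5, 3, 1, 2).reshape(b * x * y, c2, u, v)
        a = self.angular(a)
        a = a.reshape(b, x, y, c2, u, v)
        # Restore the (batch, channels, U, V, X, Y) layout.
        return a.permute(0, 3, 4, 5, 1, 2)


if __name__ == "__main__":
    lf = torch.randn(1, 1, 7, 7, 32, 32)   # 7x7 angular views, 32x32 patches
    out = SASConv(1, 16)(lf)
    print(out.shape)                        # torch.Size([1, 16, 7, 7, 32, 32])
```

Relative to a dense 4D kernel of size k^4, this factorization only applies two k^2 kernels per output feature, which is the source of the speed-up reported in the abstract.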
