Article

View position prior-supervised light field angular super-resolution network with asymmetric feature extraction and spatial-angular interaction

Journal

NEUROCOMPUTING
Volume 518, Issue -, Pages 206-218

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2022.10.043

Keywords

Light field; Angular super-resolution; Asymmetric feature extraction; Spatial-angular interaction; View position prior


This paper proposes a view position prior-supervised light field angular super-resolution network to improve the trade-off between angular and spatial resolution in light field imaging. An asymmetric feature extraction block and a spatial-angular interaction module are introduced to enhance feature extraction and establish view correlations. Experimental results demonstrate the superiority of the proposed method on various datasets.
Light field imaging can record the intensity and direction information of light rays in space, which has attracted extensive attention. However, the trade-off between angular and spatial resolution is unavoidable due to sensor limitations in commercial light field cameras. To mitigate this problem, this paper proposes a view position prior-supervised light field angular super-resolution network with asymmetric feature extraction and spatial-angular interaction. First, there is a severe information asymmetry between the spatial and angular dimensions in light fields with sparse views. The asymmetric feature extraction block is proposed to extract spatial and angular features with different receptive fields in an asymmetric manner. As a result, more light field intrinsic features are extracted, which improves the utilization rate of the limited light field information. Second, existing methods usually ignore the correlations among the newly synthesized views. The spatial-angular interaction module is proposed to collect local and global information, build relations between any two points in the feature space, and reconstruct light field consistencies. Thus, complete view correlations can be established. Last but not least, we investigate the impact of the given views on each viewpoint and propose a loss function based on the view position prior, which reduces the quality difference among the synthesized sub-aperture images and further improves the network performance. Comprehensive experiments demonstrate that our method performs best on all datasets, and the depth estimation results offer a further perspective on the superiority of the proposed method. (c) 2022 Elsevier B.V. All rights reserved.
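The view-position-prior idea described above (weighting each synthesized view by how far it lies from the given input views, so that harder-to-synthesize views receive more supervision) can be sketched as follows. This is an illustrative assumption, not the authors' implementation: the function names, the 7x7 angular resolution, the four-corner input layout, and the distance-based weighting scheme are all hypothetical.

```python
import numpy as np

# Hypothetical sketch of a view-position-prior weighted loss.
# Assumption: views far from the given (corner) input views are harder
# to synthesize, so their reconstruction error is weighted more heavily
# to even out quality across the synthesized sub-aperture image grid.

def position_prior_weights(ang_res=7, given=((0, 0), (0, 6), (6, 0), (6, 6))):
    """Weight each view in a (ang_res x ang_res) grid by its distance
    to the nearest given view, normalized to mean 1."""
    coords = np.stack(
        np.meshgrid(np.arange(ang_res), np.arange(ang_res), indexing="ij"),
        axis=-1,
    ).astype(float)                                     # (U, V, 2)
    given = np.asarray(given, dtype=float)              # (G, 2)
    # Distance from every view position to every given view -> (U, V, G)
    d = np.linalg.norm(coords[..., None, :] - given, axis=-1)
    w = 1.0 + d.min(axis=-1)                            # nearest given view
    return w / w.mean()                                 # normalize weights

def weighted_l1_loss(pred, target, weights):
    """pred/target: (U, V, H, W) sub-aperture images; weights: (U, V)."""
    per_view = np.abs(pred - target).mean(axis=(2, 3))  # per-view L1 error
    return float((weights * per_view).mean())
```

In this sketch the four corner views (the given views) receive the smallest weight and the central view the largest, reflecting the intuition that views farthest from any input view are the hardest to reconstruct.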
