Article

Occlusion-aware light field depth estimation with view attention

Journal

OPTICS AND LASERS IN ENGINEERING
Volume 160

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.optlaseng.2022.107299

Keywords

Light field; Depth estimation; Convolutional neural network; Multi-view stereo matching; Occlusion model

Abstract

A two-stage attention-based occlusion-aware light field depth estimation network is proposed in this study, which achieves accurate depth estimation in occluded regions and ranks first on the 4D light field benchmark.

Depth estimation for light field images is crucial in light field applications such as image-based rendering and refocusing. Previous learning-based methods combining neural networks with cost volumes can achieve accurate depth estimation but fail in regions with occlusion. In this paper, a two-stage attention-based occlusion-aware light field depth estimation network is proposed. In the initial depth estimation stage, the sub-aperture images are divided into four groups according to view direction, and four initial cost volumes are constructed from the feature maps of each group to aggregate initial depth maps. In the refined depth estimation stage, the four aggregated volumes from the initial stage are fused into one using view attention, where features of views with less occlusion are weighted more heavily to provide more effective information. Experimental results demonstrate that the proposed method accomplishes robust and accurate depth estimation in the presence of occlusion, and it ranks first on the 4D light field benchmark in terms of most accuracy metrics.
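The view-attention fusion described in the abstract can be illustrated with a short sketch. The snippet below is not the authors' implementation; it is a minimal PyTorch illustration, assuming that each of the four directional view groups yields a cost volume of shape (B, C, D, H, W) and that a small attention head predicts one softmax-normalized weight per group before the volumes are summed. The module and variable names (ViewAttentionFusion, volumes) and the layer sizes are illustrative, not taken from the paper.

```python
# Minimal sketch (not the authors' code) of view-attention fusion:
# four directional cost volumes are weighted and summed, with the intent
# that view groups suffering less occlusion receive larger weights.
# Shapes, layer sizes, and the attention head design are assumptions.
import torch
import torch.nn as nn

class ViewAttentionFusion(nn.Module):
    def __init__(self, channels: int, num_groups: int = 4):
        super().__init__()
        self.num_groups = num_groups
        # Predict one scalar weight per directional group from the
        # concatenated cost volumes (global pooling keeps the sketch small).
        self.attn = nn.Sequential(
            nn.Conv3d(num_groups * channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, num_groups, kernel_size=1),
        )

    def forward(self, volumes):
        # volumes: list of num_groups cost volumes, each (B, C, D, H, W),
        # one per view-direction group (e.g. horizontal, vertical, diagonals).
        stacked = torch.cat(volumes, dim=1)                 # (B, 4C, D, H, W)
        weights = torch.softmax(self.attn(stacked), dim=1)  # (B, 4, 1, 1, 1)
        fused = sum(weights[:, g:g + 1] * volumes[g]
                    for g in range(self.num_groups))
        return fused                                        # (B, C, D, H, W)

if __name__ == "__main__":
    b, c, d, h, w = 2, 8, 9, 32, 32   # toy sizes: 9 disparity hypotheses
    vols = [torch.randn(b, c, d, h, w) for _ in range(4)]
    fused = ViewAttentionFusion(channels=c)(vols)
    print(fused.shape)                # torch.Size([2, 8, 9, 32, 32])
```

In the paper's pipeline, a fused volume of this kind would then be aggregated into the refined depth map; the softmax weighting is one simple way to realize the idea that less-occluded view groups should dominate the fused cost.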
