Journal
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
Volume 41, Issue 2, Pages 297-310
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2018.2794979
Keywords
Computational photography; light field imaging; depth estimation; 3D reconstruction; aberration correction
Funding
- Technology Innovation Program - Ministry of Trade, Industry & Energy (MOTIE, Korea) [2017-10069072]
- Institute for Information & communications Technology Promotion - Korea government (MSIT) [2017-0-01780]
- Technology Innovation Program - Ministry of Trade, Industry & Energy (MI, Korea) [10048320]
- National Research Foundation of Korea (NRF) - Ministry of Education [NRF-2015034617]
One of the core applications of light field imaging is depth estimation. To acquire a depth map, existing approaches apply a single photo-consistency measure to the entire light field. However, this is not an optimal choice because of the non-uniform light field degradations produced by limitations in the hardware design. In this paper, we introduce a pipeline that automatically determines the best configuration of the photo-consistency measure, which leads to the most reliable depth label from the light field. We analyzed the practical factors causing degradation in lenslet light field cameras and designed a learning-based framework that retrieves the best cost measure and the optimal depth label. To enhance the reliability of our method, we augmented an existing light field benchmark to simulate realistic source-dependent noise, aberrations, and vignetting artifacts. The augmented dataset was used for training and validation of the proposed approach. Our method was competitive with several state-of-the-art methods on both the benchmark and real-world light field datasets.
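The abstract's notion of selecting a depth label that minimizes a photo-consistency cost across sub-aperture views can be illustrated with a minimal sketch. This is not the paper's method; it is a generic sum-of-absolute-differences cost with integer-pixel warping, and the function names (`photo_consistency_cost`, `best_depth_label`) are hypothetical.

```python
import numpy as np

def photo_consistency_cost(center, views, offsets, disparity):
    """Mean SAD cost: warp each sub-aperture view toward the center view
    by disparity * angular offset and compare (a generic sketch, not the
    paper's learned measure)."""
    cost = 0.0
    for view, (du, dv) in zip(views, offsets):
        # Integer-pixel circular shift for simplicity; real pipelines
        # use sub-pixel interpolation and handle border pixels.
        shifted = np.roll(view,
                          (int(round(disparity * dv)), int(round(disparity * du))),
                          axis=(0, 1))
        cost += np.abs(center - shifted).mean()
    return cost / len(views)

def best_depth_label(center, views, offsets, labels):
    """Pick the candidate disparity label with minimum photo-consistency cost."""
    costs = [photo_consistency_cost(center, views, offsets, d) for d in labels]
    return labels[int(np.argmin(costs))]
```

A single fixed cost like this is exactly what the paper argues is suboptimal under non-uniform degradations; their pipeline instead learns which cost configuration to trust per region.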