Article

Depth Estimation with Occlusion Modeling Using Light-Field Cameras

Journal

IEEE Transactions on Pattern Analysis and Machine Intelligence

Publisher

IEEE Computer Society
DOI: 10.1109/TPAMI.2016.2515615

Keywords

Light-fields; 3D reconstruction; occlusion detection

Funding

  1. ONR [N00014-15-1-2013]
  2. Berkeley Fellowship
  3. Intel
  4. Draper
  5. Nokia
  6. Sony

Abstract

Light-field cameras have become widely available in both consumer and industrial applications. However, most existing depth estimation approaches do not model occlusions explicitly, and therefore fail to capture sharp object boundaries. A common assumption is that, for a Lambertian scene, a pixel exhibits photo-consistency: all viewpoints converge to a single point when the light field is refocused to that pixel's depth. In the presence of occlusions this assumption breaks down, making most current approaches unreliable precisely where accurate depth information matters most, at depth discontinuities. In this paper, an occlusion-aware depth estimation algorithm is developed; the method also identifies occlusion edges, which may be useful in other applications. Although photo-consistency is not preserved for pixels at occlusions, it still holds in approximately half the viewpoints. Moreover, the line separating the two view regions (occluded object versus occluder) has the same orientation as the occlusion edge in the spatial domain. By ensuring photo-consistency in only the occluded view region, depth estimation can be improved. Occlusion predictions can also be computed and used for regularization. Experimental results show that the proposed method outperforms current state-of-the-art light-field depth estimation algorithms, especially near occlusion boundaries.
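The view-splitting idea described in the abstract can be illustrated with a short sketch. The code below is a minimal, hypothetical illustration rather than the authors' published implementation: it assumes an angular patch of viewpoint samples gathered after refocusing the light field to a candidate depth, and a known spatial occlusion-edge orientation at the current pixel. It splits the viewpoints along a line with that orientation and scores the candidate depth by the photo-consistency (intensity variance) of the more consistent half. All names (`angular_patch`, `edge_theta`, `occlusion_aware_cost`) are illustrative.

```python
# Hedged sketch (not the authors' code): occlusion-aware photo-consistency.
# angular_patch: (V, V) array of intensities from the V x V viewpoints after
#                refocusing to a candidate depth.
# edge_theta:    orientation (radians) of the spatial occlusion edge.
import numpy as np

def occlusion_aware_cost(angular_patch: np.ndarray, edge_theta: float) -> float:
    """Photo-consistency cost using only ~half the viewpoints.

    The angular patch is split by a line through its center with the same
    orientation as the spatial occlusion edge. At the correct depth, one of
    the two halves should remain photo-consistent even near an occlusion, so
    the candidate depth is scored by the lower of the two half-patch variances.
    """
    v = angular_patch.shape[0]
    c = (v - 1) / 2.0
    ys, xs = np.mgrid[0:v, 0:v]
    # Signed distance of each viewpoint from the dividing line.
    side = (xs - c) * np.sin(edge_theta) - (ys - c) * np.cos(edge_theta)
    half_a = angular_patch[side >= 0]
    half_b = angular_patch[side < 0]
    # Lower variance = better photo-consistency in that view region.
    return min(half_a.var(), half_b.var())

# Usage: evaluate the cost over candidate depths and keep the minimizer.
rng = np.random.default_rng(0)
patch = rng.normal(0.5, 0.01, size=(7, 7))  # synthetic, nearly photo-consistent patch
print(occlusion_aware_cost(patch, edge_theta=np.pi / 4))
```

In a full pipeline this cost would be evaluated per pixel across a sweep of candidate depths, with the per-pixel occlusion predictions then feeding a regularization step, as the abstract describes.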

Authors

Ting-Chun Wang, Alexei A. Efros, Ravi Ramamoorthi
