Journal
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
Volume 42, Issue 10, Pages 2720-2734
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/TPAMI.2019.2955459
Keywords
Lighting; Shape; Geometry; Sensors; Image color analysis; Cameras; Color; Depth enhancement; intrinsic decomposition; shape from shading
Funding
- USDA grant [2018-67021-27416]
- US NSF [IIP-1543172]
- Chinese National Key R&D Project [2017YFB1002803]
- NSFC [61972321]
- Innovation Chain of Shaanxi Province Industrial Area [2017 ZDXM-GY-094]
- NSERC Discovery Grant [RGPIN-2019-04575]
- University of Alberta-Huawei Joint Innovation collaboration grant [201902]
This article presents a novel approach to depth map enhancement from an RGB-D video sequence. The basic idea is to exploit the photometric information in the color sequence to resolve the inherent ambiguity of the shape-from-shading problem. Instead of making assumptions about surface albedo or requiring controlled object motion and lighting, we use the lighting variations introduced by casual object movement, effectively computing photometric stereo for a moving object under natural illumination. One of the key technical challenges is to establish correspondences over the entire image set. We therefore develop a lighting-insensitive robust pixel matching technique that outperforms optical flow methods in the presence of lighting variations. An adaptive reference frame selection procedure is introduced to improve robustness to imperfectly Lambertian reflections. In addition, we present an expectation-maximization framework that recovers surface normals and albedo simultaneously, without any regularization term. We validate our method on both synthetic and real datasets, showing superior performance in both surface detail recovery and intrinsic decomposition.
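For context on the photometric stereo formulation the abstract builds on, the following is a minimal sketch of the classical Lambertian baseline: given a pixel's intensities under several known directional lights, the product of albedo and surface normal is recovered by linear least squares. This is the textbook setup only, not the paper's method, which handles natural (uncontrolled) illumination and jointly estimates normals and albedo via expectation-maximization; the function name and light directions below are illustrative assumptions.

```python
import numpy as np

def lambertian_photometric_stereo(L, I):
    """Classical Lambertian photometric stereo for a single pixel.

    L: (m, 3) array of unit lighting directions.
    I: (m,) observed intensities, assumed I_k = albedo * max(n . l_k, 0)
       with all lights illuminating the surface (n . l_k > 0).
    Returns (albedo, unit surface normal).
    """
    # Solve L @ g ~= I in the least-squares sense, where g = albedo * n.
    g, *_ = np.linalg.lstsq(L, I, rcond=None)
    albedo = np.linalg.norm(g)
    normal = g / albedo
    return albedo, normal

# Usage: synthesize intensities from a known normal/albedo and recover them.
true_n = np.array([0.0, 0.0, 1.0])
true_albedo = 0.8
# Four hypothetical light directions, all with positive z so n . l > 0.
L = np.array([[0.3, 0.2, 0.9],
              [-0.4, 0.1, 0.9],
              [0.1, -0.5, 0.85],
              [0.2, 0.4, 0.88]])
L /= np.linalg.norm(L, axis=1, keepdims=True)
I = true_albedo * np.clip(L @ true_n, 0.0, None)
albedo, n = lambertian_photometric_stereo(L, I)
```

With noise-free synthetic data the least-squares solution recovers the true albedo and normal; the ambiguity the paper addresses arises when lighting is neither known nor controlled.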