4.8 Article

From few to many: Illumination cone models for face recognition under variable lighting and pose

Publisher

IEEE Computer Society
DOI: 10.1109/34.927464

Keywords

face recognition; image-based rendering; appearance-based vision; face modeling; illumination and pose modeling; lighting; illumination cones; generative models


We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render, or synthesize, images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone (based on Euclidean distance within the image space). We test our face recognition method on 4,050 images from the Yale Face Database B; these images contain 405 viewing conditions (9 poses × 45 illumination conditions) for 10 individuals. The method performs almost without error, except on the most extreme lighting directions, and significantly outperforms popular recognition methods that do not use a generative model.
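The recognition step described in the abstract (assign a test image to the identity of the closest subspace-approximated illumination cone, measured by Euclidean distance in image space) amounts to nearest-subspace classification. The following is a minimal sketch of that step only, under stated assumptions: each (identity, pose) cone is represented by an orthonormal basis, and the names (`recognize`, `subspaces`) and the random stand-in data are illustrative, not the authors' code or the Yale Face Database B pipeline.

    # Nearest-subspace classification sketch (assumed setup, not the authors' implementation).
    import numpy as np

    def subspace_distance(x, B):
        """Euclidean distance from image vector x to the span of the columns of B
        (assumed orthonormal): ||x - B B^T x||."""
        return np.linalg.norm(x - B @ (B.T @ x))

    def recognize(test_image, subspaces):
        """subspaces: dict mapping identity -> list of orthonormal bases, one per
        sampled pose. Returns the identity whose closest subspace, over all poses,
        is nearest to the test image in image space."""
        x = test_image.ravel().astype(float)
        best_id, best_dist = None, np.inf
        for identity, bases in subspaces.items():
            d = min(subspace_distance(x, B) for B in bases)
            if d < best_dist:
                best_id, best_dist = identity, d
        return best_id

    # Toy usage: random orthonormal bases stand in for the subspaces that would be
    # estimated from the generative (shape and albedo) model.
    rng = np.random.default_rng(0)
    d, k, n_poses = 32 * 32, 11, 9          # image dimension, subspace dimension, sampled poses
    subspaces = {
        name: [np.linalg.qr(rng.standard_normal((d, k)))[0] for _ in range(n_poses)]
        for name in ("person_01", "person_02")
    }
    print(recognize(rng.standard_normal((32, 32)), subspaces))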

Authors

Athinodoros S. Georghiades, Peter N. Belhumeur, and David J. Kriegman

