Article

Weighted statistical binary patterns for facial feature representation

Journal

APPLIED INTELLIGENCE
Volume 52, Issue 2, Pages 1893-1912

Publisher

SPRINGER
DOI: 10.1007/s10489-021-02477-1

Keywords

Local binary patterns; Completed LBP; Statistical moments; Facial feature representation

Funding

  1. Institute of Information & Communications Technology Planning & Evaluation (IITP) - Korea government (MSIT) [2019-000231]
  2. Basic Science Research Program through the National Research Foundation of Korea (NRF) - Ministry of Education [2020R1A6A1A03038540]


Weighted Statistical Binary Pattern, a novel framework for efficient and robust facial feature representation, is proposed; it uses a new variance moment that captures distinctive facial features and a variance-based weighting of the sign and magnitude components. A comprehensive evaluation on six public face datasets shows that the framework outperforms state-of-the-art methods in accuracy.
We present a novel framework for efficient and robust facial feature representation based on Local Binary Patterns (LBP), called Weighted Statistical Binary Pattern, in which the descriptors exploit straight-line topology along different directions. The input image is first decomposed into mean and variance moments. A new variance moment, which contains distinctive facial features, is obtained by taking the k-th root of the variance. Sign and Magnitude components are then constructed along four different directions from the mean moment, and a weighting scheme derived from the new variance moment is applied to each component. Finally, the weighted histograms of the Sign and Magnitude components are concatenated to build a novel histogram of Completed LBP along different directions. A comprehensive evaluation on six public face datasets shows that the proposed framework outperforms state-of-the-art methods, achieving accuracies of 98.51% on ORL, 98.72% on YALE, 98.83% on Caltech, 99.52% on AR, 94.78% on FERET, and 99.07% on KDEF. The influence of color spaces and the behavior on degraded images are also analyzed with our descriptors. These results, together with the theoretical underpinning, confirm that the descriptors are robust to noise, illumination variation, diverse facial expressions, and head poses.
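
To make the pipeline described in the abstract concrete, the following is a minimal NumPy sketch of the main steps: local mean and variance moments, a k-th-root variance weight, sign and magnitude codes along four straight-line directions, and concatenated weighted histograms. The window size, the value of k, the neighbor offsets along each line, and the magnitude threshold are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch of the Weighted Statistical Binary Pattern idea (assumptions noted below).
import numpy as np

def local_moments(img, win=3):
    """Per-pixel mean and variance over a win x win neighborhood (edge-padded)."""
    pad = win // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    stack = np.stack([padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                      for dy in range(win) for dx in range(win)])
    return stack.mean(axis=0), stack.var(axis=0)

def wsbp_histogram(img, k=2, win=3):
    """Concatenate variance-weighted histograms of sign/magnitude codes
    along four straight-line directions (0, 45, 90, 135 degrees)."""
    mean, var = local_moments(img, win)
    weight = np.power(var, 1.0 / k)                 # k-th root of the variance moment
    directions = [(0, 1), (-1, 1), (-1, 0), (-1, -1)]  # assumed direction set
    offsets = (-2, -1, 1, 2)                        # assumed neighbors along each line
    feats = []
    for dy, dx in directions:
        # Differences between the mean moment and its shifted copies along the line
        # (np.roll wraps at the borders; a simplification for this sketch).
        diffs = [np.roll(mean, (t * dy, t * dx), axis=(0, 1)) - mean for t in offsets]
        mag_thresh = np.mean([np.abs(d) for d in diffs])
        sign_code = np.zeros_like(mean)
        mag_code = np.zeros_like(mean)
        for bit, d in enumerate(diffs):
            sign_code += (d >= 0) * (2 ** bit)                  # sign component bit
            mag_code += (np.abs(d) >= mag_thresh) * (2 ** bit)  # magnitude component bit
        for code in (sign_code, mag_code):
            # Weighted histogram: each pixel votes with its variance-based weight.
            hist, _ = np.histogram(code, bins=16, range=(0, 16), weights=weight)
            feats.append(hist / (hist.sum() + 1e-12))
    return np.concatenate(feats)  # 4 directions x 2 components x 16 bins = 128-D

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    face = rng.integers(0, 256, size=(64, 64))
    print(wsbp_histogram(face).shape)
```

In a classification setting, such a descriptor would typically be computed per image (or per image block) and compared with a histogram distance or fed to a standard classifier; the block partitioning and classifier used in the paper are not reproduced here.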

