Journal
APPLIED INTELLIGENCE
Volume 52, Issue 2, Pages 1893-1912
Publisher
SPRINGER
DOI: 10.1007/s10489-021-02477-1
Keywords
Local binary patterns; Completed LBP; Statistical moments; Facial feature representation
Funding
- Institute of Information & Communications Technology Planning & Evaluation (IITP) - Korea government (MSIT) [2019-000231]
- Basic Science Research Program through the National Research Foundation of Korea (NRF) - Ministry of Education [2020R1A6A1A03038540]
A novel framework for efficient and robust facial feature representation, the Weighted Statistical Binary Pattern, is proposed; it utilizes a new variance moment containing distinctive facial features and a weighting approach for constructing the sign and magnitude components. A comprehensive evaluation on six public face datasets shows that the framework outperforms state-of-the-art methods in accuracy.
We present a novel framework for efficient and robust facial feature representation based on the Local Binary Pattern (LBP), called the Weighted Statistical Binary Pattern, wherein the descriptors follow a straight-line topology along different directions. The input image is first decomposed into mean and variance moments. A new variance moment, which contains distinctive facial features, is prepared by extracting its k-th root. Sign and Magnitude components are then constructed along four different directions from the mean moment, and a weighting scheme based on the new variance moment is applied to each component. Finally, the weighted histograms of the Sign and Magnitude components are concatenated to build a novel histogram of Completed LBP along different directions. A comprehensive evaluation on six public face datasets shows that the framework outperforms state-of-the-art methods, achieving accuracies of 98.51% on ORL, 98.72% on YALE, 98.83% on Caltech, 99.52% on AR, 94.78% on FERET, and 99.07% on KDEF. The influence of color spaces and the effect of degraded images are also analyzed with our descriptors. These results, together with the theoretical underpinning, confirm that our descriptors are robust against noise, illumination variation, diverse facial expressions, and head poses.
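The pipeline described above can be illustrated with a minimal Python/NumPy sketch. Note the simplifications: the function name `wsbp_histogram`, the 3x3 moment window, the standard 8-neighbour LBP sampling (instead of the paper's straight-line topology along four directions), and the mean-magnitude threshold are all our assumptions for illustration, not the authors' exact scheme. The sketch computes local mean and variance moments, takes the k-th root of the variance as a per-pixel weight, builds Sign and Magnitude LBP codes on the mean moment, and concatenates the two weighted histograms:

```python
import numpy as np

def wsbp_histogram(img, k=2, bins=256):
    """Illustrative weighted Sign/Magnitude LBP descriptor (simplified)."""
    img = img.astype(np.float64)
    H, W = img.shape
    # Local mean and variance moments over a 3x3 window
    pad = np.pad(img, 1, mode='edge')
    neigh = np.stack([pad[dy:dy + H, dx:dx + W]
                      for dy in range(3) for dx in range(3)], axis=0)
    mean = neigh.mean(axis=0)
    var = neigh.var(axis=0)
    # k-th root of the variance moment serves as the per-pixel weight
    weight = var ** (1.0 / k)
    # Signed differences to the 8 neighbours of the mean moment
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    padm = np.pad(mean, 1, mode='edge')
    diff = np.stack([padm[1 + dy:1 + dy + H, 1 + dx:1 + dx + W] - mean
                     for dy, dx in offsets], axis=0)
    # Sign component: thresholded signs; Magnitude component:
    # absolute differences thresholded at their global mean
    sign_code = sum((diff[i] >= 0).astype(np.int64) << i for i in range(8))
    mag_thresh = np.abs(diff).mean()
    mag_code = sum((np.abs(diff[i]) >= mag_thresh).astype(np.int64) << i
                   for i in range(8))
    # Variance-weighted histograms of both components, concatenated
    h_sign = np.bincount(sign_code.ravel(), weights=weight.ravel(),
                         minlength=bins)[:bins]
    h_mag = np.bincount(mag_code.ravel(), weights=weight.ravel(),
                        minlength=bins)[:bins]
    return np.concatenate([h_sign, h_mag])
```

Because flat regions have near-zero variance, their codes contribute almost nothing to the histogram, which is one plausible reading of why variance weighting emphasizes distinctive facial structure.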