Article

Local feature fusion and SRC-based decision fusion for ear recognition

Journal

MULTIMEDIA SYSTEMS
Volume 28, Issue 3, Pages 1117-1134

Publisher

SPRINGER
DOI: 10.1007/s00530-022-00906-w

Keywords

AGCWD-based preprocessing; Feature extraction; Feature selection; FDDL-based SRC

Funding

  1. National Natural Science Foundation of China [61201421]
  2. National Cryosphere Desert Data Center [E01Z7902]
  3. Chinese Academy of Sciences [Y9298302]

Abstract

In this paper, we propose a fusion-based method for human ear recognition consisting of preprocessing, feature extraction, and classification with decision-level fusion. Experimental results demonstrate the superiority of our algorithm in terms of accuracy on six commonly used datasets.
As an emerging biometric technology, human ear recognition has important applications in crime tracking, forensic identification and other fields. In this paper, we propose an effective fusion-based human ear recognition method and describe the algorithm in three parts: preprocessing, feature extraction, and classification with decision-level fusion. First, we employ an adaptive gamma correction with weighting distribution (AGCWD)-based image enhancement method for preprocessing. Features are then extracted by fusing dense scale-invariant feature transform (DSIFT), local binary patterns (LBP) and histogram of oriented gradients (HOG) descriptors, after which we apply two sparse representation-based feature selection methods, robust sparse linear discriminant analysis (RSLDA) and inter-class sparsity-based discriminative least square regression (ICS-DLSR), to reduce dimensionality and improve computational speed. Finally, the two selected feature sets are classified separately with a Fisher discrimination dictionary learning (FDDL)-based sparse representation classification (SRC) scheme, and the two sets of classification results are fused at the decision level to obtain the final decision. Our algorithm is tested on six commonly used datasets (USTB1, USTB2, USTB3, IITD1, AMI and AWE), achieving accuracies of 99.44%, 97.08%, 100%, 100%, 98.14% and 82.90%, respectively. The experiments demonstrate the superiority of our algorithm over competing methods.
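
The two sketches below are illustrative only and are not the authors' code. The first is a minimal NumPy rendering of the AGCWD enhancement used in preprocessing, assuming the standard adaptive gamma correction with weighting distribution formulation (weighted histogram, weighted CDF, per-intensity gamma of 1 - CDF); the weighting exponent alpha and the 8-bit grayscale input are illustrative choices.

import numpy as np

def agcwd(gray, alpha=0.5):
    """Adaptive gamma correction with weighting distribution (AGCWD) sketch.

    gray: 8-bit single-channel image; alpha: weighting exponent (illustrative).
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    pdf = hist / hist.sum()
    # Weight the pdf to damp dominant bins, then build its normalized cdf.
    pdf_w = pdf.max() * ((pdf - pdf.min()) / (pdf.max() - pdf.min())) ** alpha
    cdf_w = np.cumsum(pdf_w) / pdf_w.sum()
    # Per-intensity transform: 255 * (l / 255) ** (1 - cdf_w(l)).
    levels = np.arange(256) / 255.0
    table = np.round(255.0 * levels ** (1.0 - cdf_w)).astype(np.uint8)
    return table[gray]

The second sketch outlines the feature-fusion and SRC-based decision-fusion stages under simplifying assumptions: PCA stands in for the RSLDA and ICS-DLSR selection steps, the dictionary is simply the matrix of training features rather than one learned with FDDL, and orthogonal matching pursuit provides the sparse codes. All function names, projection dimensions and sparsity levels here are hypothetical.

import cv2
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.linear_model import OrthogonalMatchingPursuit

def extract_fused_features(gray):
    """Concatenate pooled DSIFT, an LBP histogram and a HOG descriptor."""
    # Dense SIFT: standard SIFT descriptors computed on a regular keypoint grid.
    sift = cv2.SIFT_create()
    step = 8
    grid = [cv2.KeyPoint(float(x), float(y), float(step))
            for y in range(step, gray.shape[0] - step, step)
            for x in range(step, gray.shape[1] - step, step)]
    _, dsift = sift.compute(gray, grid)
    dsift = dsift.mean(axis=0)                       # average-pool the grid descriptors
    # Uniform LBP histogram (10 bins for P=8 uniform patterns).
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # HOG descriptor of the whole image.
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return np.concatenate([dsift, lbp_hist, hog_vec])

def src_residuals(D, labels, y, n_nonzero=30):
    """Per-class reconstruction residuals of y over dictionary D (columns = atoms)."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,  # must not exceed atom count
                                    fit_intercept=False)
    code = omp.fit(D, y).coef_
    return {c: np.linalg.norm(y - D[:, labels == c] @ code[labels == c])
            for c in np.unique(labels)}

def classify_with_fusion(train_X, train_y, test_x):
    """Two projected branches classified by SRC, fused at the decision level."""
    # Illustrative stand-ins for the two feature-selection branches.
    branches = [PCA(n_components=40).fit(train_X), PCA(n_components=60).fit(train_X)]
    fused = None
    for proj in branches:
        D = proj.transform(train_X).T                # columns are projected training samples
        D = D / np.linalg.norm(D, axis=0)
        y = proj.transform(test_x[None, :]).ravel()
        res = src_residuals(D, np.asarray(train_y), y)
        # Sum-rule fusion of the per-class residuals across the two branches.
        fused = res if fused is None else {c: fused[c] + res[c] for c in res}
    return min(fused, key=fused.get)                 # class with the smallest fused residual

Sum-rule fusion of per-class residuals is one common way to combine two SRC branches at the decision level; the paper's exact fusion rule, dictionary learning and selection operators may differ.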
