Ear recognition from one sample per person

PLoS One. 2015 May 29;10(5):e0129505. doi: 10.1371/journal.pone.0129505. eCollection 2015.

Abstract

Biometrics has the advantages of efficiency and convenience in identity authentication. As one of the most promising biometric-based methods, ear recognition has received broad attention and research. Previous studies have achieved remarkable performance with multiple samples per person (MSPP) in the gallery. However, most conventional methods are insufficient when only one sample per person (OSPP) is available in the gallery. To solve the OSPP problem by maximizing the use of a single sample, this paper proposes a hybrid multi-keypoint descriptor sparse representation-based classification (MKD-SRC) ear recognition approach based on 2D and 3D information. Because most 3D sensors capture 3D data alongside the corresponding 2D data, it is sensible to use both types of information. First, the ear region is extracted from the profile. Second, keypoints are detected and described for both the 2D texture image and the 3D range image. Then, the hybrid MKD-SRC algorithm is used to complete the recognition with only OSPP in the gallery. Experimental results on a benchmark dataset demonstrate the feasibility and effectiveness of the proposed method in resolving the OSPP problem. A rank-one recognition rate of 96.4% is achieved for a gallery of 415 subjects, and the computation time compares favorably with that of conventional methods.
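The core idea of MKD-SRC, as summarized above, is that each gallery subject contributes many keypoint descriptors (rather than one holistic feature vector) to a shared dictionary, so even a single gallery image yields enough atoms for sparse coding; each probe keypoint is then sparsely coded over that dictionary, and per-subject reconstruction residuals are accumulated to decide the identity. The following is a minimal, illustrative sketch of that scheme, not the authors' implementation: the greedy orthogonal matching pursuit solver, the function names, and the simple residual-summing decision rule are all assumptions for demonstration.

```python
import numpy as np

def omp(D, y, k=3):
    """Greedy orthogonal matching pursuit: sparse-code y over the columns of D.
    A stand-in for the l1 solver typically used in SRC (an assumption here)."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x = np.zeros(D.shape[1])
        x[support] = coef
        residual = y - D @ x
    return x

def mkd_src_classify(gallery, probe, k=3):
    """gallery: dict subject_id -> (d, n_i) matrix of keypoint descriptors
    from that subject's single gallery image (OSPP setting);
    probe: (d, m) matrix of probe keypoint descriptors.
    Returns the subject id with the smallest accumulated residual."""
    subjects = sorted(gallery)
    D = np.hstack([gallery[s] for s in subjects])
    D = D / np.linalg.norm(D, axis=0, keepdims=True)  # unit-norm atoms
    # Map each dictionary column back to its owning subject.
    owner = np.concatenate([[s] * gallery[s].shape[1] for s in subjects])
    residuals = {s: 0.0 for s in subjects}
    for i in range(probe.shape[1]):
        y = probe[:, i] / np.linalg.norm(probe[:, i])
        x = omp(D, y, k)
        # Class-wise residual: keep only the coefficients of one subject.
        for s in subjects:
            xs = np.where(owner == s, x, 0.0)
            residuals[s] += np.linalg.norm(y - D @ xs)
    return min(residuals, key=residuals.get)
```

In a real system the descriptor matrices would come from keypoints detected on both the 2D texture image and the 3D range image of the segmented ear; here they are just arrays, and the hybrid 2D/3D fusion step is omitted for brevity.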

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms*
  • Biometric Identification / methods*
  • Ear / anatomy & histology*
  • Female
  • Humans
  • Image Processing, Computer-Assisted / methods*
  • Male
  • Predictive Value of Tests

Grants and funding

This article is supported by the National Natural Science Foundation of China (Grant Nos. 61170116, 61472031, 61375010 and 61300075). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.