Sign language recognition by means of common spatial patterns: An analysis

PLoS One. 2022 Oct 31;17(10):e0276941. doi: 10.1371/journal.pone.0276941. eCollection 2022.

Abstract

Currently there are around 466 million deaf and hard of hearing people, and this number is expected to grow in the coming years. Despite the efforts that have been made, a communication barrier remains between deaf and hard of hearing signers and non-signers in environments without an interpreter. Different approaches have been developed in recent years to address this issue. In this work, we present an Argentinian Sign Language (LSA) recognition system which uses hand landmarks extracted from videos of the LSA64 dataset in order to distinguish between different signs. The hand landmark values form multichannel signals that are first transformed by the Common Spatial Patterns (CSP) algorithm, a dimensionality reduction technique widely used in EEG-based systems, and different features are then extracted from the transformed signals. These features are used to feed different classifiers, such as Random Forest (RF), K-Nearest Neighbors (KNN) and Multilayer Perceptron (MLP). Several experiments have been performed, yielding promising results with accuracy values between 0.90 and 0.95 on a set of 42 signs.
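
For illustration, the following minimal Python sketch shows the kind of pipeline the abstract describes: multichannel signals per video are spatially filtered with a basic two-class CSP (computed here via a generalized eigenvalue decomposition), log-variance features are extracted from the projected signals, and a Random Forest is trained on them. The array shapes, the two-class setup and the helper names (csp_filters, log_var_features) are illustrative assumptions, not the authors' implementation, which works with 42 signs and hand landmarks extracted from the LSA64 videos.

    # Minimal two-class CSP + Random Forest sketch (hypothetical shapes).
    import numpy as np
    from scipy.linalg import eigh
    from sklearn.ensemble import RandomForestClassifier


    def csp_filters(X_a, X_b, n_components=6):
        """CSP spatial filters for two classes.
        X_a, X_b: arrays of shape (n_trials, n_channels, n_samples)."""
        def mean_cov(X):
            return np.mean([np.cov(trial) for trial in X], axis=0)

        C_a, C_b = mean_cov(X_a), mean_cov(X_b)
        # Generalized eigenvalue problem: C_a w = lambda (C_a + C_b) w
        reg = 1e-6 * np.eye(C_a.shape[0])          # small regularization
        eigvals, eigvecs = eigh(C_a + reg, C_a + C_b + 2 * reg)
        order = np.argsort(eigvals)
        # Filters at both ends of the eigenvalue spectrum are the most discriminative
        pick = np.r_[order[: n_components // 2], order[-(n_components // 2):]]
        return eigvecs[:, pick].T                  # (n_components, n_channels)


    def log_var_features(X, W):
        """Project trials onto the CSP filters and use normalized log-variance."""
        Z = np.einsum("ck,nks->ncs", W, X)         # (n_trials, n_components, n_samples)
        var = Z.var(axis=2)
        return np.log(var / var.sum(axis=1, keepdims=True))


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Purely synthetic data standing in for landmark-coordinate signals:
        # 40 videos per sign, 20 channels, 80 frames (illustrative sizes only)
        X_a = rng.standard_normal((40, 20, 80))
        X_b = 1.5 * rng.standard_normal((40, 20, 80))

        W = csp_filters(X_a, X_b, n_components=6)
        X = np.concatenate([X_a, X_b])
        y = np.array([0] * len(X_a) + [1] * len(X_b))
        feats = log_var_features(X, W)

        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(feats, y)
        print("Training accuracy:", clf.score(feats, y))

Extending such a sketch to a 42-sign setting would require a multi-class CSP strategy (for example, one-versus-rest filter banks); this two-class version is only meant to illustrate the general idea.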

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Deafness*
  • Humans
  • Recognition, Psychology
  • Sign Language*

Grants and funding

This work has been partially funded by:

  • The Basque Government (https://www.euskadi.eus/gobierno-vasco/inicio/), Spain, grant number IT1427-22.
  • The Spanish Ministry of Science (MCIU) (https://www.ciencia.gob.es/), grant number PID2021-122402OB-C21.
  • The State Research Agency (AEI) (https://www.ciencia.gob.es/portal/site/MICINN/aei), grant number PID2021-122402OB-C21.
  • The European Regional Development Fund (FEDER) (https://ec.europa.eu/regional_policy/en/funding/erdf/), grant number PID2021-122402OB-C21.
  • The Spanish Ministry of Science, Innovation and Universities (https://www.ciencia.gob.es/), FPU18/04737 predoctoral grant for I. Rodríguez-Moreno.

We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.