Single-trial classification of vowel speech imagery using common spatial patterns

Neural Netw. 2009 Nov;22(9):1334-9. doi: 10.1016/j.neunet.2009.05.008. Epub 2009 May 22.

Abstract

With the goal of providing a speech prosthesis for individuals with severe communication impairments, we propose a control scheme for brain-computer interfaces using vowel speech imagery. Electroencephalography was recorded in three healthy subjects during three tasks: imagined speech of the English vowels /a/ and /u/, and a no-action state serving as a control. Trial averages revealed readiness potentials at 200 ms after the stimulus and speech-related potentials peaking after 350 ms. Spatial filters optimized for task discrimination were designed using the common spatial patterns method, and the resultant feature vectors were classified with a nonlinear support vector machine. Overall classification accuracies ranged from 68% to 78%. Results indicate significant potential for the use of vowel speech imagery as a speech prosthesis controller.
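As a rough illustration of the pipeline described in the abstract (common spatial patterns filtering followed by nonlinear SVM classification), the following is a minimal sketch, not the authors' implementation. Trial shapes, the number of filter pairs, the log-variance feature choice, and the RBF kernel are assumptions introduced here for illustration.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.svm import SVC

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Compute CSP spatial filters from two classes of EEG trials.

    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns a (2 * n_pairs, n_channels) matrix of spatial filters.
    """
    def mean_cov(trials):
        # Average per-trial channel covariance matrices
        return np.mean([np.cov(t) for t in trials], axis=0)

    c_a, c_b = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenvalue problem: C_a w = lambda (C_a + C_b) w
    eigvals, eigvecs = eigh(c_a, c_a + c_b)
    order = np.argsort(eigvals)
    # Filters at both ends of the spectrum maximize variance for one
    # class while minimizing it for the other
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, picks].T

def csp_features(trials, filters):
    """Log of normalized variance of the spatially filtered trials."""
    projected = np.einsum('fc,ncs->nfs', filters, trials)
    var = projected.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

# Hypothetical usage: X_a and X_b hold the /a/ and /u/ imagery trials,
# each of shape (n_trials, n_channels, n_samples).
# filters = csp_filters(X_a, X_b)
# X = np.vstack([csp_features(X_a, filters), csp_features(X_b, filters)])
# y = np.r_[np.zeros(len(X_a)), np.ones(len(X_b))]
# clf = SVC(kernel='rbf').fit(X, y)   # nonlinear SVM on CSP features
```

In practice the features would be evaluated with cross-validation rather than fit on all trials at once, and the no-action control condition would be handled either pairwise or with a multi-class extension; neither detail is specified by the abstract.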

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adult
  • Algorithms
  • Brain / physiology*
  • Electroencephalography / methods*
  • Female
  • Humans
  • Imagination / physiology*
  • Language
  • Male
  • Nonlinear Dynamics
  • Phonetics*
  • Signal Processing, Computer-Assisted*
  • Speech / physiology
  • Time Factors
  • User-Computer Interface*