Speech-cue transmission by an algorithm to increase consonant recognition in noise for hearing-impaired listeners

J Acoust Soc Am. 2014 Dec;136(6):3325. doi: 10.1121/1.4901712.

Abstract

Consonant recognition was assessed following extraction of speech from noise using a more efficient version of the speech-segregation algorithm described in Healy, Yoho, Wang, and Wang [(2013) J. Acoust. Soc. Am. 134, 3029-3038]. Substantial increases in recognition were observed following algorithm processing, and these increases were significantly larger for hearing-impaired (HI) than for normal-hearing (NH) listeners in both speech-shaped noise and babble backgrounds. As observed previously for sentence recognition, older HI listeners having access to the algorithm performed as well as or better than young NH listeners in conditions of identical noise. It was also found that the binary masks estimated by the algorithm transmitted speech features to listeners in a fashion highly similar to that of the ideal binary mask (IBM), suggesting that the algorithm is estimating the IBM with substantial accuracy. Further, the speech features associated with voicing, manner of articulation, and place of articulation were all transmitted with relative uniformity and at relatively high levels, indicating that the algorithm and the IBM transmit speech cues without obvious deficiency. Because the current implementation of the algorithm is much more efficient, it should be more amenable to real-time implementation in devices such as hearing aids and cochlear implants.
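The ideal binary mask referenced in the abstract is a standard construct in the speech-segregation literature: each time-frequency (T-F) unit of a noisy mixture is kept if its local signal-to-noise ratio exceeds a local criterion (LC), and discarded otherwise. The following is a minimal illustrative sketch of that definition, not the paper's implementation; the toy power values and the LC of -6 dB are assumptions chosen for illustration (the specific LC used in the study is not stated in this abstract).

```python
import numpy as np

def ideal_binary_mask(speech_power, noise_power, lc_db=-6.0):
    """Compute the IBM over a time-frequency grid.

    A T-F unit is retained (mask = 1) when its local SNR in dB
    exceeds the local criterion lc_db; otherwise it is discarded
    (mask = 0). The -6 dB default is illustrative, not from the paper.
    """
    snr_db = 10.0 * np.log10(speech_power / noise_power)
    return (snr_db > lc_db).astype(float)

# Toy T-F power grids: rows = frequency channels, cols = time frames.
speech = np.array([[4.0, 0.1],
                   [1.0, 0.5]])
noise = np.array([[1.0, 1.0],
                  [1.0, 1.0]])

# Local SNRs are 6, -10, 0, and -3 dB, so only the -10 dB unit is discarded.
mask = ideal_binary_mask(speech, noise)
```

In practice the mask is applied by multiplying it against the mixture's T-F representation before resynthesis; the algorithm described in the abstract estimates such a mask from the noisy mixture alone, without access to the clean speech.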

Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Adult
  • Aged
  • Algorithms*
  • Audiometry, Pure-Tone
  • Auditory Threshold
  • Cues*
  • Female
  • Hearing Aids*
  • Hearing Loss, Sensorineural / diagnosis
  • Hearing Loss, Sensorineural / rehabilitation*
  • Humans
  • Male
  • Middle Aged
  • Perceptual Masking*
  • Phonetics*
  • Speech Perception*
  • Speech Reception Threshold Test