Does visual word identification involve a sub-phonemic level?

Cognition. 2001 Mar;78(3):B41-52. doi: 10.1016/s0010-0277(00)00121-9.

Abstract

The phonological codes activated in visual word recognition can be thought of minimally as strings of discrete and unstructured phoneme-like units. We asked whether these codes might additionally express a letter string's phonological form at a featural or gestural level. Specifically, we asked whether the priming of a word (e.g. sea, film, basic) by a rhyming non-word would depend on the non-word's phonemic-feature similarity to the word. The question was asked within a mask-prime-target-mask sequence with both brief (57 ms in Experiments 1 and 2) and long (486 ms in Experiment 1) prime durations. Non-word primes that differed from their targets by a single phonemic feature (initial voicing, as in ZEA, VILM, PASIC) led to faster target lexical decisions than non-word primes that differed by more than a single phonemic feature (e.g. VEA, JILM, SASIC). Visual word recognition seems to involve a sub-phonemic level of processing.

Publication types

  • Clinical Trial
  • Randomized Controlled Trial
  • Research Support, U.S. Gov't, P.H.S.

MeSH terms

  • Adolescent
  • Adult
  • Attention
  • Female
  • Humans
  • Male
  • Phonetics*
  • Psycholinguistics
  • Reaction Time
  • Reading*
  • Verbal Learning*