Do we use visual codes when information is not presented visually?

Mem Cognit. 2020 Nov;48(8):1522-1536. doi: 10.3758/s13421-020-01054-0.

Abstract

For many years, the working/short-term memory literature has been dominated by the study of phonological codes, and consequently insufficient attention has been devoted to visual codes. In the present study, we attempt to remedy the situation by exploring a critical aspect of modern models of working memory, namely the principle that responses depend not primarily on what kinds of materials are presented, but on what kinds of codes are generated from those materials. More specifically, we used the visual similarity effect as a tool to ask whether visual codes are generated when information is not presented visually. In two immediate serial recall experiments, we manipulated visual similarity (similar vs. dissimilar words), presentation modality (visual vs. auditory), and concurrent articulation (none vs. concurrent articulation). We observed a visual similarity effect independent of presentation modality. Comparable results were observed with two different sets of stimuli and with or without concurrent articulation. Thus, for the first time, we demonstrate that visual codes are generated in working/short-term memory from acoustically presented word lists, producing a visual similarity effect. It is now clear that recoding is bidirectional: just as visually presented material is recoded phonologically, acoustically presented material is recoded visually.

Keywords: Immediate serial recall; Presentation modality; Short-term memory; Visual similarity.

Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Attention
  • Humans
  • Memory, Short-Term
  • Mental Recall
  • Serial Learning
  • Visual Perception*