The dynamics of multimodal integration: The averaging diffusion model

Psychon Bull Rev. 2017 Dec;24(6):1819-1843. doi: 10.3758/s13423-017-1255-2.

Abstract

We combine extant theories of evidence accumulation and multimodal integration to develop an integrated framework for modeling multimodal integration as a process that unfolds in real time. Many studies have formulated sensory processing as a dynamic process in which noisy samples of evidence are accumulated until a decision is made. However, these studies are often limited to a single sensory modality. Studies of multimodal stimulus integration have focused on how best to combine different sources of information to elicit a judgment. These studies are often limited to a single time point, typically after the integration process has occurred. We address these limitations by combining the two approaches. Experimentally, we present data that allow us to study the time course of evidence accumulation within each of the visual and auditory domains as well as in a bimodal condition. Theoretically, we develop a new Averaging Diffusion Model in which the decision variable is the mean rather than the sum of evidence samples, and use it as a basis for comparing three alternative models of multimodal integration, allowing us to assess the optimality of this integration. The outcome reveals rich individual differences in multimodal integration: while some subjects' data are consistent with adaptive optimal integration, reweighting sources of evidence as their relative reliability changes during evidence integration, others exhibit patterns inconsistent with optimality.
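
The abstract's core modeling idea, an accumulator whose decision variable is the running mean of noisy evidence samples rather than their sum, can be illustrated with a minimal simulation. This is a sketch of the general idea only, not the authors' implementation; the function name, parameter values, and stopping rule are all illustrative assumptions.

```python
import random

def averaging_diffusion(drift, noise_sd, threshold, max_steps=10_000, seed=0):
    """Illustrative sketch (not the paper's model): accumulate noisy
    evidence samples, but decide on the running MEAN of the samples
    rather than their running sum. Unlike the sum, the mean's noise
    shrinks over time (its standard error falls as 1/sqrt(t))."""
    rng = random.Random(seed)  # fixed seed for a reproducible illustration
    total = 0.0
    for t in range(1, max_steps + 1):
        total += drift + rng.gauss(0.0, noise_sd)  # one noisy evidence sample
        mean_evidence = total / t                  # decision variable = sample mean
        if abs(mean_evidence) >= threshold:
            # respond according to the sign of the mean evidence
            return ("positive" if mean_evidence > 0 else "negative", t)
    return ("no decision", max_steps)  # threshold never reached
```

With strong evidence relative to the noise (e.g. `averaging_diffusion(2.0, 0.1, 1.0)`), the mean crosses the bound almost immediately; weaker or noisier evidence yields slower, more variable decision times, which is the kind of time-course behavior the paper's experiments probe.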

Keywords: Averaging diffusion model; Bayesian estimation; Cognitive modeling; Multimodal integration.

MeSH terms

  • Adult
  • Auditory Perception / physiology*
  • Humans
  • Models, Theoretical*
  • Psychomotor Performance / physiology*
  • Visual Perception / physiology*