Neuropsychologia. 2018 Oct;119:223-232. doi: 10.1016/j.neuropsychologia.2018.08.014. Epub 2018 Aug 22.

A common representation of time across visual and auditory modalities.

Author information

1
Centro de Matemática, Computação e Cognição, Universidade Federal do ABC (UFABC), Rua Arcturus, 03. Bairro Jardim Antares, São Bernardo do Campo, CEP 09606-070 SP, Brazil.
2
Centro de Matemática, Computação e Cognição, Universidade Federal do ABC (UFABC), Rua Arcturus, 03. Bairro Jardim Antares, São Bernardo do Campo, CEP 09606-070 SP, Brazil. Electronic address: andre.cravo@ufabc.edu.br.

Abstract

The ability of humans and non-human animals to process time on the scale of milliseconds and seconds is essential for adaptive behaviour. A central question in how brains keep track of time is how specific temporal information is to different sensory modalities. In the present study, we show that the encoding of temporal intervals in the auditory and visual modalities is qualitatively similar. Human participants were instructed to reproduce intervals in the range from 750 ms to 1500 ms marked by auditory or visual stimuli. Our behavioural results suggest that, although participants were more accurate in reproducing intervals marked by auditory stimuli, performance was strongly correlated between modalities. Using multivariate pattern analysis of scalp EEG, we show that activity during late periods of the intervals was similar within and between modalities. Critically, a multivariate pattern classifier accurately predicted the elapsed interval even when trained on intervals marked by stimuli of a different sensory modality. Taken together, our results suggest that, while there are differences in the processing of intervals marked by auditory and visual stimuli, the two modalities also share a common neural representation.
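The cross-modal decoding logic described above can be illustrated with a minimal sketch: train a classifier on activity patterns from one modality and test it on the other. This is not the authors' analysis pipeline; the synthetic data, the shapes, and the nearest-class-mean classifier are all illustrative assumptions.

```python
# Minimal sketch of cross-modal decoding on synthetic EEG-like data.
# Assumption: a shared interval-dependent pattern underlies both modalities,
# with a modality-specific offset and per-trial noise on top.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels = 300, 32
classes = np.array([750, 1125, 1500])           # interval classes (ms), within the study's range
intervals = rng.choice(classes, size=n_trials)  # one interval label per trial
pattern = rng.normal(0, 1, n_channels)          # interval-scaled pattern shared across modalities

def simulate_eeg(labels, modality_offset):
    """Synthetic trials: shared interval-scaled pattern + modality offset + noise."""
    base = np.outer(labels / 1500.0, pattern)
    noise = rng.normal(0, 0.3, (len(labels), n_channels))
    return base + modality_offset + noise

X_aud = simulate_eeg(intervals, modality_offset=0.2)   # "auditory" trials
X_vis = simulate_eeg(intervals, modality_offset=-0.2)  # "visual" trials

# Train on auditory trials: one mean pattern per interval class.
means = np.stack([X_aud[intervals == c].mean(axis=0) for c in classes])

# Test on visual trials: assign each trial to the nearest class mean.
dists = ((X_vis[:, None, :] - means[None, :, :]) ** 2).sum(axis=-1)
pred = classes[np.argmin(dists, axis=1)]
acc = (pred == intervals).mean()
print(f"cross-modal decoding accuracy: {acc:.2f} (chance ~ 0.33)")
```

Because the interval-dependent pattern is shared across the two simulated modalities, the classifier generalizes across them, which is the signature of a common representation that the paper reports.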

KEYWORDS:

Audition; EEG; Multivariate pattern analysis; Time perception; Vision

[Indexed for MEDLINE]
