Brain Res. 2015 May 5;1606:54-67. doi: 10.1016/j.brainres.2015.02.026. Epub 2015 Feb 23.

Auditory feedback in error-based learning of motor regularity.

Author information

1. Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team CNRS-UMR 5292, INSERM U1028, University Lyon-1, Lyon, France; Institute of Music Physiology and Musicians' Medicine, University of Music, Drama, and Media, Hanover, Germany. Electronic address: f.t.vanvugt@gmail.com.
2. Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team CNRS-UMR 5292, INSERM U1028, University Lyon-1, Lyon, France.

Abstract

Music and speech are skills that require high temporal precision of motor output. A key question is how humans achieve this timing precision given the poor temporal resolution of somatosensory feedback, which is classically considered to drive motor learning. We hypothesise that auditory feedback critically contributes to timing learning and that, similarly to visuo-spatial learning models, learning proceeds by correcting a proportion of perceived timing errors. Thirty-six participants learned to tap a sequence regularly in time. For participants in the synchronous-sound group, a tone was presented simultaneously with every keystroke. For the jittered-sound group, the tone was presented after a random delay of 10-190 ms following the keystroke, thus degrading the temporal information that the sound provided about the movement. For the mute group, no keystroke-triggered sound was presented. In line with the model predictions, participants in the synchronous-sound group were able to improve tapping regularity, whereas the jittered-sound and mute groups were not. The improved tapping regularity of the synchronous-sound group also transferred to a novel sequence and was maintained when sound was subsequently removed. The present findings provide evidence that humans engage in auditory feedback error-based learning to improve movement quality (here, reduced variability in sequence tapping). We thus elucidate the mechanism by which high temporal precision of movement can be achieved through sound in a way that may not be possible with less temporally precise somatosensory modalities. Furthermore, the finding that sound-supported learning generalises to novel sequences suggests potential rehabilitation applications.
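The proportional error-correction idea in the abstract can be illustrated with a toy simulation. The sketch below assumes a drifting internal timekeeper (in the spirit of two-level timing models) whose period is nudged back toward the target by a fraction alpha of the perceived interval error; jittered feedback corrupts that error signal, and the mute condition removes correction entirely. All parameter values, the `tapping_sd` function, and the random-walk timekeeper are illustrative assumptions, not the authors' model or fitted values.

```python
import random
import statistics

def tapping_sd(n_taps=2000, target=500.0, motor_sd=20.0, drift_sd=5.0,
               alpha=0.3, jitter_sd=0.0, feedback=True, seed=1):
    """Simulated SD of inter-tap intervals (ms) under proportional
    error correction of a drifting internal timekeeper.

    jitter_sd adds noise to the perceived error (jittered-sound group);
    feedback=False disables correction (mute group). Parameter values
    are illustrative, not fitted to the study's data.
    """
    rng = random.Random(seed)
    period = target
    intervals = []
    for _ in range(n_taps):
        interval = period + rng.gauss(0, motor_sd)    # motor execution noise
        intervals.append(interval)
        # auditory feedback reports the interval error, possibly jittered
        perceived_error = (interval - target) + rng.gauss(0, jitter_sd)
        correction = alpha * perceived_error if feedback else 0.0
        period += rng.gauss(0, drift_sd) - correction  # timekeeper update
    return statistics.stdev(intervals)

sync = tapping_sd(jitter_sd=0.0)
jittered = tapping_sd(jitter_sd=52.0)  # ~SD of a uniform 10-190 ms delay
mute = tapping_sd(feedback=False)
```

With these illustrative parameters the simulated interval SD is lowest with synchronous feedback, intermediate with jittered feedback, and largest with no feedback (where timekeeper drift accumulates unchecked), qualitatively matching the group ordering reported in the study.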

KEYWORDS:

Action–perception coupling; Auditory feedback; Feedback error-based learning; Motor learning; Movement variability; Music; Timing

PMID: 25721795
DOI: 10.1016/j.brainres.2015.02.026
[Indexed for MEDLINE]
