Annu Rev Neurosci. 2018 Jul 8;41:233-253. doi: 10.1146/annurev-neuro-080317-061948.

Computational Principles of Supervised Learning in the Cerebellum.

Author information

1. Department of Neurobiology, Stanford University School of Medicine, Stanford, California 94305, USA; email: jennifer.raymond@stanford.edu.
2. Department of Neuroscience, Baylor College of Medicine, Houston, Texas 77030, USA; email: jfmedina@bcm.edu.

Abstract

Supervised learning plays a key role in the operation of many biological and artificial neural networks. Analysis of the computations underlying supervised learning is facilitated by the relatively simple and uniform architecture of the cerebellum, a brain area that supports numerous motor, sensory, and cognitive functions. We highlight recent discoveries indicating that the cerebellum implements supervised learning using the following organizational principles: (a) extensive preprocessing of input representations (i.e., feature engineering), (b) massively recurrent circuit architecture, (c) linear input-output computations, (d) sophisticated instructive signals that can be regulated and are predictive, (e) adaptive mechanisms of plasticity with multiple timescales, and (f) task-specific hardware specializations. The principles emerging from studies of the cerebellum have striking parallels with those in other brain areas and in artificial neural networks, as well as some notable differences, which can inform future research on supervised learning and inspire next-generation machine-based algorithms.
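To make the supervised-learning framing concrete, the sketch below shows a minimal delta-rule learner: a single linear unit whose weights are adjusted by an error signal, loosely analogous to a Purkinje cell whose parallel-fiber synapses are modified under climbing-fiber instruction. This is an illustrative toy model under our own assumptions, not a model or algorithm taken from the review; all names and parameters here are hypothetical.

```python
import numpy as np

# Illustrative sketch (not from the paper): a linear unit trained with a
# delta rule. The error term plays the role of an instructive signal,
# loosely analogous to climbing-fiber input; the weights play the role of
# parallel-fiber synaptic strengths. All parameters are arbitrary choices.

rng = np.random.default_rng(0)

n_inputs = 50                         # hypothetical input dimensionality
w_true = rng.normal(size=n_inputs)    # target linear mapping to be learned
w = np.zeros(n_inputs)                # learned weights, start at zero
lr = 0.01                             # learning rate

for _ in range(2000):
    x = rng.normal(size=n_inputs)     # random input activity pattern
    target = w_true @ x               # desired output for this input
    output = w @ x                    # linear input-output computation
    error = target - output          # instructive (error) signal
    w += lr * error * x              # delta-rule weight update

# After training, the learned weights approximate the target mapping.
print(np.allclose(w, w_true, atol=0.1))
```

Because both the output and the learning rule are linear in the weights, the error shrinks toward zero over trials; richer behavior (recurrence, multiple plasticity timescales) would require additions beyond this toy example.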

KEYWORDS:

Purkinje cell; climbing fiber; consolidation; decorrelation; machine learning; plasticity

PMID: 29986160
PMCID: PMC6056176 [Available on 2019-07-08]
DOI: 10.1146/annurev-neuro-080317-061948
[Indexed for MEDLINE]
