Version 2. F1000Res. 2017 Jul 25 [revised 2017 Oct 11];6:1222. doi: 10.12688/f1000research.12130.2. eCollection 2017.

Logarithmic distributions prove that intrinsic learning is Hebbian.

Author information

1. Carl Correns Foundation for Mathematical Biology, Mountain View, CA, 94040, USA.

Abstract

In this paper, we present data for the lognormal distributions of spike rates, synaptic weights and intrinsic excitability (gain) for neurons in various brain areas, such as auditory and visual cortex, hippocampus, cerebellum, striatum and midbrain nuclei. We find a remarkable consistency of heavy-tailed, specifically lognormal, distributions for rates, weights and gains in all brain areas examined. Differences between strongly recurrent and feed-forward connectivity (cortex vs. striatum and cerebellum), in neurotransmitter (GABA in striatum, glutamate in cortex) or in the level of activation (low in cortex, high in Purkinje cells and midbrain nuclei) turn out to be irrelevant for this feature. Logarithmic-scale distribution of weights and gains appears to be a general, functional property in all cases analyzed. We then created a generic neural model to investigate adaptive learning rules that create and maintain lognormal distributions. We conclusively demonstrate that not only weights, but also intrinsic gains, need to undergo strong Hebbian learning in order to produce and maintain the experimentally attested distributions. This provides a solution to the long-standing question about the type of plasticity exhibited by intrinsic excitability.
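As a minimal illustration of why Hebbian-style plasticity tends to produce lognormal distributions (this is a sketch of the general principle, not the paper's actual model; the function names and the learning rate `eta` are our own), consider a purely multiplicative update `w <- w * (1 + eta * x)`. Compounding many such factors is additive in log-space, so by the central limit theorem the log-weights approach a Gaussian and the weights themselves approach a lognormal:

```python
import math
import random

def skewness(xs):
    """Sample skewness: third standardized moment."""
    n = len(xs)
    mu = sum(xs) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in xs) / n)
    return sum(((x - mu) / sd) ** 3 for x in xs) / n

def multiplicative_hebbian(n_synapses=5000, n_steps=2000, eta=0.01, seed=0):
    """Toy multiplicative (Hebbian-style) growth process.

    Each step scales every weight by (1 + eta * x) with zero-mean
    random 'activity' x.  Because updates multiply rather than add,
    the log-weights perform a random walk and end up near-Gaussian,
    i.e. the weights end up near-lognormal (Gibrat-type growth).
    """
    rng = random.Random(seed)
    weights = [1.0] * n_synapses
    for _ in range(n_steps):
        weights = [w * (1.0 + eta * rng.gauss(0.0, 1.0)) for w in weights]
    return weights

w = multiplicative_hebbian()
log_w = [math.log(x) for x in w]
# Raw weights come out heavy-tailed (right-skewed), while their
# logarithms are near-symmetric, the signature of a lognormal.
```

An additive rule (`w <- w + eta * x`) run the same way would instead give a symmetric Gaussian spread of weights, which is why the multiplicative, activity-proportional character of the learning rule matters for reproducing the observed distributions.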

KEYWORDS:

Hebbian learning; intrinsic excitability; lognormal distributions; neural circuits; neural coding; neural networks; rate coding; spike frequency; synaptic weights
