Front Psychol. 2013 Aug 20;4:515. doi: 10.3389/fpsyg.2013.00515. eCollection 2013.

Modeling language and cognition with deep unsupervised learning: a tutorial overview.

Author information

1. Computational Cognitive Neuroscience Lab, Department of General Psychology, University of Padova, Padova, Italy; IRCCS San Camillo Neurorehabilitation Hospital, Venice-Lido, Italy.

Abstract

Deep unsupervised learning in stochastic recurrent neural networks with many layers of hidden units is a recent breakthrough in neural computation research. These networks build a hierarchy of progressively more complex distributed representations of the sensory data by fitting a hierarchical generative model. In this article we discuss the theoretical foundations of this approach and we review key issues related to training, testing and analysis of deep networks for modeling language and cognitive processing. The classic letter and word perception problem of McClelland and Rumelhart (1981) is used as a tutorial example to illustrate how structured and abstract representations may emerge from deep generative learning. We argue that the focus on deep architectures and generative (rather than discriminative) learning represents a crucial step forward for the connectionist modeling enterprise, because it offers a more plausible model of cortical learning as well as a way to bridge the gap between emergentist connectionist models and structured Bayesian models of cognition.
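The class of models the abstract describes — stochastic networks that fit a hierarchical generative model, layer by layer, without supervision — is typically built from stacked restricted Boltzmann machines (RBMs). As a minimal sketch (an illustrative assumption, not the authors' implementation), the code below trains a single RBM layer with one-step contrastive divergence (CD-1); stacking further layers, each trained on the hidden activations of the one below, yields the deep generative hierarchy discussed in the article.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """One layer of a deep generative hierarchy: a binary RBM trained with CD-1."""

    def __init__(self, n_visible, n_hidden, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = 0.01 * self.rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible-unit biases
        self.b_h = np.zeros(n_hidden)    # hidden-unit biases

    def hidden_probs(self, v):
        # P(h = 1 | v): bottom-up recognition pass.
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        # P(v = 1 | h): top-down generative pass.
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_update(self, v0, lr=0.1):
        # Positive phase: hidden activations driven by the data.
        h0 = self.hidden_probs(v0)
        h0_sample = (self.rng.random(h0.shape) < h0).astype(float)
        # Negative phase: one Gibbs step produces a "reconstruction".
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        # Approximate log-likelihood gradient (CD-1).
        self.W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_v += lr * (v0 - v1).mean(axis=0)
        self.b_h += lr * (h0 - h1).mean(axis=0)
        return float(np.mean((v0 - v1) ** 2))  # reconstruction error

# Toy usage: learn a single binary pattern (hypothetical data, not from the paper).
data = np.tile([1.0, 1.0, 0.0, 0.0], (20, 1))
rbm = RBM(n_visible=4, n_hidden=2)
errors = [rbm.cd1_update(data) for _ in range(200)]
```

In a full deep network, the hidden probabilities of this trained layer would become the "visible" input for the next RBM, so each layer learns progressively more abstract features of the sensory data — the greedy layer-wise scheme the abstract refers to.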

KEYWORDS:

connectionist modeling; deep learning; hierarchical generative models; neural networks; unsupervised learning; visual word recognition
