<PubmedArticle>
    <MedlineCitation Status="MEDLINE" Owner="NLM">
        <PMID Version="1">23800216</PMID>
        <DateCompleted>
            <Year>2015</Year>
            <Month>04</Month>
            <Day>23</Day>
        </DateCompleted>
        <DateRevised>
            <Year>2014</Year>
            <Month>08</Month>
            <Day>19</Day>
        </DateRevised>
        <Article PubModel="Print-Electronic">
            <Journal>
                <ISSN IssnType="Electronic">1551-6709</ISSN>
                <JournalIssue CitedMedium="Internet">
                    <Volume>38</Volume>
                    <Issue>6</Issue>
                    <PubDate>
                        <Year>2014</Year>
                        <Month>Aug</Month>
                    </PubDate>
                </JournalIssue>
                <Title>Cognitive science</Title>
                <ISOAbbreviation>Cogn Sci</ISOAbbreviation>
            </Journal>
            <ArticleTitle>Where do features come from?</ArticleTitle>
            <Pagination>
                <MedlinePgn>1078-101</MedlinePgn>
            </Pagination>
            <ELocationID EIdType="doi" ValidYN="Y">10.1111/cogs.12049</ELocationID>
            <Abstract>
                <AbstractText>It is possible to learn multiple layers of non-linear features by backpropagating error derivatives through a feedforward neural network. This is a very effective learning procedure when there is a huge amount of labeled training data, but for many learning tasks very few labeled examples are available. In an effort to overcome the need for labeled data, several different generative models were developed that learned interesting features by modeling the higher order statistical structure of a set of input vectors. One of these generative models, the restricted Boltzmann machine (RBM), has no connections between its hidden units and this makes perceptual inference and learning much simpler. More significantly, after a layer of hidden features has been learned, the activities of these features can be used as training data for another RBM. By applying this idea recursively, it is possible to learn a deep hierarchy of progressively more complicated features without requiring any labeled data. This deep hierarchy can then be treated as a feedforward neural network which can be discriminatively fine-tuned using backpropagation. Using a stack of RBMs to initialize the weights of a feedforward neural network allows backpropagation to work effectively in much deeper networks and it leads to much better generalization. A stack of RBMs can also be used to initialize a deep Boltzmann machine that has many hidden layers. Combining this initialization method with a new method for fine-tuning the weights finally leads to the first efficient way of training Boltzmann machines with many hidden layers and millions of weights.</AbstractText>
                <CopyrightInformation>Copyright © 2013 Cognitive Science Society, Inc.</CopyrightInformation>
            </Abstract>
            <AuthorList CompleteYN="Y">
                <Author ValidYN="Y">
                    <LastName>Hinton</LastName>
                    <ForeName>Geoffrey</ForeName>
                    <Initials>G</Initials>
                    <AffiliationInfo>
                        <Affiliation>Department of Computer Science, University of Toronto.</Affiliation>
                    </AffiliationInfo>
                </Author>
            </AuthorList>
            <Language>eng</Language>
            <PublicationTypeList>
                <PublicationType UI="D016428">Journal Article</PublicationType>
            </PublicationTypeList>
            <ArticleDate DateType="Electronic">
                <Year>2013</Year>
                <Month>06</Month>
                <Day>25</Day>
            </ArticleDate>
        </Article>
        <MedlineJournalInfo>
            <Country>United States</Country>
            <MedlineTA>Cogn Sci</MedlineTA>
            <NlmUniqueID>7708195</NlmUniqueID>
            <ISSNLinking>0364-0213</ISSNLinking>
        </MedlineJournalInfo>
        <CitationSubset>IM</CitationSubset>
        <MeshHeadingList>
            <MeshHeading>
                <DescriptorName UI="D001185" MajorTopicYN="N">Artificial Intelligence</DescriptorName>
            </MeshHeading>
            <MeshHeading>
                <DescriptorName UI="D003198" MajorTopicYN="N">Computer Simulation</DescriptorName>
            </MeshHeading>
            <MeshHeading>
                <DescriptorName UI="D006801" MajorTopicYN="N">Humans</DescriptorName>
            </MeshHeading>
            <MeshHeading>
                <DescriptorName UI="D007858" MajorTopicYN="Y">Learning</DescriptorName>
            </MeshHeading>
            <MeshHeading>
                <DescriptorName UI="D008959" MajorTopicYN="Y">Models, Neurological</DescriptorName>
            </MeshHeading>
            <MeshHeading>
                <DescriptorName UI="D016571" MajorTopicYN="Y">Neural Networks (Computer)</DescriptorName>
            </MeshHeading>
        </MeshHeadingList>
        <KeywordList Owner="NOTNLM">
            <Keyword MajorTopicYN="N">Backpropagation</Keyword>
            <Keyword MajorTopicYN="N">Boltzmann machines</Keyword>
            <Keyword MajorTopicYN="N">Contrastive divergence</Keyword>
            <Keyword MajorTopicYN="N">Deep learning</Keyword>
            <Keyword MajorTopicYN="N">Distributed representations</Keyword>
            <Keyword MajorTopicYN="N">Learning features</Keyword>
            <Keyword MajorTopicYN="N">Learning graphical models</Keyword>
            <Keyword MajorTopicYN="N">Variational learning</Keyword>
        </KeywordList>
    </MedlineCitation>
    <PubmedData>
        <History>
            <PubMedPubDate PubStatus="received">
                <Year>2010</Year>
                <Month>10</Month>
                <Day>11</Day>
            </PubMedPubDate>
            <PubMedPubDate PubStatus="revised">
                <Year>2012</Year>
                <Month>05</Month>
                <Day>22</Day>
            </PubMedPubDate>
            <PubMedPubDate PubStatus="accepted">
                <Year>2012</Year>
                <Month>07</Month>
                <Day>10</Day>
            </PubMedPubDate>
            <PubMedPubDate PubStatus="entrez">
                <Year>2013</Year>
                <Month>6</Month>
                <Day>27</Day>
                <Hour>6</Hour>
                <Minute>0</Minute>
            </PubMedPubDate>
            <PubMedPubDate PubStatus="pubmed">
                <Year>2013</Year>
                <Month>6</Month>
                <Day>27</Day>
                <Hour>6</Hour>
                <Minute>0</Minute>
            </PubMedPubDate>
            <PubMedPubDate PubStatus="medline">
                <Year>2015</Year>
                <Month>4</Month>
                <Day>24</Day>
                <Hour>6</Hour>
                <Minute>0</Minute>
            </PubMedPubDate>
        </History>
        <PublicationStatus>ppublish</PublicationStatus>
        <ArticleIdList>
            <ArticleId IdType="pubmed">23800216</ArticleId>
            <ArticleId IdType="doi">10.1111/cogs.12049</ArticleId>
        </ArticleIdList>
    </PubmedData>
</PubmedArticle>