<PubmedArticle>
    <MedlineCitation Status="MEDLINE" Owner="NLM">
        <PMID Version="1">20141471</PMID>
        <DateCompleted>
            <Year>2010</Year>
            <Month>08</Month>
            <Day>09</Day>
        </DateCompleted>
        <DateRevised>
            <Year>2010</Year>
            <Month>05</Month>
            <Day>04</Day>
        </DateRevised>
        <Article PubModel="Print">
            <Journal>
                <ISSN IssnType="Electronic">1530-888X</ISSN>
                <JournalIssue CitedMedium="Internet">
                    <Volume>22</Volume>
                    <Issue>6</Issue>
                    <PubDate>
                        <Year>2010</Year>
                        <Month>Jun</Month>
                    </PubDate>
                </JournalIssue>
                <Title>Neural computation</Title>
                <ISOAbbreviation>Neural Comput</ISOAbbreviation>
            </Journal>
            <ArticleTitle>Learning to represent spatial transformations with factored higher-order Boltzmann machines.</ArticleTitle>
            <Pagination>
                <MedlinePgn>1473-92</MedlinePgn>
            </Pagination>
            <ELocationID EIdType="doi" ValidYN="Y">10.1162/neco.2010.01-09-953</ELocationID>
            <Abstract>
                <AbstractText>To allow the hidden units of a restricted Boltzmann machine to model the transformation between two successive images, Memisevic and Hinton (2007) introduced three-way multiplicative interactions that use the intensity of a pixel in the first image as a multiplicative gain on a learned, symmetric weight between a pixel in the second image and a hidden unit. This creates cubically many parameters, which form a three-dimensional interaction tensor. We describe a low-rank approximation to this interaction tensor that uses a sum of factors, each of which is a three-way outer product. This approximation allows efficient learning of transformations between larger image patches. Since each factor can be viewed as an image filter, the model as a whole learns optimal filter pairs for efficiently representing transformations. We demonstrate the learning of optimal filter pairs from various synthetic and real image sequences. We also show how learning about image transformations allows the model to perform a simple visual analogy task, and we show how a completely unsupervised network trained on transformations perceives multiple motions of transparent dot patterns in the same way as humans.</AbstractText>
            </Abstract>
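            <!-- Editor's note: the following is an illustrative sketch, not part of the PubMed record or the authors' code.
                 It contrasts the full three-way interaction tensor described in the abstract (cubically many parameters)
                 with the factored low-rank form (a sum of three-way outer products of filter vectors).
                 All array names (x, y, h, W, wx, wy, wh) and the sizes chosen are assumptions for clarity.

            import numpy as np

            rng = np.random.default_rng(0)
            I, J, K, F = 64, 64, 100, 200   # pixels in image 1, pixels in image 2, hidden units, factors

            x = rng.random(I)               # first image patch (acts as a multiplicative gain)
            y = rng.random(J)               # second image patch
            h = rng.random(K)               # hidden unit activities

            # Full three-way interaction: one weight per (pixel_i, pixel_j, hidden_k) triple.
            W = rng.standard_normal((I, J, K)) * 0.01
            interaction_full = np.einsum('i,j,k,ijk->', x, y, h, W)

            # Factored approximation: factor f is the three-way outer product of filters
            # wx[:, f], wy[:, f], wh[:, f]; the tensor is approximated by the sum over factors,
            # so the interaction reduces to a product of three filter responses per factor.
            wx = rng.standard_normal((I, F)) * 0.01
            wy = rng.standard_normal((J, F)) * 0.01
            wh = rng.standard_normal((K, F)) * 0.01
            interaction_factored = np.sum((x @ wx) * (y @ wy) * (h @ wh))

            # Parameter count: I*J*K for the full tensor vs (I + J + K)*F when factored.
            print(I * J * K, (I + J + K) * F)
            -->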
            <AuthorList CompleteYN="Y">
                <Author ValidYN="Y">
                    <LastName>Memisevic</LastName>
                    <ForeName>Roland</ForeName>
                    <Initials>R</Initials>
                    <AffiliationInfo>
                        <Affiliation>Department of Computer Science, University of Toronto, Toronto M5S 3G4, Canada. roland@cs.toronto.edu</Affiliation>
                    </AffiliationInfo>
                </Author>
                <Author ValidYN="Y">
                    <LastName>Hinton</LastName>
                    <ForeName>Geoffrey E</ForeName>
                    <Initials>GE</Initials>
                </Author>
            </AuthorList>
            <Language>eng</Language>
            <PublicationTypeList>
                <PublicationType UI="D016428">Journal Article</PublicationType>
            </PublicationTypeList>
        </Article>
        <MedlineJournalInfo>
            <Country>United States</Country>
            <MedlineTA>Neural Comput</MedlineTA>
            <NlmUniqueID>9426182</NlmUniqueID>
            <ISSNLinking>0899-7667</ISSNLinking>
        </MedlineJournalInfo>
        <CitationSubset>IM</CitationSubset>
        <MeshHeadingList>
            <MeshHeading>
                <DescriptorName UI="D000465" MajorTopicYN="N">Algorithms</DescriptorName>
            </MeshHeading>
            <MeshHeading>
                <DescriptorName UI="D001185" MajorTopicYN="Y">Artificial Intelligence</DescriptorName>
            </MeshHeading>
            <MeshHeading>
                <DescriptorName UI="D007091" MajorTopicYN="N">Image Processing, Computer-Assisted</DescriptorName>
                <QualifierName UI="Q000379" MajorTopicYN="Y">methods</QualifierName>
            </MeshHeading>
            <MeshHeading>
                <DescriptorName UI="D055641" MajorTopicYN="N">Mathematical Concepts</DescriptorName>
            </MeshHeading>
            <MeshHeading>
                <DescriptorName UI="D016571" MajorTopicYN="Y">Neural Networks (Computer)</DescriptorName>
            </MeshHeading>
            <MeshHeading>
                <DescriptorName UI="D010363" MajorTopicYN="N">Pattern Recognition, Automated</DescriptorName>
                <QualifierName UI="Q000379" MajorTopicYN="Y">methods</QualifierName>
            </MeshHeading>
            <MeshHeading>
                <DescriptorName UI="D010364" MajorTopicYN="N">Pattern Recognition, Visual</DescriptorName>
                <QualifierName UI="Q000502" MajorTopicYN="Y">physiology</QualifierName>
            </MeshHeading>
            <MeshHeading>
                <DescriptorName UI="D013028" MajorTopicYN="N">Space Perception</DescriptorName>
                <QualifierName UI="Q000502" MajorTopicYN="Y">physiology</QualifierName>
            </MeshHeading>
        </MeshHeadingList>
    </MedlineCitation>
    <PubmedData>
        <History>
            <PubMedPubDate PubStatus="entrez">
                <Year>2010</Year>
                <Month>2</Month>
                <Day>10</Day>
                <Hour>6</Hour>
                <Minute>0</Minute>
            </PubMedPubDate>
            <PubMedPubDate PubStatus="pubmed">
                <Year>2010</Year>
                <Month>2</Month>
                <Day>10</Day>
                <Hour>6</Hour>
                <Minute>0</Minute>
            </PubMedPubDate>
            <PubMedPubDate PubStatus="medline">
                <Year>2010</Year>
                <Month>8</Month>
                <Day>10</Day>
                <Hour>6</Hour>
                <Minute>0</Minute>
            </PubMedPubDate>
        </History>
        <PublicationStatus>ppublish</PublicationStatus>
        <ArticleIdList>
            <ArticleId IdType="pubmed">20141471</ArticleId>
            <ArticleId IdType="doi">10.1162/neco.2010.01-09-953</ArticleId>
        </ArticleIdList>
    </PubmedData>
</PubmedArticle>