Neural Comput. 2015 Sep;27(9):1825-56. doi: 10.1162/NECO_a_00759. Epub 2015 Jun 16.

Extracting Low-Dimensional Latent Structure from Time Series in the Presence of Delays.

Author information

1. Robotics Institute and Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213, U.S.A. karthikl@cs.cmu.edu.
2. Department of Bioengineering, Center for the Neural Basis of Cognition, and Systems Neuroscience Institute, University of Pittsburgh, Pittsburgh, PA 15261, U.S.A. patrick.t.sadtler@gmail.com.
3. Department of Neurological Surgery, Department of Bioengineering, and Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA 15261, U.S.A. Elizabeth.Tyler-Kabara@chp.edu.
4. Department of Bioengineering, Center for the Neural Basis of Cognition, and Systems Neuroscience Institute, University of Pittsburgh, Pittsburgh, PA 15261, U.S.A. apb10@pitt.edu.
5. Department of Electrical Engineering and Computer Engineering, Department of Biomedical Engineering, and Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213, U.S.A. byronyu@cmu.edu.

Abstract

Noisy, high-dimensional time series observations can often be described by a set of low-dimensional latent variables. Commonly used methods to extract these latent variables typically assume instantaneous relationships between the latent and observed variables. In many physical systems, changes in the latent variables manifest as changes in the observed variables after time delays. Techniques that do not account for these delays can recover a larger number of latent variables than are present in the system, thereby making the latent representation more difficult to interpret. In this work, we introduce a novel probabilistic technique, time-delay Gaussian-process factor analysis (TD-GPFA), that performs dimensionality reduction in the presence of a different time delay between each pair of latent and observed variables. We demonstrate how using a Gaussian process to model the evolution of each latent variable allows us to tractably learn these delays over a continuous domain. Additionally, we show how TD-GPFA combines temporal smoothing and dimensionality reduction into a common probabilistic framework. We present an expectation/conditional maximization either (ECME) algorithm to learn the model parameters. Our simulations demonstrate that when time delays are present, TD-GPFA is able to correctly identify these delays and recover the latent space. We then applied TD-GPFA to the activity of tens of neurons recorded simultaneously in the macaque motor cortex during a reaching task. TD-GPFA is able to better describe the neural activity using a more parsimonious latent space than GPFA, a method that has been used to interpret motor cortex data but does not account for time delays. More broadly, TD-GPFA can help to unravel the mechanisms underlying high-dimensional time series data by taking into account physical delays in the system.
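The abstract's key motivating claim — that delay-unaware methods recover more latent variables than the system actually has — can be illustrated with a minimal simulation. The sketch below is not TD-GPFA; it uses plain NumPy and PCA (via SVD) on synthetic data with a single latent signal observed through per-channel delays. The signal shape, delay values, and variance threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# One latent variable: a smooth periodic signal over T time steps.
T = 500
t = np.arange(T)
latent = np.sin(2 * np.pi * t / 100.0)

# Five observed channels, each a delayed copy of the latent variable
# (hypothetical per-channel delays) plus a little observation noise.
# T is a multiple of the period, so a circular shift is an exact delay.
delays = [0, 5, 10, 15, 20]
X = np.stack([np.roll(latent, d) for d in delays], axis=1)
X += 0.01 * rng.standard_normal(X.shape)

def num_components(data, var_threshold=0.95):
    """Number of principal components needed to explain var_threshold
    of the variance, computed via SVD of the mean-centered data."""
    centered = data - data.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)
    frac = s**2 / np.sum(s**2)
    return int(np.searchsorted(np.cumsum(frac), var_threshold) + 1)

# Ignoring the delays, PCA needs extra components to describe the data.
k_delayed = num_components(X)

# After aligning each channel by its true delay, one component suffices.
X_aligned = np.stack([np.roll(X[:, i], -d) for i, d in enumerate(delays)],
                     axis=1)
k_aligned = num_components(X_aligned)

print(k_delayed, k_aligned)
```

Even though every channel reflects the same one-dimensional latent signal, the phase offsets induced by the delays spread the data across multiple principal components; once the delays are compensated, a single component captures nearly all the variance. TD-GPFA's contribution, per the abstract, is learning such delays jointly with the latent variables over a continuous domain rather than requiring them to be known in advance.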

PMID: 26079746
PMCID: PMC4545403
DOI: 10.1162/NECO_a_00759
[Indexed for MEDLINE]