Neural Comput. 2010 Apr;22(4):998-1024. doi: 10.1162/neco.2009.11-08-912.

A continuous entropy rate estimator for spike trains using a K-means-based context tree.

Author information

Revelle College, University of California, San Diego, La Jolla, CA 92092, USA. wulin@ucsd.edu

Abstract

Entropy rate quantifies the average information generated per symbol by a stochastic process (Cover & Thomas, 2006). For decades, the temporal dynamics of spike trains generated by neurons have been studied as a stochastic process (Barbieri, Quirk, Frank, Wilson, & Brown, 2001; Brown, Frank, Tang, Quirk, & Wilson, 1998; Kass & Ventura, 2001; Metzner, Koch, Wessel, & Gabbiani, 1998; Zhang, Ginzburg, McNaughton, & Sejnowski, 1998). We propose here to estimate the entropy rate of a spike train from an inhomogeneous hidden Markov model of the spike intervals. The model is constructed by building a context tree structure to lay out the conditional probabilities of various subsequences of the spike train. For each state in the Markov chain, we assume a gamma distribution over the spike intervals, although any appropriate distribution may be employed as circumstances dictate. The entropy and its confidence intervals are calculated from bootstrap samples drawn from a large raw data sequence. The estimator was first tested on synthetic data generated by multiple-order Markov chains, and it converged to the theoretical Shannon entropy rate in every case except a sixth-order model, for which the calculations were terminated before convergence was reached. We also applied the method to experimental data and compared its performance with that of several other methods of entropy estimation.
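The gamma-interval model and bootstrap confidence intervals described above can be illustrated with a minimal sketch. This is not the authors' context-tree estimator: it treats the interspike intervals as i.i.d. draws from a single gamma distribution, estimates the differential entropy by Monte Carlo as -E[log p(X)], and bootstraps a confidence interval. The shape/scale values, sample sizes, and function names below are hypothetical.

```python
import math
import random

def gamma_log_pdf(x, k, theta):
    # log density of Gamma(shape=k, scale=theta)
    return (k - 1) * math.log(x) - x / theta - k * math.log(theta) - math.lgamma(k)

def mc_entropy(samples, k, theta):
    # Monte Carlo differential entropy (nats per interval): -mean of log p(X)
    return -sum(gamma_log_pdf(x, k, theta) for x in samples) / len(samples)

def bootstrap_ci(samples, k, theta, n_boot=200, alpha=0.05, seed=0):
    # percentile bootstrap interval for the entropy estimate
    rng = random.Random(seed)
    n = len(samples)
    ests = sorted(
        mc_entropy([samples[rng.randrange(n)] for _ in range(n)], k, theta)
        for _ in range(n_boot)
    )
    return ests[int(alpha / 2 * n_boot)], ests[int((1 - alpha / 2) * n_boot) - 1]

random.seed(1)
k, theta = 2.0, 0.05  # hypothetical ISI shape/scale (scale in seconds)
isis = [random.gammavariate(k, theta) for _ in range(5000)]
h_hat = mc_entropy(isis, k, theta)
lo, hi = bootstrap_ci(isis, k, theta)
print(f"entropy estimate: {h_hat:.3f} nats, 95% CI ({lo:.3f}, {hi:.3f})")
```

In the paper's setting the gamma parameters would instead be fit separately for each context (each state of the Markov chain), and the entropies of the per-state interval distributions would be combined under the stationary distribution of the chain.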

PMID: 19922298
DOI: 10.1162/neco.2009.11-08-912
[Indexed for MEDLINE]
