J Chem Phys. 2012 Feb 14;136(6):064108. doi: 10.1063/1.3681941.

Markov processes follow from the principle of maximum caliber.

Author information

1. Beijing International Center for Mathematical Research and Biodynamic Optical Imaging Center, Peking University, Beijing 100871, People's Republic of China. haoge@pku.edu.cn

Abstract

Markov models are widely used to describe stochastic dynamics. Here, we show that Markov models follow directly from the dynamical principle of maximum caliber (Max Cal). Max Cal is a method of deriving dynamical models based on maximizing the path entropy subject to dynamical constraints. We give three different cases. First, we show that if constraints (or data) are given in the form of singlet statistics (average occupation probabilities), then maximizing the caliber predicts a time-independent process that is modeled by identically and independently distributed random variables. Second, we show that if constraints are given in the form of sequential pairwise statistics, then maximizing the caliber dictates that the kinetic process will be Markovian with a uniform initial distribution. Third, if the initial distribution is known and is not uniform, we show that the only process that maximizes the path entropy is still the Markov process. We give an example of how Max Cal can be used to discriminate between different dynamical models given data.
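The second case above can be checked numerically in a small setting. The sketch below (an illustration, not the paper's own code; the two-state chain and transition matrix are made up for the example) builds the path distribution of a 3-step, 2-state Markov chain, then perturbs it away from Markov form while keeping both sequential pairwise marginals p(x0, x1) and p(x1, x2) fixed. Because the Markov distribution is the unique maximizer of path entropy under those constraints, any such feasible perturbation must lower the entropy:

```python
import numpy as np

def path_entropy(P):
    """Shannon entropy (nats) of a flattened path distribution."""
    P = P[P > 0]
    return -np.sum(P * np.log(P))

# 2-state chain over paths (x0, x1, x2); values are illustrative.
p0 = np.array([0.5, 0.5])            # initial distribution
T = np.array([[0.7, 0.3],
              [0.4, 0.6]])           # transition matrix

# Markov path distribution: P(x0,x1,x2) = p0[x0] * T[x0,x1] * T[x1,x2]
P = np.einsum('i,ij,jk->ijk', p0, T, T)

def pairwise_stats(P):
    # sequential pairwise marginals p(x0,x1) and p(x1,x2)
    return P.sum(axis=2), P.sum(axis=0)

# Perturb away from Markov form while preserving both pairwise
# marginals: +eps on paths (0,0,0), (1,0,1); -eps on (0,0,1), (1,0,0).
eps = 0.01
Q = P.copy()
Q[0, 0, 0] += eps; Q[1, 0, 1] += eps
Q[0, 0, 1] -= eps; Q[1, 0, 0] -= eps

# The constraints are unchanged, but the path entropy drops.
assert np.allclose(pairwise_stats(P)[0], pairwise_stats(Q)[0])
assert np.allclose(pairwise_stats(P)[1], pairwise_stats(Q)[1])
print(path_entropy(P.ravel()) > path_entropy(Q.ravel()))
```

Running this confirms that among all path distributions sharing the same sequential pairwise statistics, the Markov one has the strictly larger caliber, consistent with the abstract's second result.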

PMID: 22360170
PMCID: PMC3292588
DOI: 10.1063/1.3681941
[Indexed for MEDLINE] Free PMC Article
