Markov chain

A Markov chain is a Markov process with a discrete time parameter [1]. The Markov chain is a useful way to model systems with no long-term memory of previous states. That is, the state of the system at time <math>t</math> is solely a function of the state at time <math>t - 1</math>, and not of any previous states [2].

A Formal Model

The influence of the values of <math>X^{(0)}, \ldots, X^{(n)}</math> on the distribution of <math>X^{(n+1)}</math> can be formally modelled as:

<math>\Pr\left( X^{(n+1)} = x_{n+1} \mid X^{(n)} = x_n , X^{(t)} = x_t \mbox{ for all } t \in E \right) = \Pr\left( X^{(n+1)} = x_{n+1} \mid X^{(n)} = x_n \right)</math> (''Eq. 1'')

In this model, <math>E</math> is any desired subset of the series <math>\left\{ 0, \ldots, n-1 \right\}</math>. These <math>t</math> indexes commonly represent the time component, and the range of <math>X^{(t)}</math> is the Markov chain's ''state space'' [1].
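
To make the Markov property concrete, here is a minimal Python sketch in which each draw of the next state consults only the current state, never the earlier history. The two-state weather chain, its transition probabilities, and the helper names <code>step</code> and <code>simulate</code> are illustrative assumptions, not something given in the article:

<pre>
import random

# Illustrative transition probabilities p(y | x) for a two-state chain.
TRANSITION = {
    "sunny": {"sunny": 0.9, "rainy": 0.1},
    "rainy": {"sunny": 0.5, "rainy": 0.5},
}

def step(state):
    # Draw X^(t+1) using only X^(t), as Eq. 1 requires.
    r = random.random()
    cumulative = 0.0
    for next_state, prob in TRANSITION[state].items():
        cumulative += prob
        if r < cumulative:
            return next_state
    return next_state  # guard against floating-point round-off

def simulate(initial, n):
    # Generate X^(0), ..., X^(n-1); states before X^(t) are never consulted.
    chain = [initial]
    while len(chain) < n:
        chain.append(step(chain[-1]))
    return chain

print(simulate("sunny", 10))
</pre>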

Probability Density

The Markov chain can also be specified using a series of probabilities. If the initial probability of state <math>x</math> is <math>p_{(0)}(x)</math>, then the probability of state <math>y</math> occurring at time <math>n+1</math> can be expressed as:

<math>p_{\left(n+1\right)}\left(y\right) = \sum_{x} p_{\left(n\right)}\left(x\right) p\left(y \mid x\right)</math> (''Eq. 2'')

In words, this states that the probability of the system entering state <math>y</math> at time <math>n+1</math> is the sum, over all states <math>x</math>, of the product of the probability density at time <math>n</math> and the probability of state <math>y</math> given state <math>x</math> [2].
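
Eq. 2 translates directly into a density-propagation step. The sketch below, assuming an illustrative two-state transition matrix <code>P</code> and a helper name <code>next_density</code>, sums the products <math>p_{(n)}(x) \, p(y \mid x)</math> over all states <math>x</math>:

<pre>
# Rows of P hold the transition probabilities p(y | x) for a fixed x.
P = [
    [0.9, 0.1],  # p(y | x = 0) for y = 0, 1
    [0.5, 0.5],  # p(y | x = 1) for y = 0, 1
]

def next_density(p_n):
    # Eq. 2: p_(n+1)(y) = sum over x of p_(n)(x) * p(y | x).
    return [sum(p_n[x] * P[x][y] for x in range(len(P)))
            for y in range(len(P))]

p = [1.0, 0.0]  # p_(0): the chain starts in state 0 with certainty
for n in range(3):
    p = next_density(p)
    print("p_(%d) = %s" % (n + 1, p))
</pre>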

Invariant Distributions

In many cases, the density will approach a limit <math>p\left(y\right)</math> that is uniquely determined by <math>p\left(y \mid x\right)</math> (and not <math>p_{\left(n\right)}\left(x\right)</math>). This limiting distribution is referred to as the ''invariant'' (or ''stationary'') distribution over the states of the Markov chain. When such a distribution is reached, it persists forever [2].
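
One way to see this behaviour is to iterate Eq. 2 until the density stops changing; the resulting fixed point satisfies <math>p(y) = \sum_{x} p(x) \, p(y \mid x)</math>. The matrix, tolerance, and starting density in this sketch are illustrative assumptions carried over from the previous example:

<pre>
P = [
    [0.9, 0.1],
    [0.5, 0.5],
]

def next_density(p_n):
    return [sum(p_n[x] * P[x][y] for x in range(len(P)))
            for y in range(len(P))]

# Iterate Eq. 2 until successive densities agree to within a small tolerance.
p = [1.0, 0.0]
while True:
    p_next = next_density(p)
    if max(abs(a - b) for a, b in zip(p, p_next)) < 1e-12:
        break
    p = p_next

# Approaches [5/6, 1/6]; the same limit is reached from any starting
# density, illustrating that it is determined by p(y | x) alone.
print(p)
</pre>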

References

  1. Neal, R.M. (1993) Probabilistic Inference Using Markov Chain Monte Carlo Methods. Technical Report CRG-TR-93-1. Department of Computer Science, University of Toronto. http://www.cs.toronto.edu/~radford/review.abstract.html
  2. Lee, P.M. (2004) Bayesian Statistics: An Introduction. New York: Hodder Arnold. 368 p.