{{subpages}}
 
A '''Markov chain''' is a [[Markov process]] with a discrete time parameter <ref name="Neal1993">Neal, R.M. (1993) Probabilistic Inference Using Markov Chain Monte Carlo Methods. Technical Report CRG-TR-93-1. Department of Computer Science, University of Toronto. http://www.cs.toronto.edu/~radford/review.abstract.html</ref>. The Markov chain is a useful way to model systems with no long-term memory of previous states. That is, the state of the system at time <math>t + 1</math> is solely a function of the state at time <math>t</math>, and not of any earlier states <ref name="Lee2004">Peter M. Lee (2004) Bayesian Statistics: An Introduction. New York: Hodder Arnold. 368 p.</ref>.


==A Formal Model==
The influence of the values of <math>x^{\left(0\right)}, \ldots, x^{\left(n\right)}</math> on the distribution of <math>x^{\left(n+1\right)}</math> can be formally modelled as:

{|border="0" cellpadding="5"
|<math>P\left(x^{\left(n+1\right)} \mid x^{\left(n\right)}, \left\{ x^{\left(t\right)} : t \in E \right\}\right) = P\left(x^{\left(n+1\right)} \mid x^{\left(n\right)}\right)</math>
|Eq. 1
|}

In this model, <math>E</math> is any desired subset of the series <math>0, \ldots, n - 1</math>. These indexes commonly represent the time component, and the range of <math>x</math> is the Markov chain's state space <ref name="Neal1993" />.
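
To make the property concrete, the following sketch simulates a small chain in Python. The two-state "weather" chain and its transition probabilities are illustrative assumptions, not taken from the cited sources; the point is only that the next state is sampled from the current state alone, as Eq. 1 requires.

<pre>
import random

# Hypothetical two-state chain; the transition probabilities are
# illustrative assumptions, not taken from the cited sources.
T = {
    "sunny": {"sunny": 0.9, "rainy": 0.1},
    "rainy": {"sunny": 0.5, "rainy": 0.5},
}

def step(state):
    """Sample the next state from T[state]; per Eq. 1, earlier
    history is never consulted."""
    r = random.random()
    total = 0.0
    for nxt, p in T[state].items():
        total += p
        if r < total:
            return nxt
    return nxt  # guard against floating-point round-off

state = "sunny"
chain = [state]
for _ in range(10):
    state = step(state)
    chain.append(state)
print(chain)
</pre>
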
==Probability Density==
The Markov chain can also be specified using a series of probabilities. If the initial probability of the state <math>x</math> is <math>p_{\left(0\right)}\left(x\right)</math>, then the probability of state <math>y</math> occurring at time <math>n + 1</math> can be expressed as:


{|border="0" cellpadding="5"
|<math>p_{\left(n+1\right)}\left(y\right) = \sum_{x} p_{\left(n\right)}\left(x\right) P\left(y \mid x\right)</math>
|Eq. 2
|}


In words, this states that the probability of the system entering state <math>y</math> at time <math>n + 1</math> is the sum, over all states <math>x</math>, of the probability density at time <math>n</math> multiplied by the probability of state <math>y</math> given state <math>x</math> <ref name="Lee2004" />.
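
For a finite state space, Eq. 2 amounts to a vector-matrix product. The sketch below is a minimal illustration, assuming a hypothetical two-state chain with a row-stochastic transition matrix:

<pre>
def next_density(p_n, T):
    """Apply Eq. 2: p_(n+1)(y) is the sum over x of
    p_(n)(x) * T[x][y], where T[x][y] = P(y | x)."""
    k = len(p_n)
    return [sum(p_n[x] * T[x][y] for x in range(k)) for y in range(k)]

# Assumed transition matrix for a two-state chain (rows sum to 1).
T = [[0.9, 0.1],
     [0.5, 0.5]]
p0 = [1.0, 0.0]              # the chain starts in state 0 with certainty
print(next_density(p0, T))   # [0.9, 0.1]
</pre>
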


==Invariant Distributions==
In many cases, the density <math>p_{\left(n\right)}</math> will approach a limit that is uniquely determined by the transition probabilities (and not by the initial density <math>p_{\left(0\right)}</math>). This limiting distribution is referred to as the invariant (or stationary) distribution over the states of the Markov chain. When such a distribution is reached, it persists forever <ref name="Lee2004" />.
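
A minimal sketch of this convergence, reusing the assumed two-state matrix from the sketches above: iterating Eq. 2 from two different initial densities reaches the same limit.

<pre>
def next_density(p, T):
    k = len(p)
    return [sum(p[x] * T[x][y] for x in range(k)) for y in range(k)]

def invariant(T, p0, tol=1e-12, max_iter=100000):
    """Iterate Eq. 2 until the density stops changing."""
    p = p0
    for _ in range(max_iter):
        q = next_density(p, T)
        if max(abs(a - b) for a, b in zip(p, q)) < tol:
            return q
        p = q
    return p

T = [[0.9, 0.1],
     [0.5, 0.5]]
print(invariant(T, [1.0, 0.0]))  # ~[0.8333, 0.1667]
print(invariant(T, [0.0, 1.0]))  # the same limit from a different start
</pre>
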


==References==
<references />

[[Category:Suggestion Bot Tag]]
