Continuous-time Markov chain

From Wikipedia, the free encyclopedia

In probability theory, a continuous-time Markov chain (CTMC[1] or continuous-time Markov process[2]) is a stochastic process {X(t) : t ≥ 0} that satisfies the Markov property and takes values from a set called the state space; it is the continuous-time version of a Markov chain. The Markov property states that at any times s > t > 0, the conditional probability distribution of the process at time s given the whole history of the process up to and including time t, depends only on the state of the process at time t. In effect, the state of the process at time s is conditionally independent of the history of the process before time t, given the state of the process at time t. In simple terms the process can be thought of as memoryless.

Mathematical definitions

A continuous-time Markov chain, like a discrete-time Markov chain, can be thought of as a directed graph of states of the system. The difference is that, rather than transitioning to a new (possibly the same) state at each time step, the system will remain in the current state for some random (in particular, exponentially distributed) amount of time and then transition to a different state. The process is characterized by "transition rates" q_ij between states i and j. Let X(t) be the random variable describing the state of the process at time t, and assume that the process is in a state i at time t. Then q_ij (for j ≠ i) measures how quickly a transition from i to j happens. Precisely, after a tiny amount of time h, the probability the state is now j is given by[3]

Pr(X(t + h) = j | X(t) = i) = q_ij h + o(h),

where o(h) represents a quantity that goes to zero faster than h as h goes to zero (see the article on order notation). Hence, over a sufficiently small interval of time, the probability of a particular transition (between different states) is roughly proportional to the duration of that interval. The q_ij are called transition rates because, if we have a large ensemble of n systems in state i, they will switch over to state j at an average rate of n·q_ij until n decreases appreciably.

The transition rates q_ij are typically given as the ij-th elements of the transition rate matrix Q. As the transition rate matrix contains rates, the rate of departing from one state to arrive at another should be positive, and the rate that the system remains in a state should be negative. The rates for a given state should sum to zero, yielding the diagonal elements to be

q_ii = −∑_{j ≠ i} q_ij.

With this notation, and letting P(t) denote the row vector with elements P_i(t) = Pr(X(t) = i), the evolution of a continuous-time Markov chain is given by the first-order differential equation

dP(t)/dt = P(t) Q.
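As a sketch, this forward equation can be solved with the matrix exponential, P(t) = P(0)·e^{Qt}; the two-state generator below is illustrative, not taken from the article:

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

# Illustrative two-state generator (rows sum to zero); not from the article.
Q = np.array([[-3.0,  3.0],
              [ 1.0, -1.0]])

P0 = np.array([1.0, 0.0])   # start in state 0 with certainty
t = 2.0

# Exact solution of dP/dt = P Q is P(t) = P(0) exp(Q t)
Pt = P0 @ expm(Q * t)

# Crude forward-Euler integration of the same ODE as a cross-check
P = P0.copy()
dt = 1e-4
for _ in range(int(t / dt)):
    P = P + dt * (P @ Q)

print(Pt, P)  # both close to the stationary distribution (0.25, 0.75)
```

By t = 2 this chain has nearly forgotten its starting state, which is the qualitative content of ergodicity discussed below.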

The probability that no transition happens in some time r is

Pr(X(s) = i for all s ∈ (t, t + r] | X(t) = i) = e^{q_ii r} = e^{−r ∑_{j ≠ i} q_ij}.

That is, the probability distribution of the waiting time until the first transition is an exponential distribution with rate parameter −q_ii, and continuous-time Markov chains are thus memoryless processes.
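The jump dynamics implied by the rates can be simulated directly: hold in state i for an exponentially distributed time with rate −q_ii, then jump to j with probability q_ij/(−q_ii). A minimal sketch with a made-up three-state generator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up three-state generator; state 0 is left at total rate 5 = -q_00.
Q = np.array([[-5.0,  2.0,  3.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])

def jump(state):
    """One step of the jump process: (holding time, next state)."""
    rate = -Q[state, state]                   # total exit rate
    hold = rng.exponential(1.0 / rate)        # memoryless waiting time
    probs = Q[state].clip(min=0.0) / rate     # jump to j w.p. q_ij / (-q_ii)
    return hold, rng.choice(len(Q), p=probs)

holds = [jump(0)[0] for _ in range(50_000)]
print(np.mean(holds))  # close to 1/5, the mean of an Exp(5) waiting time
```

The empirical mean holding time in state 0 approaches 1/(−q_00) = 0.2, as the exponential waiting-time law predicts.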

A time-dependent (time-inhomogeneous) Markov chain is a Markov chain as above, but with the q-rates functions of time, denoted q_ij(t).

Example

[Image: directed graph representation of a continuous-time Markov chain describing the state of financial markets (note: numbers are made-up).]

The image to the right describes a continuous-time Markov chain with state-space {Bull Market, Bear Market, Recession} and transition rate matrix

Q = [ −0.025   0.02    0.005
       0.3    −0.5     0.2
       0.02    0.4    −0.42 ].
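The long-run behaviour of such a chain is its stationary distribution π, which solves πQ = 0 with ∑π_i = 1. A sketch using made-up market rates (as the figure caption notes, the numbers are illustrative):

```python
import numpy as np

states = ["Bull", "Bear", "Recession"]
# Made-up rates for the three-state market chain (rows sum to zero).
Q = np.array([[-0.025,  0.02,   0.005],
              [ 0.3,   -0.5,    0.2  ],
              [ 0.02,   0.4,   -0.42 ]])

# Stationary distribution: solve pi Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(len(Q))])
b = np.zeros(len(Q) + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(dict(zip(states, pi.round(3))))
```

For these rates the long-run fractions of time come out to roughly 0.885, 0.071 and 0.044 for the bull, bear and recession states respectively.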

Embedded Markov chain

One method of finding the stationary probability distribution, π, of an ergodic continuous-time Markov chain, Q, is by first finding its embedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain, sometimes referred to as a jump process. Each element of the one-step transition probability matrix of the EMC, S, is denoted by s_ij, and represents the conditional probability of transitioning from state i into state j. These conditional probabilities may be found by

s_ij = q_ij / ∑_{k ≠ i} q_ik   for i ≠ j,   and   s_ii = 0.

From this, S may be written as

S = I − (D_Q)^{−1} Q

where I is the identity matrix and D_Q = diag{Q} is the diagonal matrix formed by selecting the main diagonal of Q and setting all other elements to zero.

To find the stationary probability distribution vector, we must next find φ such that

φ (S − I) = 0,

with φ being a row vector, such that all elements in φ are greater than 0 and ‖φ‖₁ = 1, and the 0 on the right side also being a row vector of 0's. From this, π may be found as

π = −φ (D_Q)^{−1}.

Note that S may be periodic, even if Q is not. Once π is found, it must be normalized so that its elements sum to one.
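The EMC route can be sketched numerically: build S, find φ via the left eigenvector of S for eigenvalue 1, then rescale by −(D_Q)^{−1} and normalize. The rates below are again made-up (any ergodic generator works):

```python
import numpy as np

# Made-up ergodic three-state generator (rows sum to zero).
Q = np.array([[-0.025,  0.02,   0.005],
              [ 0.3,   -0.5,    0.2  ],
              [ 0.02,   0.4,   -0.42 ]])

D_inv = np.diag(1.0 / np.diag(Q))   # (D_Q)^-1; diagonal of Q is negative
S = np.eye(len(Q)) - D_inv @ Q      # embedded jump chain: s_ii = 0, rows sum to 1

# phi (S - I) = 0 with phi > 0: left eigenvector of S for eigenvalue 1.
w, v = np.linalg.eig(S.T)
phi = np.real(v[:, np.argmax(np.real(w))])
phi = phi / phi.sum()               # fix sign and scale so ||phi||_1 = 1

pi = -phi @ D_inv                   # un-normalized stationary vector
pi = pi / pi.sum()                  # normalize so the elements sum to one
print(pi.round(3))                  # agrees with solving pi Q = 0 directly
```

The rescaling step reflects that the EMC weights states by jump counts, while π weights them by time spent, i.e. by the mean holding time 1/(−q_ii).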

Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton—the (discrete-time) Markov chain formed by observing X(t) at intervals of δ units of time. The random variables X(0), X(δ), X(2δ), ... give the sequence of states visited by the δ-skeleton.
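A sketch of the δ-skeleton: its one-step transition matrix is e^{δQ}, and observing every 2δ units of time corresponds to taking two skeleton steps (the Chapman–Kolmogorov identity). The generator below is illustrative:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative two-state generator, not from the article.
Q = np.array([[-3.0,  3.0],
              [ 1.0, -1.0]])

delta = 0.5
P_delta = expm(Q * delta)   # one-step transition matrix of the delta-skeleton

# Observing every 2*delta units is two skeleton steps (Chapman-Kolmogorov).
P_2delta = expm(Q * 2 * delta)
print(np.allclose(P_2delta, P_delta @ P_delta))  # True
```

Each row of P_delta is a probability distribution over the next observed state, so the skeleton is an ordinary discrete-time Markov chain.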

Applications

Markov chains are used to describe physical processes where a system evolves in continuous time. Sometimes, rather than a single system, they are applied to an ensemble of identical, independent systems, and the probabilities are used to find how many members of the ensemble are in a given state. A master equation treatment is often used to analyse systems that evolve as Markov chains[citation needed], with approximations possible for complicated systems[citation needed].

Chemical reactions

Imagine a large number n of molecules in solution in state A, each of which can undergo a chemical reaction to state B with a certain average rate. Perhaps the molecule is an enzyme, and the states refer to how it is folded. The state of any single enzyme follows a Markov chain, and since the molecules are essentially independent of each other, the expected number of molecules in state A or B at a time is n times the probability that a given molecule is in that state.
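This can be sketched with a hypothetical two-state folding model (the rates are made-up): simulate n independent molecules and compare the count found in state B with n times the closed-form two-state probability:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical folding (A -> B) and unfolding (B -> A) rates; illustrative.
k_ab, k_ba = 2.0, 1.0
n, t = 10_000, 1.5

def state_at(t_end):
    """Simulate one molecule's jump process from state A (0) up to t_end."""
    s, clock, rates = 0, 0.0, (k_ab, k_ba)
    while True:
        clock += rng.exponential(1.0 / rates[s])  # memoryless holding time
        if clock > t_end:
            return s
        s = 1 - s                                 # flip between A and B

in_B = sum(state_at(t) for _ in range(n))

# Closed form for a two-state chain started in A:
p_B = k_ab / (k_ab + k_ba) * (1.0 - np.exp(-(k_ab + k_ba) * t))
print(in_B, n * p_B)  # the simulated count tracks n times the probability
```

The agreement between the simulated count and n·p_B is exactly the ensemble argument in the paragraph above.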

Queueing theory

As a simple example, imagine a first-come-first-served queue where jobs arrive as a Poisson process with rate λ and have exponentially distributed sizes with mean 1/μ. Then the number of jobs in the queue is a continuous-time Markov chain on the non-negative integers. The transition rate from one number to the next higher (meaning a new job arrives within the next instant) is λ, and the transition rate to the next lower number (meaning a job completes in the next instant) is μ.
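A sketch with illustrative rates λ < μ: truncating the infinite state space at a large N gives a finite birth–death generator whose stationary distribution matches the known M/M/1 geometric form, π_k = (1 − ρ)ρ^k with ρ = λ/μ:

```python
import numpy as np

lam, mu = 2.0, 3.0   # illustrative arrival and service rates (lam < mu)
N = 200              # truncate the infinite state space for this sketch

# Tridiagonal birth-death generator of the M/M/1 queue on {0, ..., N}
Q = np.zeros((N + 1, N + 1))
for k in range(N):
    Q[k, k + 1] = lam      # arrival:  k -> k + 1
    Q[k + 1, k] = mu       # service:  k + 1 -> k
np.fill_diagonal(Q, -Q.sum(axis=1))

# Stationary distribution: solve pi Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(N + 1)])
b = np.zeros(N + 2)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

rho = lam / mu
print(pi[0], 1 - rho)  # M/M/1: pi_k = (1 - rho) * rho**k, so pi_0 = 1 - rho
```

With these rates the mean number of jobs in the system, ∑ k·π_k, comes out to ρ/(1 − ρ) = 2, the classic M/M/1 result.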

See also

References

  1. ^ doi:10.1007/3-540-48320-9_12
  2. ^ doi:10.1287/opre.27.3.616
  3. ^ Parzen, E. (1962) Stochastic Processes, Holden-Day. ISBN 0-8162-6664-6
  • William J. Stewart (1994). Introduction to the Numerical Solution of Markov Chains. Princeton University Press. pp. 17–23. ISBN 0-691-03699-3.