Continuous-time Markov process

In probability theory, a continuous-time Markov process is a stochastic process { X(t) : t ≥ 0 } that satisfies the Markov property and takes values from a set called the state space; it is the continuous-time version of a Markov chain. The Markov property states that for any times s > t ≥ 0, the conditional probability distribution of the process at time s, given the whole history of the process up to and including time t, depends only on the state of the process at time t. In effect, the state of the process at time s is conditionally independent of the history of the process before time t, given the state of the process at time t.

Mathematical definitions

Figure: Directed graph representation of a Markov process describing the state of financial markets (note: the numbers are made up).

A Markov process, like a Markov chain, can be thought of as a directed graph of states of the system. The difference is that, rather than transitioning to a new (possibly the same) state at each time step, the system will remain in the current state for some random (in particular, exponentially distributed) amount of time and then transition to a different state. The process is characterized by "transition rates" qij between states i and j. Let X(t) be the random variable describing the state of the process at time t, and assume that the process is in state i at time t. Then qij (for i ≠ j) measures how quickly the transition from i to j happens. Precisely, after a tiny amount of time h, the probability that the state is j is given by[1]

\Pr(X(t+h) = j | X(t) = i) = q_{ij}h + o(h),\quad i\neq j,

where o(h) represents a quantity that goes to zero faster than h goes to zero (see the article on order notation). Hence, over a sufficiently small interval of time, the probability of a particular transition (between different states) is roughly proportional to the duration of that interval. The qij are called transition rates because if we have a large ensemble of n systems in state i, they will switch over to state j at an average rate of nqij until n decreases appreciably.

The transition rates qij are typically given as the ij-th elements of the transition rate matrix Q (also known as an intensity matrix[2]). Since the off-diagonal entry qij is the rate of departing state i to arrive at state j, it is non-negative, while each diagonal entry is negative and chosen so that every row of Q sums to zero, yielding the diagonal elements

q_{ii} = -\sum_{j\neq i} q_{ij}.
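
As a concrete illustration, the following minimal Python sketch builds a rate matrix for a hypothetical three-state process (the numbers are made up, in the spirit of the figure above) and checks that each row sums to zero:

    import numpy as np

    # Hypothetical rate matrix for a three-state process; the off-diagonal
    # entries are the transition rates q_ij, and each diagonal entry is
    # minus the sum of the other entries in its row.
    Q = np.array([
        [-0.025,  0.020,  0.005],
        [ 0.300, -0.500,  0.200],
        [ 0.020,  0.400, -0.420],
    ])

    assert np.allclose(Q.sum(axis=1), 0.0)  # rows of a rate matrix sum to zero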

With this notation, and letting pt denote the row vector whose j-th entry is Pr(X(t) = j), the evolution of a continuous-time Markov process is given by the first-order differential equation

\frac{d}{dt} p_t = p_t Q.
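
Because this is a linear differential equation with constant coefficients, its solution is the matrix exponential pt = p0 e^{Qt}. A minimal sketch of evaluating it numerically, reusing the hypothetical Q above (SciPy is assumed to be available):

    from scipy.linalg import expm

    p0 = np.array([1.0, 0.0, 0.0])     # start in state 0 with certainty
    p_t = p0 @ expm(Q * 10.0)          # p_t = p_0 exp(Qt) at t = 10
    assert np.isclose(p_t.sum(), 1.0)  # p_t remains a probability vector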

The probability that no transition happens in some time r is

\Pr(X(s) = i ~\forall~ s\in(t, t+r)\, |\, X(t) = i ) = e^{q_{ii} r}.

That is, the probability distribution of the waiting time until the first transition is an exponential distribution with rate parameter −qii (recall that qii is negative), and continuous-time Markov processes are thus memoryless processes.
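
The memoryless holding times suggest a direct way to simulate a sample path: draw an exponential waiting time with rate −qii, then jump to a new state with probability proportional to the outgoing rates. The following is a minimal sketch (a hypothetical helper, not from the source; the jump probabilities anticipate the embedded Markov chain described below):

    def simulate_path(Q, state, t_end, rng=np.random.default_rng()):
        """Simulate one sample path of the process up to time t_end."""
        t, path = 0.0, [(0.0, state)]
        while True:
            exit_rate = -Q[state, state]           # total rate of leaving `state`
            if exit_rate == 0.0:                   # absorbing state: stay forever
                break
            t += rng.exponential(1.0 / exit_rate)  # exponential holding time
            if t >= t_end:
                break
            jump_probs = Q[state].clip(min=0.0) / exit_rate
            state = rng.choice(len(jump_probs), p=jump_probs)
            path.append((t, state))
        return path

For example, simulate_path(Q, 0, 100.0) returns the (jump time, new state) pairs of one realization started in state 0.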

A time-dependent (time-inhomogeneous) Markov process is a Markov process as above, but with the transition rates being functions of time, denoted qij(t).

Embedded Markov chain

One method of finding the stationary probability distribution, π, of an ergodic continuous-time Markov process with transition rate matrix Q is by first finding its embedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain, sometimes referred to as a jump process. Each element of the one-step transition probability matrix of the EMC, S, is denoted by sij and represents the conditional probability of transitioning from state i into state j. These conditional probabilities may be found by


s_{ij} = \begin{cases}
\frac{q_{ij}}{\sum_{k \neq i} q_{ik}} & \text{if } i \neq j \\
0 & \text{otherwise}.
\end{cases}

From this, S may be written as

S = I - D_Q^{-1} Q,

where DQ = diag(Q) is the diagonal matrix formed from the diagonal entries of Q.
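
Numerically, this construction is a couple of lines (continuing the hypothetical Q above; it assumes no diagonal entry of Q is zero, i.e. no absorbing state):

    D_Q = np.diag(np.diag(Q))                        # diagonal part of Q
    S = np.eye(Q.shape[0]) - np.linalg.inv(D_Q) @ Q  # EMC transition matrix
    # Off-diagonal entries: s_ij = q_ij / (-q_ii); diagonal entries are zero.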

To find the stationary probability distribution vector, we must next find φ such that

\varphi (I - S) = 0,

where φ is a row vector such that all elements of φ are greater than 0 and ||φ||1 = 1, and the 0 on the right-hand side is a row vector of zeros. From this, π may be found as

\pi = \frac{-\varphi D_Q^{-1}}{\left\| \varphi D_Q^{-1} \right\|_1}.

Note that S may be periodic even though the continuous-time process is not. The division by the 1-norm in the formula above normalizes π so that its entries sum to one.
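
Putting the steps together, a minimal sketch of the whole procedure (continuing the hypothetical Q, D_Q, and S above; ergodicity is assumed so that the eigenvector is unique):

    # phi is the left eigenvector of S with eigenvalue 1, i.e. phi (I - S) = 0.
    eigvals, eigvecs = np.linalg.eig(S.T)
    phi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    phi = phi / phi.sum()            # normalize so that ||phi||_1 = 1

    pi = -phi @ np.linalg.inv(D_Q)   # reweight states by mean holding times
    pi = pi / pi.sum()               # normalize to a probability distribution
    assert np.allclose(pi @ Q, 0.0)  # stationarity: pi Q = 0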

Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton—the (discrete-time) Markov chain formed by observing X(t) at intervals of δ units of time. The random variables X(0), X(δ), X(2δ), ... give the sequence of states visited by the δ-skeleton.
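
The one-step transition matrix of the δ-skeleton is the matrix exponential e^{δQ}, so it can be formed in one line (continuing the sketch above, with a made-up δ):

    delta = 0.5
    P_skeleton = expm(Q * delta)  # (i, j) entry: Pr(X(t + delta) = j | X(t) = i)
    assert np.allclose(P_skeleton.sum(axis=1), 1.0)  # each row is a distribution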

Applications

Markov processes are used to describe physical processes where a system evolves continuously in time. Sometimes, rather than a single system, they are applied to an ensemble of identical, independent systems, and the probabilities are used to find how many members of the ensemble are in a given state on average.

Chemical reactions

Imagine a large number n of molecules in solution in state A, each of which can undergo a chemical reaction to state B at a certain average rate. Perhaps the molecule is an enzyme, and the states refer to how it is folded. The state of any single enzyme follows a Markov process, and since the molecules are essentially independent of each other, the expected number of molecules in state A or B at any time is n times the probability that a given molecule is in that state.
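
A minimal sketch of this ensemble calculation for a hypothetical two-state molecule (the rates k1 and k2, the ensemble size, and the observation time are all made up):

    k1, k2 = 2.0, 1.0                          # hypothetical A -> B and B -> A rates
    Q2 = np.array([[-k1,  k1],
                   [ k2, -k2]])
    n = 10_000                                 # ensemble of independent molecules
    p = np.array([1.0, 0.0]) @ expm(Q2 * 1.0)  # state distribution at time t = 1
    expected_in_A = n * p[0]                   # expected number of molecules in state A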

Queueing theory

As a simple example, imagine a first-come, first-served queue in which the jobs have an exponential size distribution with mean size μ and arrive according to a Poisson process with rate λ (the M/M/1 queue of queueing theory). Then the number of jobs in the queue is a continuous-time Markov process on the non-negative integers. The transition rate from one number to the next higher number (meaning a new job arrives within the next instant) is λ, and the transition rate to the next lower number (meaning a job completes in the next instant) is 1/μ.
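
A sketch of this queue's rate matrix, truncated at a finite capacity N so that it fits in a matrix (the true chain lives on all non-negative integers; the values of λ, μ, and N are made up):

    lam, mu = 0.8, 1.0              # arrival rate and mean job size
    N = 50                          # truncation level for the sketch
    Qq = np.zeros((N + 1, N + 1))
    for i in range(N):
        Qq[i, i + 1] = lam          # a new job arrives: i -> i + 1
        Qq[i + 1, i] = 1.0 / mu     # a job completes: i + 1 -> i
    np.fill_diagonal(Qq, -Qq.sum(axis=1))
    # When lam * mu < 1 the stationary queue length is geometric:
    # pi_k = (1 - rho) * rho**k with rho = lam * mu.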

References

  1. Parzen, E. (1962). Stochastic Processes. Holden-Day. ISBN 0-8162-6664-6.
  2. Syski, R. (1992). Passage Times for Markov Chains. IOS Press. ISBN 90-5199-060-X.
  • Stewart, William J. (1994). Introduction to the Numerical Solution of Markov Chains. Princeton University Press. pp. 17–23. ISBN 0-691-03699-3.
