
Continuous Time Markov Chains

In a continuous time Markov chain, state transitions may occur at any time, and the time between transitions is exponentially distributed. Since the exponential distribution is memoryless, the future evolution of the process depends only on the present state; it does not depend on how long the process has been in that state or on any of the previous states.
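
To see why, note that for an exponentially distributed time $T$ with rate $\lambda$, the remaining time is unaffected by the time already elapsed:

\begin{displaymath}
Pr\{T > s+t \mid T > s\} = \frac{Pr\{T > s+t\}}{Pr\{T > s\}}
= \frac{e^{-\lambda (s+t)}}{e^{-\lambda s}} = e^{-\lambda t} = Pr\{T > t\}
\end{displaymath}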

We denote the state of the system at time $t$ as $X(t)$. The state probability $p_{j}(t)$ is the probability that the system is in state $j$ at time $t$:

\begin{displaymath}
p_{j}(t) = Pr\{X(t)=j\}
\end{displaymath}

The steady-state or limiting probability of being in state $j$ is:

\begin{displaymath}
p_{j} = \lim_{t \rightarrow \infty} p_{j}(t)
\end{displaymath}

The corresponding steady-state vector is:

\begin{displaymath}
\vec{p} =
\left[
\begin{array}{cccc} p_{0} & p_{1} & p_{2} & \cdots \end{array} \right]
\end{displaymath}

For a continuous time Markov chain, we can define an intensity matrix or rate matrix, $Q$. For $i \neq j$, the element $q_{ij}$ of $Q$ gives the rate of transitions from state $i$ to state $j$; that is, the time until a transition to state $j$, given that the process is in state $i$, is exponentially distributed with rate parameter $q_{ij}$. The diagonal elements are defined so that each row of $Q$ sums to zero: $q_{ii} = -\sum_{j \neq i} q_{ij}$.
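
For example (a hypothetical two-state chain, used here only for illustration), consider a machine that fails at rate $\lambda$ and is repaired at rate $\mu$. With state 0 meaning ``working'' and state 1 meaning ``failed'', the rate matrix is:

\begin{displaymath}
Q =
\left[
\begin{array}{cc} -\lambda & \lambda \\ \mu & -\mu \end{array} \right]
\end{displaymath}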

The steady-state probabilities can be found from $Q$ using:

\begin{displaymath}
\vec{p} \, Q = \vec{0}
\end{displaymath}

and

\begin{displaymath}
\sum_{i} p_{i} = 1
\end{displaymath}