Computing the Steady-State Vector of a Markov Chain
Introduction
This worksheet demonstrates the use of Maple to investigate Markov-chain models.
The following Maple techniques are highlighted:
• Creating a custom function
• Solving a specific example
• Creating a generic solution
Introduction to Markov Chains
A Markov chain is a weighted digraph representing a discrete-time system that can be in any number of discrete states. The nodes of the digraph represent the states, and the weight on the directed edge from state a to state b is the probability (called the transition probability from a to b) that the system will move to state b in the next time period, given that it is currently in state a. The sum of the transition probabilities out of any node is, by definition, 1. The transition probabilities are stored in a transition matrix P, where entry (i, j) is the transition probability from state i to state j. Consequently, each row of P sums to 1.
A well-known theorem of Markov chains states that the probability of the system being in state j after k time periods, given that the system begins in state i, is the (i, j) entry of P^k, the kth power of the transition matrix.
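For example, the following short computation (a sketch using a hypothetical two-state chain named Pdemo, not the network model studied below) shows how the k-step probabilities can be read directly from a power of the transition matrix:
>
with( LinearAlgebra ):
# A hypothetical two-state chain: state 1 = "up", state 2 = "down"
Pdemo := Matrix( [[0.9, 0.1], [0.5, 0.5]] );
# Entry (1,2) of Pdemo^3 is the probability of being "down" after 3 time periods,
# given that the system starts "up"
MatrixPower( Pdemo, 3 );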
A common question arising in Markov-chain models is, what is the long-term probability that the system will be in each state? The vector containing these long-term probabilities, denoted π, is called the steady-state vector of the Markov chain.
This Maple application creates a procedure for answering this question. As a case study, we'll analyze a two-server computer network whose servers have known probabilities of going down or being fixed in any given hour. The goal is to compute the long-term probability that at least one server is working.
Algorithm for Computing the Steady-State Vector
We create a Maple procedure called steadyStateVector that takes as input the transition matrix of a Markov chain and returns the steady state vector, which contains the long-term probabilities of the system being in each state. The input transition matrix may be in symbolic or numeric form.
The procedure steadyStateVector implements the following algorithm. Given an n x n transition matrix P, let I be the n x n identity matrix and Q = P - I. Let e be the n-vector of all 1's, and let b be the (n+1)-vector with a 1 in position n+1 and 0 elsewhere. The steady-state vector π is then the solution of the linear system

Transpose( <Q | e> ) . π = b,

that is, the n equations Transpose(Q) . π = 0 (equivalently, Transpose(P) . π = π) together with the normalizing equation π[1] + π[2] + ... + π[n] = 1.
Appending e to Q as an extra column, and a final 1 to the end of the zero vector on the right-hand side, ensures that the components of the solution vector sum to 1. The resulting system has n+1 equations in n unknowns, but it is consistent, so it can be solved with LeastSquares.
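For instance, here is a minimal sketch of the augmented system for a hypothetical two-state chain with transition probabilities p and q (the names P2state, p, and q are illustrative only and are not used elsewhere in the worksheet):
>
with( LinearAlgebra ):
# Hypothetical two-state chain: leave state 1 with probability p, leave state 2 with probability q
P2state := Matrix( [[1-p, p], [q, 1-q]] );
Q := P2state - IdentityMatrix( 2 );
e := <1, 1>;
QT := Transpose( <Q | e> );        # 3 x 2 coefficient matrix
b := UnitVector( 3, 3 );           # right-hand side <0, 0, 1>
# The exact steady-state vector is <q/(p+q), p/(p+q)>
LeastSquares( QT, b );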
Procedure Code
Here is the steadyStateVector procedure. The input is a transition matrix P, and the output is the steady-state vector reflecting the long-term probability of the system being in each state. Comments are preceded by the # symbol.
>
steadyStateVector := proc( P::Matrix )
#DECLARE LOCAL VARIABLES
local n, Q, e, i, QT, b;
#MAKE THE PROCEDURE SELF CONTAINED BY LOADING REQUIRED PACKAGES INSIDE THE PROCEDURE
use LinearAlgebra in
#EXTRACT THE DIMENSION OF THE TRANSITION MATRIX P
n := Dimension( P )[1];
#Q = P - I
Q := P - IdentityMatrix( n );
#e IS THE VECTOR OF ALL ONES
e := <seq(1, i=1..n)>;
#APPEND VECTOR e TO Q AND TRANSPOSE THE RESULT
QT := Transpose( <Q | e> );
#b IS THE UNIT VECTOR WITH 1 IN POSITION n+1
b := UnitVector( n+1, n+1 );
#SOLVES THE LINEAR SYSTEM QT*pi = b.
return LeastSquares( QT, b );
end use:
end proc:
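Before applying the procedure to the server model, here is a quick sanity check on a small, made-up 2 x 2 transition matrix (the names Ptest and piTest are illustrative only): the returned vector should have components summing to 1 and should be a left fixed point of the matrix.
>
Ptest := Matrix( [[0.8, 0.2], [0.6, 0.4]] );
piTest := steadyStateVector( Ptest );
# The components should sum to 1
add( piTest[i], i = 1 .. 2 );
# A left fixed point: Transpose(piTest) . Ptest should reproduce Transpose(piTest)
LinearAlgebra:-Transpose( piTest ) . Ptest;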
Example Application: Reliability of a Two-Server Network
The problem is to estimate the long-term probability that at least one server in a two-server computer network is working during any given hour. We'll model this problem as a Markov chain as follows:
Assume the network can be in one of three states:
1. Neither server is working
2. Exactly one server is working
3. Both servers are working
Let:
• λ[1] be the probability that the second server fails when one was okay an hour ago
• λ[2] be the probability that a server fails when both were okay an hour ago
• μ[1] be the probability that a broken server gets fixed when both were down an hour ago
• μ[2] be the probability that a broken server gets fixed when one was okay an hour ago
The transition matrix is then the following:
>
P := Matrix( [ [1-mu[1], mu[1], 0],
[lambda[1], 1-(lambda[1]+mu[2]), mu[2]],
[0, lambda[2], 1-lambda[2]] ] );
Note that the steadyStateVector procedure computes π symbolically. Numerical values for λ and μ are not required.
>
pi := steadyStateVector( P );
π is currently a symbolic vector. Let's supply some example values:
>
lambda[1]:=1/400; lambda[2]:=1/800; mu[1]:=1/4; mu[2]:=1/8;
Ask Maple for the value of π again, and the updated vector is returned, reflecting the inputs above. By inspection, the vector is stochastic; this is confirmed below by summing its components.
>
pi := steadyStateVector( P );
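One way to confirm this is to sum the components of π, which should return 1:
>
# The components of a stochastic vector sum to 1
add( pi[i], i = 1 .. 3 );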
The long-term probability that at least one server is operable in any given hour is the sum of the last two components of π.
>
probWorking := pi[2] + pi[3];
To obtain a 10-digit floating-point approximation to this probability, use the evalf command.
>
evalf( probWorking, 10 );
Unassign the current values of lambda and mu so that the transition matrix is symbolic again.
>
lambda := 'lambda'; mu := 'mu';
Let's generalize this Markov chain, allowing for the possibility of both servers going down at once or both being repaired at once.
>
P := Matrix( [ [1-(mu[1]+mu[3]), mu[1], mu[3]],
[lambda[1], 1-(lambda[1]+mu[2]), mu[2]],
[lambda[3], lambda[2], 1-(lambda[2]+lambda[3])] ] );
>
pi := steadyStateVector( P );
What about a purely numeric matrix?
>
P2 := Matrix( [[1/4, 1/2, 1/4], [1/3, 1/3, 1/3], [1/4, 1/2, 1/4]]);
>
pi := steadyStateVector( P2 );
>
P3 := Matrix( [[.25, .45, .3], [.13, .33, .54], [.2, .6, .2]]);
>
pi := steadyStateVector( P3 );
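As a final check, the computed vector should be a left fixed point of P3, up to floating-point round-off:
>
# Transpose(pi) . P3 should reproduce Transpose(pi), up to round-off
LinearAlgebra:-Transpose( pi ) . P3;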