Graphs and Markov chains


Graphs

Undirected Graphs:

As an example, consider an undirected graph with six nodes.

The adjacency matrix, A, for undirected graphs is always symmetric and is defined as:

$$a_{ij} = \begin{cases} 1 & \text{if } (\text{node}_i, \text{node}_j) \text{ are connected} \\ 0 & \text{otherwise}, \end{cases}$$

where $a_{ij}$ is the $(i,j)$ element of $A$. The adjacency matrix which describes the example graph above is:

$$A = \begin{bmatrix}
0 & 1 & 1 & 1 & 0 & 0 \\
1 & 1 & 0 & 0 & 1 & 0 \\
1 & 0 & 0 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 & 1 & 1 \\
0 & 1 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 1
\end{bmatrix}.$$
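
For concreteness, the following sketch builds this adjacency matrix from the graph's edge list and verifies the symmetry property. (Python with NumPy is assumed here; the notes themselves do not prescribe a language.)

```python
import numpy as np

# Edge list of the example undirected graph (0-indexed);
# (1, 1) and (5, 5) are self-loops.
edges = [(0, 1), (0, 2), (0, 3), (1, 1), (1, 4),
         (2, 3), (3, 4), (3, 5), (5, 5)]

n = 6
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1
    A[j, i] = 1   # undirected: record each edge in both directions

print(A)
print(np.array_equal(A, A.T))   # True -- the matrix is symmetric
```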

Directed Graphs:

As an example, consider a directed graph with six nodes.

The adjacency matrix, A, for directed graphs is defined as:

$$a_{ij} = \begin{cases} 1 & \text{if } \text{node}_i \leftarrow \text{node}_j \\ 0 & \text{otherwise}, \end{cases}$$

where $a_{ij}$ is the $(i,j)$ element of $A$, i.e., $a_{ij} = 1$ when there is a directed edge from node $j$ to node $i$. With this convention, column $j$ records the outgoing edges of node $j$. The adjacency matrix which describes the example graph above is:

$$A = \begin{bmatrix}
0 & 0 & 0 & 1 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 1
\end{bmatrix}.$$
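
A similar sketch (again assuming NumPy) builds the directed example's adjacency matrix. It is no longer symmetric, and with the edge-from-column-$j$ convention above, the column sums give out-degrees:

```python
import numpy as np

# Directed edges (from_node, to_node), 0-indexed, read off the matrix above;
# an edge j -> i sets entry (i, j), so column j lists node j's outgoing edges.
edges = [(3, 0), (0, 1), (1, 1), (0, 2), (2, 3),
         (4, 3), (1, 4), (3, 5), (5, 5)]

n = 6
A = np.zeros((n, n), dtype=int)
for j, i in edges:
    A[i, j] = 1

print(np.array_equal(A, A.T))   # False -- direction matters
print(A.sum(axis=0))            # out-degree of each node (column sums)
print(A.sum(axis=1))            # in-degree of each node (row sums)
```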

Weighted Directed Graphs:

As an example, consider a weighted directed graph with six nodes (the same connectivity as the directed graph above, now with a weight on each edge).

The adjacency matrix, A, for weighted directed graphs is defined as:

$$a_{ij} = \begin{cases} w_{ij} & \text{if } \text{node}_i \leftarrow \text{node}_j \\ 0 & \text{otherwise}, \end{cases}$$

where $a_{ij}$ is the $(i,j)$ element of $A$, and $w_{ij}$ is the link weight associated with the edge from node $j$ to node $i$. The adjacency matrix which describes the example graph above is:

$$A = \begin{bmatrix}
0 & 0 & 0 & 0.4 & 0 & 0 \\
0.1 & 0.5 & 0 & 0 & 0 & 0 \\
0.9 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1.0 & 0 & 1.0 & 0 \\
0 & 0.5 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0.6 & 0 & 1.0
\end{bmatrix}.$$

Typically, when we discuss weighted directed graphs, it is in the context of transition matrices for Markov chains, where the link weights in each column sum to 1.
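
The sketch below (assuming NumPy) verifies this column-sum property for the weighted matrix above, and shows how a general non-negative weighted adjacency matrix could be column-normalized into a transition matrix:

```python
import numpy as np

# Weighted adjacency matrix of the example graph above.
A = np.array([[0,   0,   0,   0.4, 0,   0  ],
              [0.1, 0.5, 0,   0,   0,   0  ],
              [0.9, 0,   0,   0,   0,   0  ],
              [0,   0,   1.0, 0,   1.0, 0  ],
              [0,   0.5, 0,   0,   0,   0  ],
              [0,   0,   0,   0.6, 0,   1.0]])

print(np.allclose(A.sum(axis=0), 1.0))   # True: every column sums to 1

# Any matrix with positive column sums can be column-normalized this way:
M = A / A.sum(axis=0, keepdims=True)
```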

Markov Chain

A Markov chain is a stochastic model in which the probability of the future (next) state depends only on the most recent (current) state. This memoryless property of a stochastic process is called the Markov property. From a probability perspective, the Markov property implies that the conditional probability distribution of the future state (conditioned on both past and current states) depends only on the current state.

Markov Matrix

A Markov/Transition/Stochastic matrix is a square matrix used to describe the transitions of a Markov chain. Each of its entries is a non-negative real number representing a probability. By the Markov property, the next state vector $x_{k+1}$ is obtained by left-multiplying the current state vector $x_k$ by the Markov matrix $M$:

$$x_{k+1} = M x_k.$$

In this course, unless specifically stated otherwise, we define the transition matrix $M$ as a left Markov matrix, where each column sums to 1.

Note: alternative definitions in outside resources may present $M$ as a right Markov matrix, where each row of $M$ sums to 1 and the next state is obtained by right-multiplying by $M$, i.e., $x_{k+1}^T = x_k^T M$.
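
The two conventions contain the same information: the right Markov matrix for a chain is just the transpose of its left Markov matrix. A small numerical check (assuming NumPy; the 2-state matrix is a made-up illustration):

```python
import numpy as np

M = np.array([[0.9, 0.5],     # left Markov matrix: columns sum to 1
              [0.1, 0.5]])
x = np.array([0.3, 0.7])      # current state (probability) vector

x_next_left = M @ x           # left convention:  x_{k+1} = M x_k

R = M.T                       # corresponding right Markov matrix: rows sum to 1
x_next_right = x @ R          # right convention: x_{k+1}^T = x_k^T R

print(np.allclose(x_next_left, x_next_right))   # True
```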

A steady state vector $x$ is a probability vector (its entries are non-negative and sum to 1) that is unchanged by multiplication with the Markov matrix $M$, i.e.

$$Mx = x.$$

Therefore, the steady state vector $x$ is an eigenvector of $M$ corresponding to the eigenvalue $\lambda = 1$. If there is more than one linearly independent eigenvector with $\lambda = 1$, then a weighted sum of the corresponding steady state vectors (renormalized so it remains a probability vector) will also be a steady state vector. Therefore, the steady state vector of a Markov chain may not be unique and could depend on the initial state vector.
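
Numerically, a steady state vector can be obtained from an eigendecomposition by selecting the eigenvector for the eigenvalue closest to 1 and rescaling it into a probability vector. A sketch assuming NumPy (the 3-state matrix here is the weather matrix from the example below):

```python
import numpy as np

M = np.array([[0.6, 0.2, 0.3],
              [0.1, 0.4, 0.3],
              [0.3, 0.4, 0.4]])

w, v = np.linalg.eig(M)
k = np.argmin(np.abs(w - 1.0))   # index of the eigenvalue closest to 1
x = np.real(v[:, k])
x = x / x.sum()                  # rescale so the entries sum to 1

print(x)                         # steady state vector
print(np.allclose(M @ x, x))     # True: unchanged by M
```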

Markov Chain Example

Suppose we want to build a Markov chain model for predicting the weather at UIUC during the summer. By observing the weather on many consecutive days, we estimate, for each of three weather states, the probability of transitioning to each state on the following day.

The corresponding Markov matrix, whose column $j$ holds the probabilities of tomorrow's weather given that today's weather is state $j$, is

$$M = \begin{bmatrix}
0.6 & 0.2 & 0.3 \\
0.1 & 0.4 & 0.3 \\
0.3 & 0.4 & 0.4
\end{bmatrix}.$$

If the weather on day 0 is known to be rainy (state 2), then

$$x_0 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix};$$

and we can determine the probability vector for day 1 by

$$x_1 = M x_0.$$

The probability distribution for the weather on day $n$ is given by

$$x_n = M^n x_0.$$
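
A sketch (assuming NumPy) that carries the weather example forward in time; because this particular $M$ has all positive entries, $x_n$ converges to the steady state vector regardless of the day-0 weather:

```python
import numpy as np

M = np.array([[0.6, 0.2, 0.3],
              [0.1, 0.4, 0.3],
              [0.3, 0.4, 0.4]])
x = np.array([0.0, 1.0, 0.0])   # day 0: rainy (state 2)

for day in range(1, 11):
    x = M @ x                   # x_{day} = M x_{day-1}
    print(day, x)

# After enough days, x stops changing: it has reached the steady state.
```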
