The following is an example of an undirected graph:
The adjacency matrix, $A$, for undirected graphs is always symmetric and is defined as:

$$a_{ij} = \begin{cases} 1 & \text{if } (\text{node}_i, \text{node}_j) \text{ are connected} \\ 0 & \text{otherwise,} \end{cases}$$

where $a_{ij}$ is the $(i,j)$ element of $A$. The adjacency matrix which describes the example graph above is:
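Below is a minimal NumPy sketch of this construction; the 4-node graph and its edge list are made-up illustrations, not the example in the figure above.

```python
import numpy as np

# Made-up example: a 4-node undirected graph with
# edges (0,1), (0,2), (1,2), (2,3).
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4

A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = 1
    A[j, i] = 1  # undirected: store each edge in both directions

print(A)
print(np.array_equal(A, A.T))  # True: always symmetric
```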
The following is an example of a directed graph:
The adjacency matrix, $A$, for directed graphs is defined as:

$$a_{ij} = \begin{cases} 1 & \text{if there is a link from } \text{node}_j \text{ to } \text{node}_i \\ 0 & \text{otherwise,} \end{cases}$$

where $a_{ij}$ is the $(i,j)$ element of $A$. The adjacency matrix which describes the example graph above is:
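The same construction works for a directed graph, except each link is stored in only one direction. The 3-node graph below is again a made-up illustration, following the column convention defined above (a link from node $j$ to node $i$ sets $a_{ij} = 1$):

```python
import numpy as np

# Made-up example: a 3-node directed graph with links
# 0->1, 1->2, 2->0, 0->2, stored as (source, destination) pairs.
links = [(0, 1), (1, 2), (2, 0), (0, 2)]
n = 3

A = np.zeros((n, n))
for j, i in links:   # a link j -> i sets A[i, j] = 1
    A[i, j] = 1

print(A)
print(np.array_equal(A, A.T))  # generally False for directed graphs
```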
The following is an example of a weighted directed graph:
The adjacency matrix, $A$, for weighted directed graphs is defined as:

$$a_{ij} = \begin{cases} w_{ij} & \text{if there is a link from } \text{node}_j \text{ to } \text{node}_i \\ 0 & \text{otherwise,} \end{cases}$$

where $a_{ij}$ is the $(i,j)$ element of $A$, and $w_{ij}$ is the link weight associated with the edge connecting nodes $i$ and $j$. The adjacency matrix which describes the example graph above is:
Typically, when we discuss weighted directed graphs it is in the context of transition matrices for Markov chains, where the link weights in each column sum to 1.
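One common way to obtain such a matrix is to normalize each column of a non-negative weighted adjacency matrix. A minimal sketch, with made-up weights:

```python
import numpy as np

# Made-up non-negative weighted adjacency matrix.
A = np.array([[0.0, 2.0, 1.0],
              [3.0, 0.0, 1.0],
              [1.0, 2.0, 0.0]])

M = A / A.sum(axis=0)   # divide each column by its column sum
print(M.sum(axis=0))    # [1. 1. 1.] -> each column sums to 1
```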
A Markov chain is a stochastic model where the probability of the future (next) state depends only on the most recent (current) state. This memoryless property of a stochastic process is called the Markov property. From a probability perspective, the Markov property implies that the conditional probability distribution of the future state (conditioned on both past and current states) depends only on the current state.
A Markov/transition/stochastic matrix is a square matrix used to describe the transitions of a Markov chain. Each of its entries is a non-negative real number representing a probability. By the Markov property, the next state vector $\mathbf{x}_{k+1}$ is obtained by left-multiplying the current state vector $\mathbf{x}_k$ by the Markov matrix $M$:

$$\mathbf{x}_{k+1} = M \mathbf{x}_k.$$
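A minimal sketch of one step of this update, assuming a made-up 2-state left Markov matrix:

```python
import numpy as np

# Made-up 2-state left Markov matrix (each column sums to 1)
# and a current state vector x_k.
M = np.array([[0.9, 0.5],
              [0.1, 0.5]])
x_k = np.array([0.3, 0.7])

x_k1 = M @ x_k              # next state: x_{k+1} = M x_k
print(x_k1, x_k1.sum())     # entries stay non-negative and sum to 1
```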
In this course, unless specifically stated otherwise, we define the transition matrix $M$ as a left Markov matrix, where each column sums to 1.
Note: Alternative definitions in outside resources may present $M$ as a right Markov matrix, where each row of $M$ sums to 1 and the next state is obtained by right-multiplying by $M$, i.e., $\mathbf{x}_{k+1}^T = \mathbf{x}_k^T M$.
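The two conventions are transposes of each other, so they produce the same next state, as the small check below illustrates (same made-up matrix as before):

```python
import numpy as np

M = np.array([[0.9, 0.5],      # left Markov matrix: columns sum to 1
              [0.1, 0.5]])
x_k = np.array([0.3, 0.7])

left  = M @ x_k                # x_{k+1} = M x_k
right = x_k @ M.T              # x_{k+1}^T = x_k^T (M^T); rows of M^T sum to 1
print(np.allclose(left, right))  # True
```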
A steady state vector $\mathbf{x}^*$ is a probability vector (its entries are non-negative and sum to 1) that is unchanged by the operation with the Markov matrix $M$, i.e.,

$$M \mathbf{x}^* = \mathbf{x}^*.$$
Therefore, the steady state vector $\mathbf{x}^*$ is an eigenvector of $M$ corresponding to the eigenvalue $\lambda = 1$. If there is more than one linearly independent eigenvector with $\lambda = 1$, then any weighted sum of the corresponding steady state vectors (with non-negative weights that sum to 1) is also a steady state vector. Hence, the steady state vector of a Markov chain may not be unique and can depend on the initial state vector.
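One way to compute a steady state vector numerically is to take the eigenvector of $M$ associated with $\lambda = 1$ and rescale it into a probability vector. A minimal sketch with a made-up matrix:

```python
import numpy as np

M = np.array([[0.9, 0.5],
              [0.1, 0.5]])

eigvals, eigvecs = np.linalg.eig(M)
k = np.argmin(np.abs(eigvals - 1.0))   # index of eigenvalue closest to 1
x_star = np.real(eigvecs[:, k])
x_star = x_star / x_star.sum()         # rescale into a probability vector

print(x_star)                            # steady state vector
print(np.allclose(M @ x_star, x_star))   # True: M x* = x*
```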
Suppose we want to build a Markov chain model for weather prediction in UIUC during the summer. We observed that:
The state diagram is shown below:
The Markov matrix is
If the weather on day 0 is known to be rainy, then
and we can determine the probability vector for day 1 by

$$\mathbf{x}_1 = M \mathbf{x}_0.$$
The probability distribution for the weather on day $n$ is given by

$$\mathbf{x}_n = M^n \mathbf{x}_0.$$
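The sketch below puts the whole example together. Since the observed probabilities and the matrix are given in the figures above, the numbers here are made-up placeholders, not the actual values from this example:

```python
import numpy as np

# Placeholder transition probabilities; state ordering: [sunny, rainy].
M = np.array([[0.8, 0.4],    # P(sunny tomorrow | sunny), P(sunny | rainy)
              [0.2, 0.6]])   # P(rainy tomorrow | sunny), P(rainy | rainy)

x0 = np.array([0.0, 1.0])    # day 0 is known to be rainy

x1 = M @ x0                               # day 1 distribution: x_1 = M x_0
xn = np.linalg.matrix_power(M, 30) @ x0   # day n distribution (n = 30)

print(x1)   # [0.4 0.6] for these placeholder numbers
print(xn)   # approaches the steady state vector as n grows
```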