**The Markov Chain**

A Markov chain is a discrete random process that exhibits the Markov property. The process moves randomly among a finite number of “states” or “positions.” The Markov property states that the probability of the next event or “state” depends only on the current state. Namely:

P[X(n+1) = x | X(1) = x1, X(2) = x2, …, X(n) = xn] = P[X(n+1) = x | X(n) = xn]

For those not familiar with probability-speak, this says that the probability of event x given the outcomes x1, x2, …, xn (all events up to the current one) is equal to the probability of event x given only xn (the current event). In other words, no previous event except the most recent one matters.

Let’s take the example of weather. We will use discrete-time measurements to make this simpler. Let’s say you wake up every day, and at exactly 8:00 AM you note the weather. Also, let’s say you only care whether it is sunny or rainy. Given this information, you want to predict what tomorrow’s weather will be like. After several months of careful observation, you construct a model that estimates the probabilities of transitioning between our two states: sunny or rainy. We will call “sunny” state 1 and “rainy” state 2.

We start with P[“Tomorrow is sunny” given that “Today is sunny”], which can be re-written:

a11 = P[W(n+1) = sunny | W(n) = sunny] = 0.8

Following that notation, we write the other probabilities:

a12 = P[W(n+1) = rainy | W(n) = sunny] = 0.2

a21 = P[W(n+1) = sunny | W(n) = rainy] = 0.6

a22 = P[W(n+1) = rainy | W(n) = rainy] = 0.4

Our “a” indicates a probability of state transition. “a11” is the probability of transitioning from state 1 to state 1. “a21” is the probability of transitioning from state 2 to state 1. Note that because this model has the Markov property, only today’s weather matters in trying to predict tomorrow’s weather. From this, we can construct the state transition probability matrix A:

A = [ a11 a12 ] = [ 0.8 0.2 ]
    [ a21 a22 ]   [ 0.6 0.4 ]

Notice that summing across each row yields 1. One of the laws of probability states that the probabilities of all possible outcomes must sum to 1 (i.e. a 100% chance that something occurs). If it is rainy today, there are only 2 possibilities for tomorrow: sunny or rainy. The chain can also be drawn as a state diagram, with one node per state and arrows labeled by the transition probabilities.
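The weather chain described above can be simulated directly from the transition matrix. Here is a minimal sketch in Python (the function names and the 0/1 state labels are mine, not from the post):

```python
import random

# Transition probabilities from the post: state 0 = sunny, state 1 = rainy.
A = [[0.8, 0.2],
     [0.6, 0.4]]

def next_state(current, rng):
    """Pick tomorrow's state using only today's state (the Markov property)."""
    return 0 if rng.random() < A[current][0] else 1

def simulate(days, start=0, seed=42):
    """Run the chain for `days` steps and return the visited states."""
    rng = random.Random(seed)
    states = [start]
    for _ in range(days):
        states.append(next_state(states[-1], rng))
    return states

states = simulate(100_000)
frac_sunny = sum(s == 0 for s in states) / len(states)
print(frac_sunny)  # should land near 0.75 for a long run
```

Over a long run, the fraction of sunny days should approach the steady-state value derived later in the post.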

If we wanted to know the long-run percentage of days that are sunny or rainy, we can find the steady-state vector q, which satisfies:

qA = q, with the entries of q summing to 1.

This can be approximated by raising the state transition probability matrix to a large power, for example A^100. This yields approximately:

A^100 ≈ [ 0.75 0.25 ]
        [ 0.75 0.25 ]

Each row approaches the steady-state vector q = [0.75 0.25], which indicates that 75% of the days will be sunny and 25% of the days will be rainy.
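As a sanity check, the matrix-power approximation is a few lines with NumPy (a sketch; the post itself suggests MATLAB for this):

```python
import numpy as np

# State transition matrix: row 0 = sunny today, row 1 = rainy today.
A = np.array([[0.8, 0.2],
              [0.6, 0.4]])

# Raise A to a large power; every row converges to the steady-state vector.
A100 = np.linalg.matrix_power(A, 100)
print(A100)  # both rows approach [0.75, 0.25]
```

The second eigenvalue of A is 0.2, so the rows converge to the steady state extremely quickly; 100 multiplications is far more than enough.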

Markov chains see wide use in many applications, including statistics, economics, physics, chemistry, biology, queuing theory, and as we will see, cognitive radio.

I’ve always found the other method of calculating the steady-state probabilities pretty interesting too.

qP = q

qP = qI

q(P-I) = 0

You end up with n simultaneous equations in n unknowns (plus the constraint that the probabilities must sum to 1).
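The linear-system approach sketched above can be carried out numerically as well. A sketch with NumPy (using the commenter's P for the transition matrix; the variable names are mine): since the n equations from q(P − I) = 0 are linearly dependent, one of them is replaced with the normalization constraint.

```python
import numpy as np

# Transition matrix from the post (P in the commenter's notation).
P = np.array([[0.8, 0.2],
              [0.6, 0.4]])

n = P.shape[0]
# qP = q is equivalent to (P^T - I) q^T = 0. Those n equations are linearly
# dependent, so drop one and substitute the constraint sum(q) = 1.
M = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
b = np.zeros(n)
b[-1] = 1.0

q = np.linalg.solve(M, b)
print(q)  # exact steady state: [0.75, 0.25]
```

Unlike the matrix-power approach, this gives the exact steady state in one solve rather than an approximation.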

PS, your sunny/rainy transition probabilities are way off for Seattle.

The other method is a lot nicer for getting the exact answer, as opposed to an approximation. But raising the matrix to some large power is easier to visualize and to plug into MATLAB.

And you are correct, I wasn’t modeling Seattle. I imagine the probabilities for sunny/rainy would be flipped :P