14.5 Equilibrium Conditions

Looking at the Tolsky machine example, it is easy to think that eventually all market shares or state probabilities will be either 0 or 1. This is usually not the case: market shares or state probabilities normally settle at intermediate equilibrium values. These probabilities are called steady-state probabilities or equilibrium probabilities.

One way to compute the equilibrium share of the market is to carry out the Markov analysis for a large number of periods and see whether the values approach a stable level. For example, we can repeat the Markov analysis for Tolsky's machine for 15 periods. This is not too difficult to do by hand, and the results appear in Table 14.1.

The machine starts off functioning correctly (in state 1) in the first period. In period 5, there is only a 0.4934 probability that the machine is still functioning correctly, and by period 10, this probability is only 0.360235. In period 15, the probability that the machine is still functioning correctly is about 0.34. The probability that the machine will be functioning correctly at a future period is decreasing—but it is decreasing at a decreasing rate. What would you expect in the long run? If we made these calculations for 100 periods, what would happen? Would there be an equilibrium in this case? If the answer is yes, what would it be? Looking at Table 14.1, it appears that there will be an equilibrium at 0.333333, or 1/3. But how can we be sure?
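The repeated multiplication behind Table 14.1 is easy to reproduce. As a sketch (this code is an illustration added here, not part of the original example's solution procedure), the following Python carries out π(n + 1) = π(n)P for 15 periods, starting with the machine functioning correctly:

```python
# Transition matrix for Tolsky's machine:
# state 1 = functioning correctly, state 2 = not functioning correctly
P = [[0.8, 0.2],
     [0.1, 0.9]]

# Period 1: the machine starts in state 1 with certainty
pi = [1.0, 0.0]

for period in range(2, 16):
    # pi(next) = pi(current) * P  (row vector times matrix)
    pi = [pi[0] * P[0][0] + pi[1] * P[1][0],
          pi[0] * P[0][1] + pi[1] * P[1][1]]
    print(f"Period {period:2d}: state 1 = {pi[0]:.6f}, state 2 = {pi[1]:.6f}")
```

The printed probabilities drift toward 1/3 and 2/3 but never reach 0 or 1, matching the pattern in Table 14.1.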

By definition, an equilibrium condition exists if the state probabilities or market shares do not change after a large number of periods. Thus, at equilibrium, the state probabilities for a future period must be the same as the state probabilities for the current period. This fact is the key to solving for the steady-state probabilities. This relationship can be expressed as follows:

From Equation 14-4, it is always true that

π(Next period)=π(This period)P

or

π(n+1)=π(n)P

At equilibrium, we know that

π(n+1)=π(n)

Therefore, at equilibrium,

π(n+1)=π(n)P=π(n)

So

π(n)=π(n)P

or, dropping the n term,

π=πP
(14-6)

Equation 14-6 states that at equilibrium, the state probabilities for the next period are the same as the state probabilities for the current period. For Tolsky's machine, this can be expressed as follows:

π = πP

(π1, π2) = (π1, π2) [0.8  0.2]
                    [0.1  0.9]

Using matrix multiplication, we get

(π1, π2)=[(π1)(0.8)+(π2)(0.1), (π1)(0.2)+(π2)(0.9)]

The first term on the left-hand side, π1, is equal to the first term on the right-hand side (π1)(0.8)+(π2)(0.1). In addition, the second term on the left-hand side, π2, is equal to the second term on the right-hand side (π1)(0.2)+(π2)(0.9). This gives us the following:

π1=0.8π1+0.1π2
(a)
π2=0.2π1+0.9π2
(b)

We also know that the state probabilities—π1 and π2, in this case—must sum to 1. (Looking at Table 14.1, you can see that π1 and π2 sum to 1 for all 15 periods.) We can express this property as follows:

π1 + π2 + ⋯ + πn = 1
(c)

Table 14.1 State Probabilities for the Machine Example for 15 Periods

PERIOD    STATE 1     STATE 2
  1       1.000000    0.000000
  2       0.800000    0.200000
  3       0.660000    0.340000
  4       0.562000    0.438000
  5       0.493400    0.506600
  6       0.445380    0.554620
  7       0.411766    0.588234
  8       0.388236    0.611763
  9       0.371765    0.628234
 10       0.360235    0.639764
 11       0.352165    0.647834
 12       0.346515    0.653484
 13       0.342560    0.657439
 14       0.339792    0.660207
 15       0.337854    0.662145

For Tolsky’s machine, we have

π1+π2=1
(d)

Now we have three equations for the machine (a, b, and d). Equation d must hold, so we can drop either Equation a or Equation b and solve the remaining two equations for π1 and π2; dropping one equation leaves us with two equations in two unknowns. If we were solving for equilibrium conditions involving three states, we would start with four equations and, again, drop one so that we end up with three equations in three unknowns. In general, when solving for equilibrium conditions, it is always necessary to drop one equation so that the number of equations equals the number of variables being solved for. We can drop one of the equations because they are mathematically interrelated: one of the equations is redundant in specifying the relationships among the equilibrium equations.
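This drop-one-equation procedure can be sketched in Python (an illustration assuming NumPy is available, not part of the original text): write the balance equations from πP = π as (Pᵀ − I)πᵀ = 0, replace one redundant row with the normalization constraint, and solve the resulting square system.

```python
import numpy as np

# Transition matrix for Tolsky's machine
P = np.array([[0.8, 0.2],
              [0.1, 0.9]])

n = P.shape[0]

# pi P = pi  is equivalent to  (P^T - I) pi^T = 0
A = P.T - np.eye(n)

# The balance equations are redundant, so replace the last one
# with the normalization constraint pi_1 + ... + pi_n = 1.
A[-1, :] = np.ones(n)
b = np.zeros(n)
b[-1] = 1.0

pi = np.linalg.solve(A, b)
print(pi)  # steady-state probabilities
```

For this matrix the solver returns approximately [0.3333, 0.6667], agreeing with the hand calculation that follows; the same code handles three or more states without modification.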

Let us arbitrarily drop Equation a. Thus, we will be solving the following two equations:

π2 = 0.2π1 + 0.9π2

π1 + π2 = 1

Rearranging the first equation, we get

0.1π2=0.2π1

or

π2=2π1

Substituting this into Equation d, we have

π1+π2=1

or

π1+2π1=1

or

3π1 = 1

π1 = 1/3 = 0.33333333

Thus,

π2=2/3=0.66666667

Compare these results with Table 14.1. As you can see, the steady-state probability for state 1 is 0.33333333, and the equilibrium state probability for state 2 is 0.66666667; these are the values the table was approaching. This analysis shows that only the matrix of transition probabilities is needed to determine the equilibrium market shares. The initial values for the state probabilities or the market shares do not influence the equilibrium state probabilities. The analysis for determining equilibrium state probabilities or market shares is the same when there are more states. If there are three states (as in the grocery store example), we have to solve three equations for the three equilibrium states; if there are four states, we have to solve four simultaneous equations for the four unknown equilibrium values, and so on.

You may wish to prove to yourself that the equilibrium states we have just computed are, in fact, equilibrium states. This can be done by multiplying the equilibrium states by the original matrix of transition probabilities. The results will be the same equilibrium states. Performing this analysis is also an excellent way to check your answers to end-of-chapter problems or examination questions.
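This verification step takes only a few lines of plain Python (again an added sketch, using the Tolsky matrix from this section): multiply the candidate equilibrium vector by the original transition matrix and confirm it comes back unchanged.

```python
# Original transition matrix and the computed equilibrium states
P = [[0.8, 0.2],
     [0.1, 0.9]]
pi = [1/3, 2/3]

# One step of pi * P; at equilibrium this must reproduce pi itself
next_pi = [pi[0] * P[0][0] + pi[1] * P[1][0],
           pi[0] * P[0][1] + pi[1] * P[1][1]]

assert all(abs(a - b) < 1e-12 for a, b in zip(next_pi, pi))
print("equilibrium confirmed:", next_pi)
```

If a proposed vector is not a true equilibrium, the assertion fails, which makes this a quick check for end-of-chapter problems.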
