We consider multirate loss models of randomly arriving calls with fixed bandwidth requirements and elastic bandwidth allocation during service. We consider two types of calls, elastic and adaptive. Elastic calls, which can reduce their bandwidth while simultaneously increasing their service time, compose the so-called elastic traffic (e.g., file transfer). Adaptive calls, which can tolerate bandwidth compression but whose service time cannot be altered, compose the so-called adaptive traffic (e.g., adaptive video).
In the elastic EMLM (E-EMLM), we consider a link of capacity C b.u. that accommodates elastic calls of different service-classes. Calls of service-class k arrive at the link according to a Poisson process and request a peak-bandwidth of b_k b.u. To introduce bandwidth compression, we permit the occupied link bandwidth j to virtually exceed C, up to a limit of T b.u. Suppose that a new call of service-class k arrives at the link while the link is in (macro-)state j. Then, for call admission, we consider three cases [1]:
When the occupied link bandwidth would exceed C (but not T), the compressed bandwidth of the newly accepted call of service-class k is given by [1]:
where denotes the compression factor.
Since the occupied link bandwidth is j = Σk nk bk, where nk is the number of in-service calls of service-class k (in the steady state), the values of the compressed bandwidth can be expressed by:
To keep the product (bandwidth) × (service time) per call constant when bandwidth compression occurs, the mean service time of the new service-class k call becomes:
The compressed bandwidth of all in-service calls of every service-class changes accordingly. Similarly, their remaining service time increases by the corresponding factor. The minimum bandwidth given to a service-class k call is:
where the minimum proportion of the peak-bandwidth requirement, determined through T, is common for all service-classes.
Note that increasing T decreases the CBP but increases the delay (service time) of service-class calls (compared to the initial service time). Thus, T should be chosen so that this delay remains within acceptable levels.
When an in-service call with compressed bandwidth completes its service and departs from the system, the remaining in-service calls expand their bandwidth in proportion to their peak-bandwidth requirements, as follows:
In terms of the system's state space, the CAC is expressed as follows: a new call of service-class k is accepted in the system if, upon its arrival, the system is in a state in which the call's acceptance does not violate the limit T. Hence, the CBP of service-class k is determined by the corresponding set of blocking states:
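Although the text computes exact CBP by solving the GB equations by hand, the same computation can be automated. The following sketch (not from the source; the function name and all parameter values are illustrative assumptions) builds the E-EMLM state space, applies the compression rule that all in-service calls are slowed by C/j whenever the peak occupancy j exceeds C, and solves the global-balance equations numerically:

```python
from itertools import product as states_grid

import numpy as np

def e_emlm_exact(C, T, b, lam, mu):
    """Solve the E-EMLM global-balance equations by brute force.

    A state n = (n_1, ..., n_K) is feasible when its total *peak*
    bandwidth j = sum(n_k * b_k) does not exceed T.  While j > C, every
    in-service call is compressed by C/j, so every departure rate is
    slowed by the same factor."""
    K = len(b)
    states = [n for n in states_grid(*(range(T // bk + 1) for bk in b))
              if sum(nk * bk for nk, bk in zip(n, b)) <= T]
    idx = {n: i for i, n in enumerate(states)}
    Q = np.zeros((len(states), len(states)))
    for n in states:
        j = sum(nk * bk for nk, bk in zip(n, b))
        for k in range(K):
            up = tuple(nk + 1 if i == k else nk for i, nk in enumerate(n))
            if up in idx:                       # admission: j + b_k <= T
                Q[idx[n], idx[up]] += lam[k]
            if n[k] > 0:                        # departure, slowed when j > C
                down = tuple(nk - 1 if i == k else nk for i, nk in enumerate(n))
                Q[idx[n], idx[down]] += n[k] * mu[k] * min(1.0, C / j)
    np.fill_diagonal(Q, -Q.sum(axis=1))
    # Solve pi Q = 0 together with the normalization sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(len(states))])
    rhs = np.zeros(len(states) + 1)
    rhs[-1] = 1.0
    pi = np.linalg.lstsq(A, rhs, rcond=None)[0]
    # CBP of class k: total probability of states with j + b_k > T.
    cbp = [sum(pi[idx[n]] for n in states
               if sum(nk * bk for nk, bk in zip(n, b)) + b[k] > T)
           for k in range(K)]
    return {n: pi[idx[n]] for n in states}, cbp
```

This direct solution scales poorly with the state space, which is why the recursive formulas derived later in the section matter in practice.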
Unfortunately, the compression/expansion of bandwidth destroys the LB between adjacent states in the E-EMLM or, equivalently, it destroys the reversibility of the system's Markov chain, and therefore no PFS exists (a fact that makes (3.6) inefficient). To show that the Markov chain of the E-EMLM is not reversible, an efficient way is to apply the so-called Kolmogorov criterion [3,4]: a Markov chain is reversible if and only if the product of the transition rates along any loop of adjacent states is the same as that for the reversed loop.
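Kolmogorov's criterion can be checked numerically. The sketch below (hypothetical two-class parameters; the rate model follows the compression rule described above, with departures slowed by C/j when j > C) computes the product of transition rates around a four-state loop in both orientations; unequal products confirm non-reversibility, while choosing C = T (no compression region) restores equality:

```python
def dep_rate(n, k, C, b, mu):
    """Service-completion rate of class k in state n; each in-service
    call is slowed by C/j whenever the peak occupancy j exceeds C."""
    j = sum(nk * bk for nk, bk in zip(n, b))
    return n[k] * mu[k] * min(1.0, C / j)

def loop_products(n, C, b, lam, mu):
    """Products of transition rates around the square loop
    n -> n+e1 -> n+e1+e2 -> n+e2 -> n, clockwise and counter-clockwise."""
    n1, n2 = n
    cw = (lam[0] * lam[1]
          * dep_rate((n1 + 1, n2 + 1), 0, C, b, mu)   # class-1 departure
          * dep_rate((n1, n2 + 1), 1, C, b, mu))      # class-2 departure
    ccw = (lam[1] * lam[0]
           * dep_rate((n1 + 1, n2 + 1), 1, C, b, mu)
           * dep_rate((n1 + 1, n2), 0, C, b, mu))
    return cw, ccw
```

For example, with b = (1, 2), unit rates, and the loop anchored at (1, 1), the two products differ when C = 4 (states of the loop fall in the compression region) but coincide when C equals the virtual capacity, illustrating why compression alone breaks reversibility.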
To circumvent the non-reversibility problem in the E-EMLM, the compression factors are replaced by state-dependent compression factors per service-class, which not only play a similar role but also lead to a reversible Markov chain [5]. Thus, (3.1) becomes:
Reversibility facilitates the recursive calculation of the link occupancy distribution (see Section 3.1.2). To ensure reversibility, the state-dependent factors must have the form [5]:
where and .
In (3.8), x(j) is a state multiplier associated with state j, whose values are chosen so that reversibility holds whenever bandwidth compression occurs; x(j) is given by [5]:
In order to show that (3.8) ensures reversibility in the E-EMLM, consider a system with two service-classes and four adjacent states (depicted in Figure 3.2). If we apply Kolmogorov's criterion (flow clockwise = flow counter-clockwise) to these four adjacent states and substitute the transition rates according to (3.8), we see that the resulting equation holds, and therefore the Markov chain is reversible.
Consider again Example 3.1.
The solution of this linear system is:
Then, based on (3.6), we obtain the approximate values of the CBP:
(compare with the exact 0.1748)
(compare with the exact 0.3574)
Table 3.2 The values of the state-dependent compression factors (Example 3.3).
n1 | n2 | x(j) | φ1(n) | φ2(n)
0 | 0 | 1.0 | 1.00 | 1.00
0 | 1 | 1.0 | 1.00 | 1.00
0 | 2 | 1.3333 | 0.00 | 0.75
1 | 0 | 1.0 | 1.00 | 1.00
1 | 1 | 1.0 | 1.00 | 1.00
1 | 2 | 1.7778 | 0.75 | 0.5625
2 | 0 | 1.0 | 1.00 | 1.00
2 | 1 | 1.3333 | 0.75 | 0.75
3 | 0 | 1.0 | 1.00 | 1.00
3 | 1 | 1.9999 | 0.6667 | 0.50
4 | 0 | 1.3333 | 0.75 | 0.00
5 | 0 | 2.2222 | 0.60 | 0.00
The steady state transition rates of the E‐EMLM are shown in Figure 3.4. According to this, the GB equation (rate in = rate out) for state is given by:
where the terms denote the steady-state probabilities of the corresponding states, respectively.
Assume now the existence of LB between adjacent states. Then, based on Figure 3.4, the LB equations (3.11) and (3.12) are extracted; these detailed LB equations exist because the Markov chain of the modified model is reversible:
for and .
Based on the LB assumption, the probability distribution has the solution:
where a_k = λ_k/μ_k is the offered traffic-load (in erl) of service-class k and G is the normalization constant given by:
Note that although the Markov chain has become reversible, the probability distribution of (3.13) is not a PFS, due to the summation of (3.9) needed for the determination of x(j). We proceed by defining the link occupancy distribution q(j) as:
where Ω_j is the set of states in which exactly j b.u. are occupied by all in-service calls.
Consider now two different sets of macro-states: (i) 0 ≤ j ≤ C and (ii) C < j ≤ T. For the first set, no bandwidth compression takes place and the values of q(j) are determined by the classical Kaufman–Roberts recursion (1.39). For the second set, aiming at deriving a similar recursion, we first substitute (3.8) in (3.11) to obtain:
Multiplying both sides of (3.16) by and summing over k, we have:
Equation (3.17), due to (3.9), is written as:
Summing both sides of (3.18) over and based on (3.15), we have:
or
The combination of (1.39) and (3.20) results in the recursive formula of the E-EMLM [5]:
The following performance measures can be determined based on (3.21):
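As an illustration of (3.21) and the associated performance measures, a commonly cited compact reading of the E-EMLM recursion is q(0) = 1 and q(j) = (1/min(j, C)) Σ_k a_k b_k q(j − b_k) for 1 ≤ j ≤ T. The following is a hedged sketch of that reading (function name and parameter values are illustrative, not the book's code):

```python
def e_emlm(C, T, b, a):
    """Link occupancy distribution, CBP, and utilization of the E-EMLM,
    using the compact recursion q(j) = sum_k a_k b_k q(j - b_k) / min(j, C)."""
    q = [0.0] * (T + 1)
    q[0] = 1.0
    for j in range(1, T + 1):
        q[j] = sum(a[k] * b[k] * q[j - b[k]]
                   for k in range(len(b)) if j >= b[k]) / min(j, C)
    G = sum(q)                                   # normalization constant
    # Class k is blocked when the peak occupancy j satisfies j + b_k > T.
    cbp = [sum(q[j] for j in range(T - bk + 1, T + 1)) / G for bk in b]
    # The *compressed* occupancy in macro-state j is min(j, C).
    util = sum(min(j, C) * q[j] for j in range(1, T + 1)) / G
    return q, cbp, util
```

As a sanity check, setting T = C with a single class of b = 1 b.u. should reproduce the Erlang B formula, while increasing T above C should lower the CBP, as discussed in the examples below.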
Consider again Example 3.1.
The normalization constant is . The state probabilities are:
(For comparison, the CBP results of the EMLM are and , for b.u.)
Then, we determine the mean number of in‐service‐calls based on (3.24):
Consider a link of capacity b.u. that accommodates calls of service‐classes with the following traffic characteristics: service‐class 1 erl, b.u., service‐class 2 erl, b.u. Assuming four different values of , and 56 b.u., calculate the CBP and of the two service‐classes and the link utilization when and increase in steps of 2 erl (up to erl).
Figure 3.5 presents the CBP of both service-classes, while Figure 3.6 presents the link utilization for the various values of T. Note that when T = C, the system behaves as in the EMLM. In the x-axis of both figures, point 1 refers to the lowest offered traffic-load, while point 8 refers to the highest. According to Figure 3.5, the CBP of both service-classes decrease as the value of T increases. This is expected, since the bandwidth compression mechanism lets calls enter the system with reduced (compressed) bandwidth. Similarly, the CBP reduction leads to an increase of the link utilization, as a result of accepting more calls in the system.
In the literature, a similar model, the balanced fair share model [7,8], exists; it aims to share the capacity of a single link among elastic calls already accepted for service according to a balanced fairness criterion, whereas the E-EMLM also focuses on call admission. In what follows, we show that the bandwidth compression mechanism of the E-EMLM and the model of [7,8] provide the same bandwidth compression.
In [8], a link accommodates Poisson arriving calls of elastic service-classes. The link capacity is shared among calls according to the balanced fairness criterion: when the total peak-bandwidth requirement does not exceed C, all calls use their peak-bandwidth requirement, while otherwise all calls share the capacity in proportion to their peak-bandwidth requirement and the link operates at its full capacity C. The main difference between the E-EMLM and the model of [8] lies in the fact that, in the E-EMLM, the notion of T allows for admission control, whereas there is no such parameter in [8]. The application of balanced fairness in multirate networks and its comparison with other classical bandwidth allocation policies, such as max–min fairness and proportional fairness, can be found in [9,10]. See Example 3.11 for a comparison between the approach of [9] and the compression mechanism of the E-EMLM under a threshold policy.
According to [8], the balanced fair sharing of all calls of service-class k in state n is given by:
where is a balance function defined by:
and is the unit line vector with 1 in the kth element and 0 elsewhere. According to (3.27), when the total peak-bandwidth requirement exceeds C, (3.26) is written as [8]:
The balanced fair share model determines (through (3.28)) the total bandwidth that will be allocated to each service-class, whereas the E-EMLM determines (through (3.8)) the percentage of the per-call peak-bandwidth requirement that will be assigned to each call of each service-class. This percentage has a lower limit, which is defined through the parameter T, while the absence of T in the balanced fair share model means that there is no limit on bandwidth compression. The relationship between the two mechanisms is established as follows. Consider two service-classes (for presentation purposes). Assuming that in states where no compression is needed, calls of each service-class use their peak-bandwidth requirements, the balanced fairness allocation gives:
Similarly,
So, in a state n where the total peak-bandwidth requirement exceeds C, the bandwidth allocated to a service-class k call is:
Based on ( 3.8) and ( 3.9), the E‐EMLM gives:
assuming that no bandwidth compression takes place in the preceding state. Combining (3.31) with (3.32), we obtain the desired relationship.
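The equivalence just derived can be checked numerically for a single link: in a congested state, balanced fairness shares the full capacity C in proportion to the peak requirements n_k b_k, which gives each call b_k · C/j, exactly the E-EMLM compression. A small sketch (the state and all parameter values are hypothetical):

```python
def e_emlm_per_call(n, k, C, b):
    """E-EMLM: in a state with peak occupancy j > C, every call keeps
    the fraction C/j of its peak-bandwidth requirement."""
    j = sum(nk * bk for nk, bk in zip(n, b))
    return b[k] * min(1.0, C / j)

def bf_per_call(n, k, C, b):
    """Single-link balanced fairness in a congested state: the link runs
    at full capacity C, shared in proportion to the peak requirements."""
    j = sum(nk * bk for nk, bk in zip(n, b))
    class_total = C * n[k] * b[k] / j    # bandwidth of the whole class
    return class_total / n[k]            # per-call share: b_k * C / j
```

Both functions return the same per-call bandwidth for every class in every congested state, which is the algebraic identity behind the comparison above.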
We now consider the multiservice system of the E-EMLM under the BR policy (E-EMLM/BR): a new service-class k call is accepted in the link if, after its acceptance, the occupied link bandwidth does not exceed T − t_k, where t_k refers to the BR parameter used to benefit (in terms of CBP) calls of service-classes other than k (see also the EMLM/BR in Section 1.3.2).
In terms of the system state space, the CAC is expressed as follows: a new call of service-class k is accepted in the system if, upon its arrival, the system is in a state that satisfies the above condition. Hence, the CBP of service-class k is determined by the corresponding set of blocking states:
The compression factors and the state-dependent compression factors per service-class are determined by (3.2) and (3.8), respectively.
Consider again Example 3.1. Assume BR parameters (in b.u.) chosen so that the CBP of the two service-classes are equalized.
The solution of this linear system is:
Then, based on the state probabilities and (3.33), we obtain the exact value of the equalized CBP:
The solution of this linear system is:
Then, based on the approximate values and (3.33), we obtain the equalized CBP, which is quite close to the exact value of 0.3234.
In the E‐EMLM/BR, the link occupancy distribution, , can be calculated in an approximate way, according to the Roberts method (see Section 1.3.2.2), which leads to the following recursive formula [11]:
This formula is similar to (1.64) of the EMLM/BR. If t_k = 0 for all k, then the E-EMLM results. If, in addition, T = C, then we have the classical EMLM.
The following performance measures can be determined based on (3.34):
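A hedged sketch of the Roberts-style computation in (3.34) follows: under this approximation, class k simply stops contributing to macro-states j > T − t_k, and its blocking states become those with j > T − b_k − t_k (the function name is illustrative, and the exact indexing should be checked against (3.34)):

```python
def e_emlm_br(C, T, b, a, t):
    """E-EMLM under the BR policy, Roberts-style approximation: class k
    contributes nothing to macro-states j > T - t_k, and its blocking
    states are those with j > T - b_k - t_k."""
    q = [0.0] * (T + 1)
    q[0] = 1.0
    for j in range(1, T + 1):
        s = 0.0
        for k in range(len(b)):
            if b[k] <= j <= T - t[k]:       # BR rule zeroes other terms
                s += a[k] * b[k] * q[j - b[k]]
        q[j] = s / min(j, C)
    G = sum(q)
    cbp = [sum(q[j] for j in range(T - t[k] - b[k] + 1, T + 1)) / G
           for k in range(len(b))]
    return q, cbp
```

Choosing the BR parameters so that b_k + t_k is the same for all classes makes the blocking-state sets coincide, which is exactly the CBP-equalization device used in the examples of this section.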
Consider again Example 3.6.
The normalization constant is:
The state probabilities are:
(compare with the exact value of 0.3234)
From (3.24), we obtain the mean values of in-service calls:
Consider a link of capacity b.u. that accommodates calls of service‐classes with the following traffic characteristics: service‐class 1 erl, b.u., service‐class 2 erl, b.u. The maximum allowed bandwidth compression ratio to all in‐service calls is . The BR policy is applied in order to achieve CBP equalization between the two service‐classes. Compare the CBP and of the two service‐classes and the link utilization, obtained by the E‐EMLM/BR, the E‐EMLM, the EMLM/BR, and the EMLM, when increases in steps of 1 erl (up to erl). Also provide simulation results for the E‐EMLM/BR.
For bandwidth compression, we choose the value of T for which the maximum bandwidth compression is achieved. For CBP equalization under the BR policy, we choose the BR parameters so that we have an equal number of blocking states for the two service-classes. Figures 3.9 and 3.10 present the analytical and the simulation CBP results of service-classes 1 and 2, respectively, in the case of the E-EMLM. For comparison, we give the corresponding analytical CBP results of the EMLM. In Figure 3.11, we present the analytical and simulation CBP results (equalized CBP) in the case of the E-EMLM/BR. Simulation results are based on SIMSCRIPT III and are mean values of 12 runs (no reliability ranges are shown because they are very small). For comparison, we give the corresponding analytical results for the EMLM/BR. In Figure 3.12, we present the link utilization (analytical results only) for all models. In the x-axis of all figures, point 1 refers to the lowest offered traffic-load, while point 8 refers to the highest. From the figures, we observe the following:
We now consider the multi-service system of the E-EMLM under the TH policy (E-EMLM/TH), as follows. A new call of service-class k, of b_k b.u., is accepted in the system if [12]:
In terms of the system state space, the CAC is expressed as follows: a new call of service-class k is accepted in the system if, upon its arrival, the system is in a state that satisfies both the bandwidth and the threshold constraints. Hence, the CBP of service-class k is determined by the corresponding set of blocking states:
This example clarifies the bandwidth compression/expansion mechanism in conjunction with the TH policy. Consider a link of C = 4 b.u. and let T = 8 b.u. The link accommodates calls of two service-classes whose calls require b1 = 2 and b2 = 4 b.u., respectively. The TH policy is applied to calls of both service-classes by imposing a threshold on the number of in-service calls per service-class.
The solution of this linear system is:
Then, based on the values of and (3.36), we determine the exact CBP:
The solution of this linear system is:
Then, based on the approximate values and (3.36), we determine the CBP for each service-class; the results are close to the exact values of 0.2436 and 0.6076, respectively.
Table 3.3 The state space and the occupied link bandwidth (Example 3.9).
n1 | n2 | compression factor | j (before compression) | j (after compression)
0 | 0 | 1.00 | 0 | 0 |
0 | 1 | 1.00 | 4 | 4 |
1 | 0 | 1.00 | 2 | 2 |
1 | 1 | 0.6667 | 6 | 4 |
2 | 0 | 1.00 | 4 | 4 |
2 | 1 | 0.5 | 8 | 4 |
3 | 0 | 0.6667 | 6 | 4 |
Table 3.4 The values of the state-dependent compression factors (Example 3.9).
n1 | n2 | x(j) | φ1(n) | φ2(n)
0 | 0 | 1.0 | 1.00 | 1.00
0 | 1 | 1.0 | 1.00 | 1.00
1 | 0 | 1.0 | 1.00 | 1.00
1 | 1 | 1.5 | 0.6667 | 0.6667
2 | 0 | 1.0 | 1.00 | 1.00
2 | 1 | 2.5 | 0.6 | 0.4
3 | 0 | 1.5 | 0.6667 | 0.0
The steady state transition rates of the E-EMLM/TH are shown in Figure 3.4. According to this, the GB equation for state is given by (3.10), while the LB equations are given by (3.11) and (3.12).
Based on the LB assumption, the probability distribution has the solution of (3.13), where the normalization constant is given by (3.14). Since j is the occupied link bandwidth, we consider two different sets of macro-states: (i) 0 ≤ j ≤ C and (ii) C < j ≤ T. For set (i), no bandwidth compression takes place and the values of q(j) are determined via (1.73). For set (ii), we follow the analysis of the E-EMLM up to (3.19), from which we have:
Thus, (3.19) can be written as:
where is the probability that j b.u. are occupied when the number of service-class k in-service calls is n_k, and is determined by (1.74). The combination of (1.73) and (3.38) results in the recursive formula of the E-EMLM/TH [12]:
The following performance measures can be determined based on (3.39):
In (3.39), knowledge of is required. Since when , we distinguish two regions of :
For the first region (where no bandwidth compression occurs), consider a system of capacity that accommodates all service-classes except service-class k. For this system, we define as follows:
Based on , we compute the unnormalized , recursively, via the following formula:
For the second region (where bandwidth compression occurs), the values of can be determined (taking into account (3.9)) by:
where .
Consider again Example 3.9.
where the values in the compression region are calculated via (3.45), while those in the region without compression are calculated via (3.44).
The normalization constant is . The state probabilities are:
Consider again Example 3.9. We show the relationship between the E-EMLM/TH and the balanced fair share model of [9]. In [9], balanced fair sharing is applied to access tree networks. The loss system of Example 3.9 can be represented as the simple access tree network of Figure 3.15. According to [9], the balanced fair sharing of all calls of service-class k in state n is given by (3.26), where the balance function for a tree network is defined by:
where link has capacity , is the set of service‐classes going through link , and is the unit line vector with 1 in component and 0 elsewhere.
Calculate the values of the balance function for states (1,1), (2,1), and (3,0) of Example 3.9, through (3.26) and (3.46) (of the balanced fair share model), where bandwidth compression occurs; find the allocated bandwidth per service-class call and compare it with that of the E-EMLM/TH. Then, repeat for state (3,1) and comment.
State (1,1):
Thus . This means that in state (1,1) the call of service‐class 1 has an allocated bandwidth of b.u. according to [ 9]. This is the same as the allocated bandwidth provided according to the E‐EMLM/TH (see Table 3.4 for the value of ): .
State (2,1):
Thus . This means that in state (2,1) a call of service‐class 1 has an allocated bandwidth of b.u. This is the same as the allocated bandwidth provided according to the E‐EMLM/TH: .
State (3,0):
Thus .
This means that in state (3,0) a call of service‐class 1 has an allocated bandwidth of b.u. This is the same as the allocated bandwidth provided according to the E‐EMLM/TH (see Table 3.4): .
Based on the above, we see that the E-EMLM/TH results in the same bandwidth compression values as those provided by the balanced fair sharing of [9].
State (3,1): In the case of the E-EMLM, the network (Figure 3.15) cannot be in state (3,1) due to the parameter T. The total requested bandwidth before compression (10 b.u.) would exceed the virtual capacity of T = 8 b.u., and therefore the call should be blocked and lost. On the contrary, in the case of the balanced fair share model, which does not have the parameter T, (3,1) is a permissible state and the CAC should allocate the following bandwidth per service-class call:
Thus . This means that in state (3,1) a call of service‐class 1 has an allocated bandwidth of b.u. Compare this value with the corresponding maximum compressed values of the E‐EMLM, which is b.u.
Consider a link of capacity b.u. and three values of : (i) b.u., (ii) b.u., and (iii) b.u. The link accommodates calls of service‐classes with the following traffic characteristics:
service-class | offered traffic-load | peak-bandwidth requirement | threshold
1 | 5.0 erl | 2 b.u. | 25 calls
2 | 1.5 erl | 5 b.u. | 11 calls
3 | 1.0 erl | 9 b.u. | 6 calls
In the elastic adaptive EMLM (EA-EMLM), we consider a link of capacity C b.u. that accommodates service-classes which are distinguished into elastic and adaptive service-classes. The call arrival process remains Poisson. As already mentioned, adaptive traffic is considered a variant of elastic traffic, in the sense that adaptive calls can tolerate bandwidth compression without altering their service time.
The bandwidth compression/expansion mechanism and the CAC of the EA‐EMLM are the same as those of the E‐EMLM (Section 3.1.1). The only difference is in (3.3), which is applied only on elastic calls. Similar to the E‐EMLM, the corresponding Markov chain in the EA‐EMLM does not meet the necessary and sufficient Kolmogorov's criterion for reversibility between four adjacent states.
Consider Example 3.1 () and assume that calls of service‐class 2 are adaptive. This example clarifies the differences between the E‐EMLM and the EA‐EMLM.
The solution of this linear system is:
Then, based on the values of the state probabilities and (3.6), we determine the exact CBP:
(compare with 0.1748 of the E-EMLM in Example 3.1)
(compare with 0.3574 of the E-EMLM in Example 3.1)
The comparison between the CBP obtained in the EA-EMLM and the E-EMLM reveals that the E-EMLM does not approximate the EA-EMLM. In addition, the CBP of the EA-EMLM are lower, a fact that is intuitively expected since, in this example, adaptive calls remain in the system for less time than the corresponding elastic calls.
To circumvent the non-reversibility problem in the EA-EMLM, similar to the E-EMLM, the compression factors are replaced by state-dependent factors per service-class, which individualize the role of the compression factor per service-class in order to lead to a reversible Markov chain [6]. Thus, the compressed bandwidth of service-class calls is determined by (3.7), while the values of the state-dependent factors are given by (3.8). However, due to the adaptive service-classes, in the EA-EMLM the values of the state multipliers x(j) are determined by [6]:
where .
The derivation of (3.47) is based on the following assumptions:
Now, we can derive (3.47) by substituting (3.49), (3.50), and (3.8) into (3.48).
Consider again Example 3.13.
The solution of this linear system is:
Then, based on the approximate values and (3.6), we determine the CBP:
(compare with the exact 0.128 in Example 3.13)
(compare with the exact 0.2907 in Example 3.13)
Table 3.5 The values of the state-dependent factors (Example 3.14).
n1 | n2 | x(j) | φ1(n) | φ2(n)
0 | 0 | 1.0 | 1.00 | 1.00
0 | 1 | 1.0 | 1.00 | 1.00
0 | 2 | 1.0 | 0.00 | 1.00
1 | 0 | 1.0 | 1.00 | 1.00
1 | 1 | 1.0 | 1.00 | 1.00
1 | 2 | 1.1333 | 0.8824 | 0.8824
2 | 0 | 1.0 | 1.00 | 1.00
2 | 1 | 1.1667 | 0.8571 | 0.8571
3 | 0 | 1.0 | 1.00 | 1.00
3 | 1 | 1.5667 | 0.7447 | 0.6383
4 | 0 | 1.3333 | 0.75 | 0.00
5 | 0 | 2.2222 | 0.60 | 0.00
The steady state transition rates of the EA-EMLM are shown in Figure 3.4. According to this, the GB equation for state is given by (3.10) and the LB equations by (3.11) and (3.12).
Similar to the E-EMLM, we consider two different sets of macro-states: (i) 0 ≤ j ≤ C and (ii) C < j ≤ T. For set (i), no bandwidth compression takes place and the values of q(j) are determined by the classical Kaufman–Roberts recursion (1.39). For set (ii), we substitute (3.8) in (3.11) to obtain:
Multiplying both sides of (3.51a) by and summing over , we obtain:
Similarly, multiplying both sides of (3.51b) by and , and summing over , we obtain:
By adding (3.52) and (3.53), we have:
Based on (3.47), (3.54) is written as:
Summing both sides of (3.55) over and based on the fact that , we have:
The combination of (1.39) and (3.56) leads to the recursive formula of the EA-EMLM [6]:
The following performance measures can be determined based on (3.57):
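One way to read (3.57) is that, in the compression region C < j ≤ T, elastic classes contribute with denominator C (their service is stretched by j/C), while adaptive classes contribute with denominator j (their service rate is unchanged); for j ≤ C both reduce to the Kaufman–Roberts step. The sketch below encodes this reading (an assumption to be checked against (3.57); the function name and parameter values are illustrative):

```python
def ea_emlm(C, T, b, a, elastic):
    """EA-EMLM occupancy distribution (sketch).  For j <= C the classic
    Kaufman-Roberts step applies to every class; for C < j <= T elastic
    classes are divided by C (their service is stretched by j/C) while
    adaptive classes keep their service rate and are divided by j."""
    q = [0.0] * (T + 1)
    q[0] = 1.0
    for j in range(1, T + 1):
        s = 0.0
        for k in range(len(b)):
            if j >= b[k]:
                denom = min(j, C) if elastic[k] else j
                s += a[k] * b[k] * q[j - b[k]] / denom
        q[j] = s
    G = sum(q)
    cbp = [sum(q[j] for j in range(T - bk + 1, T + 1)) / G for bk in b]
    return q, cbp
```

With all classes elastic, the computation coincides with the E-EMLM recursion, while declaring a class adaptive lowers the CBP, in line with the comparison made in Example 3.13.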
Consider Example 3.14.
The normalization constant is .
The state probabilities are:
Then, based on (3.24), we obtain the mean numbers of in-service calls per service-class.
Consider Example 3.5 () and let calls of service‐class 1 be adaptive. Assuming four values of , and 56 b.u., calculate the CBP and of the two service‐classes and the link utilization when and increase in steps of 2 erl (up to erl). Compare the CBP and the link utilization results with those obtained by the E‐EMLM in Example 3.5.
Figure 3.24 presents the CBP of both service-classes, while Figure 3.25 presents the link utilization for the various values of T. Observe that when T = C, the system behaves as in the EMLM. In the x-axis of both figures, point 1 refers to the lowest offered traffic-load, while point 8 refers to the highest. According to Figure 3.24, the CBP of both service-classes decrease as the value of T increases. Compared to Figure 3.5, we see that (i) the CBP of the EA-EMLM are much lower than the corresponding CBP of the E-EMLM and (ii) the impact of the increase of T on the CBP is higher in the EA-EMLM. Both (i) and (ii) can be explained by the fact that in-service calls of service-class 1 remain in the system for less time when they are adaptive, thus allowing more calls to be accepted in the system. As far as the link utilization is concerned, the values in the EA-EMLM are slightly lower (worse) than those of the E-EMLM because of the adaptive service-class 1 calls.
We now consider the multiservice system of the EA-EMLM under the BR policy (EA-EMLM/BR) [13]. A new service-class k call is accepted in the link if, after its acceptance, the occupied link bandwidth does not exceed T − t_k, where t_k refers to the BR parameter used to benefit (in terms of CBP) calls of service-classes other than k. In terms of the system state space, the CBP is expressed according to (3.33).
Consider again Example 3.13. Assume BR parameters (in b.u.) chosen so that the CBP of the two service-classes are equalized.
The solution of this linear system is:
Then, based on the state probabilities and (3.33), we obtain the exact value of the equalized CBP:
(compare with 0.3234 of the E-EMLM/BR in Example 3.6).
The solution of this linear system is:
Then, based on the approximate values and (3.33), we obtain the value of the equalized CBP:
which is quite close to the exact value of 0.2674.
In the EA-EMLM/BR, the link occupancy distribution, , is calculated in an approximate way, according to the following recursive formula (Roberts method) [13]:
This formula is similar to (3.34) of the E-EMLM/BR. If t_k = 0 for all k, then the EA-EMLM results. If, in addition, T = C, then we have the classical EMLM.
The following performance measures can be determined via (3.60):
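Combining the previous two sketches gives a corresponding sketch for (3.60): the BR rule removes the contribution of class k from macro-states j > T − t_k, while the elastic/adaptive distinction selects the denominator (again an illustrative reading, not the book's exact statement):

```python
def ea_emlm_br(C, T, b, a, t, elastic):
    """EA-EMLM under the BR policy (Roberts-style sketch): the BR rule
    removes the contribution of class k from states j > T - t_k, while
    the elastic/adaptive distinction selects the denominator."""
    q = [0.0] * (T + 1)
    q[0] = 1.0
    for j in range(1, T + 1):
        s = 0.0
        for k in range(len(b)):
            if b[k] <= j <= T - t[k]:
                denom = min(j, C) if elastic[k] else j
                s += a[k] * b[k] * q[j - b[k]] / denom
        q[j] = s
    G = sum(q)
    cbp = [sum(q[j] for j in range(T - t[k] - b[k] + 1, T + 1)) / G
           for k in range(len(b))]
    return q, cbp
```

As with the E-EMLM/BR, equal values of b_k + t_k across classes equalize the blocking-state sets and hence the CBP.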
Consider again Example 3.17.
The normalization constant is
(compare with the value of 0.3171 obtained in the E‐EMLM/BR of Example 3.7)
Then, based on (3.24), we obtain the mean numbers of in-service calls per service-class.
Consider a link of capacity b.u. that accommodates calls of four service‐classes, with the following traffic characteristics:
service-class | offered traffic-load | peak-bandwidth requirement | BR parameter
1 | 12 erl | 1 b.u. | 9 b.u.
2 | 6 erl | 2 b.u. | 8 b.u.
3 | 3 erl | 4 b.u. | 6 b.u.
4 | 2 erl | 10 b.u. | 0 b.u.
Calls of service‐classes 1 and 2 are elastic, while calls of service‐classes 3 and 4 are adaptive. Compare the CBP of all service‐classes obtained by the EA‐EMLM/BR and the EA‐EMLM when the offered traffic‐loads increase in steps of 2, 1, 0.5, and 0.25 erl up to erl. Also provide simulation results for the EA‐EMLM/BR.
Figure 3.27 presents the analytical and the simulation equalized CBP results of all service-classes in the case of the EA-EMLM/BR. For comparison, we include the corresponding analytical CBP results of the EA-EMLM. In the x-axis of Figure 3.27, point 1 refers to the lowest offered traffic-load, while point 7 refers to the highest. Simulation results are based on SIMSCRIPT III and are mean values of 12 runs (no reliability ranges are shown). From Figure 3.27, we observe that (i) analytical and simulation CBP results are very close and (ii) the EA-EMLM fails to capture the behavior of the EA-EMLM/BR (hence, both models are necessary).
We now consider the EA‐EMLM under the TH policy (EA‐EMLM/TH), as in the case of the E‐EMLM/TH (Section 3.3). That is, the total number of in‐service calls per service‐class must not exceed a threshold (per service‐class). The bandwidth compression/expansion mechanism and the CAC in the EA‐EMLM/TH are the same as those of the E‐EMLM/TH, but ( 3.3) is applied only on elastic calls to satisfy their service time requirement.
Consider Example 3.9 () and assume that calls of service‐class 2 are adaptive. This example clarifies the differences between the E‐EMLM/TH and the EA‐EMLM/TH.
The solution of this linear system is:
Then, based on the values of the state probabilities and (3.36), we determine the exact CBP for each service-class:
(much lower than 0.2436 and 0.6076, respectively, of the E-EMLM/TH in Example 3.9).
The solution of this linear system is:
Then, based on the approximate values and (3.36), we determine the CBP for each service-class:
(compare with the exact 0.2031)
(compare with the exact 0.5282)
Table 3.6 The values of the state-dependent factors (Example 3.20).
n1 | n2 | x(j) | φ1(n) | φ2(n)
0 | 0 | 1.0 | 1.00 | 1.00
0 | 1 | 1.0 | 1.00 | 1.00
1 | 0 | 1.0 | 1.00 | 1.00
1 | 1 | 1.1667 | 0.8571 | 0.8571
2 | 0 | 1.0 | 1.00 | 1.00
2 | 1 | 1.6667 | 0.70 | 0.60
3 | 0 | 1.5 | 0.6667 | 0.0
The steady state transition rates of the EA-EMLM/TH are shown in Figure 3.4. According to this, the GB equation for state is given by (3.10), while the LB equations are given by (3.11) and (3.12).
Similar to the EA-EMLM, we consider two different sets of macro-states: (i) 0 ≤ j ≤ C and (ii) C < j ≤ T. For set (i), no bandwidth compression takes place and the values of q(j) are determined via (1.73). For set (ii), in order to derive a recursive formula, we follow the analysis of the EA-EMLM up to (3.55). Then, we have:
Thus, (3.55) can be written as:
where expresses the blocking constraint of the TH policy and is given by (1.74).
Summing both sides of (3.62) over and based on the fact that , we have:
The combination of (1.73) and (3.63) results in the formula of the EA‐EMLM/TH [14]:
The following performance measures can be determined based on (3.64):
In (3.64), knowledge of is required. Since when , we consider two subsets of : (i) and (ii) . In both subsets, we assume that .
For the first subset, consider a system of capacity that accommodates all service-classes except service-class k. For this system, we define via (3.43). Based on , we compute through (3.44). For the second subset, can be determined via (3.45).
Consider again Example 3.20.
The normalization constant is .
The state probabilities are:
Consider Example 3.12 and assume that service‐class 1 is elastic while the other two service‐classes are adaptive. Assuming the EA‐EMLM/TH and the EA‐EMLM, calculate the CBP of all service‐classes for b.u. and , and 5 calls.
The results of both the EA‐EMLM/TH and EA‐EMLM are presented in Figures 3.29, 3.30 and 3.31 for the three service‐classes, respectively. Observe that the EA‐EMLM fails to approximate the CBP results obtained by the EA‐EMLM/TH in the cases of and 4. With the same explanation as in the case of Example 3.12, the two models give quite close CBP results for .
We discuss the applicability of the models in the context of new architectural and functional enhancements of next‐generation (5G) cellular networks. It is widely acknowledged that 5G systems will extensively rely on software‐defined networking (SDN) and network function virtualization (NFV), which have attracted a lot of research efforts and gained tremendous attention from both academic and industry communities. The SDN technology is the driver towards completely programmable networks, which can be achieved by decoupling the control and data planes [15,16]. On the other hand, the NFV technology allows executing the software‐based network functions on general‐purpose hardware via virtualization [17,18]. SDN and NFV, due to their complementary nature, are traditionally seen as related concepts and implemented together [19]. Some of the expected benefits of SDN/NFV include CAPEX and OPEX reduction for network operators, by reducing the cost of hardware and automating services, flexibility in terms of deployment and operation of new infrastructure and applications, faster innovation cycles due to the creation of enhanced services/applications, and new business models. Due to these benefits, SDN/NFV will play a major role in the emerging 5G systems [20]. In what follows, we briefly describe the considered SDN/NFV based cellular network architecture, shown in Figure 3.32, and its main elements [21].
The realization of an intelligent radio access network (RAN) is greatly facilitated by SDN and NFV technologies [22]. SDN enables abstraction and modularity of the network functions at the RAN level. As a consequence, a hierarchical control architecture can be implemented, in which the high control layer controls lower layers by specifying procedures, without requiring access to the specific implementation details of the lower layers [23]. Such an implementation, however, requires a holistic view of the cellular network at the higher control layer to be designed by taking into account appropriate abstraction of lower layers via well-defined control interfaces. This is essential to enable programmable radio resource management (RRM) functions, such as radio resource allocation (RRA) and CAC. On the other hand, NFV technology allows the execution of control programs on general-purpose computing/storage resources [24]. This is contrary to the traditional approach, in which the BS consists of tightly coupled software and hardware platforms. Hence, an NFV-based BS may have some network functions implemented as physical network functions, while other functions are implemented as virtual network functions (VNFs). An advantage of VNFs is that the underlying hardware can be efficiently utilized, since VNFs run on a shared NFV infrastructure (NFVI).
The architecture of Figure 3.32 relies on the SDN concept, whose different layers are depicted in Figure 3.33. In the control layer, the SDN controller provides a global view of the available underlying resources to one or more network applications that are located at the application layer. This communication is done using the so‐called northbound open application programming interface (API). On the other hand, the southbound open API is used to configure the forwarding elements (FEs) that are located at the infrastructure layer. The configuration of FEs is performed by the SDN controller, which sends control messages to the SDN agents located within the FEs.
The main elements at the RAN level are small cell BSs (SBSs), macro BSs (MBSs), WiFi APs, local offload gateways (LO-GWs) [25], and mobile users (MUs). These entities are controlled by the local SDN controller (LSC). The geographical area of the RAN consists of a number of clusters. Each cluster typically consists of many cells and is under the control of a single LSC. For example, in Figure 3.32 the first cluster contains one MBS, one SBS, one WiFi AP, one LO-GW, and four MUs, whereas the second cluster contains one MBS and three MUs. MUs can freely move between clusters or may even belong to more than one cluster at the same time.
When a network entity wishes to establish a connection, it sends the request to the LSC of the corresponding cluster. Upon receiving the request, the LSC identifies the appropriate destination address for the requested connection. In particular, the LSC forwards the request either to the appropriate in-cluster recipient (e.g., MU or MBS) or to the mobile core network (MCN) if the recipient is outside the cluster. To perform this, the LSC maintains knowledge of the cluster topology, as well as of the external connections towards the MCN and neighbouring clusters. The LSC is also responsible for multi-radio access technology (RAT) coordination, that is, it takes the RRA decisions in geographical areas where multiple RATs are available (e.g., LTE and WiFi). In the cache-enabled mode [26], the LSC takes caching decisions within the cluster by exploiting the knowledge of content popularity and available in-cluster resources. Another important function of the LSC is in-cluster content routing. Upon receiving a connection request from an MU, the LSC constructs the path from the content source (if the source is within the cluster) or the border entity (if the source is outside the cluster) towards the requesting MU. The LSC then modifies the flow tables at the FEs along the content delivery path. Finally, the LSC is responsible for MU mobility within its cluster. Hence, mobility-related information does not need to be sent to the MCN. In Figure 3.34, the basic components of a virtualized RAN are shown. Multiple virtual BSs (VBSs) may run on top of the NFVI, essentially sharing the resources of the same physical infrastructure.
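The LSC's request-handling decision described above can be sketched as follows. This is a minimal illustrative sketch: the class and field names (`LSC`, `cluster_topology`, `mcn_address`) are our own, not from [21]; the text specifies only the decision rule (forward to a known in-cluster recipient, otherwise to the MCN).

```python
# Illustrative sketch of the LSC's connection-request routing.
# All identifiers are hypothetical; only the decision logic follows the text.

class LSC:
    def __init__(self, cluster_topology, mcn_address):
        self.topology = cluster_topology  # known in-cluster entities (MUs, MBSs, ...)
        self.mcn = mcn_address            # external connection towards the core

    def route_request(self, destination: str) -> str:
        """Return the next hop for a connection request."""
        if destination in self.topology:
            return destination            # in-cluster recipient (e.g., MU or MBS)
        return self.mcn                   # recipient outside the cluster

lsc = LSC(cluster_topology={"MU-1", "MBS-1"}, mcn_address="MCN")
print(lsc.route_request("MU-1"))  # in-cluster: MU-1
print(lsc.route_request("MU-9"))  # unknown locally: MCN
```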
The MCN consists of the mobile cloud computing (MCC) infrastructure, mobile content delivery network (M‐CDN) servers, packet data network (PDN) GWs and serving GWs (S‐GWs). The control is performed by one or more core SDN controllers (CSCs). A CSC receives and handles the connection requests from the RAN via the corresponding LSCs. A CSC is also responsible for storage (e.g., M‐CDN), compute (e.g., MCC), spectrum, and energy resources, and for providing QoS support. Finally, the PDN‐GW (or simply P‐GW) forwards traffic to/from the Internet and other external IP networks, whereas the S‐GW receives/sends traffic from/to the RAN.
We continue by describing our considered RAN model in the SDN/NFV-enabled cellular network [21]. We assume a cluster of VBSs controlled by a single LSC at the RAN level (Figure 3.34). The cluster has a fixed number of VBSs. For the purposes of the analysis, it is assumed that the amount of radio resources in the RAN can be discretized and is measured in resource units (RUs). The RU definition depends on the adopted channel access scheme. When schemes based on FDMA or TDMA are used, the RU can be defined as an integer number of frequency carriers or time slots. On the other hand, when schemes based on CDMA are used, the definition of the RU must take into account the multiple access interference [27]. To this end, the notions of the cell load and load factor have been used [28,29]. In [21], the LTE orthogonal FDMA (OFDMA) scheme is adopted. We define an RU to be equal to a single OFDMA resource block (RB). For example, if the LTE channel bandwidth is 9 MHz and one subcarrier occupies 15 kHz, then there are in total 600 subcarriers. Since one OFDMA RB corresponds to 12 subcarriers, we have 50 RUs per channel per time slot. Similarly, a 13.5 MHz LTE channel has 75 RUs and an 18 MHz LTE channel has 100 RUs.
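The RU arithmetic above can be checked with a small sketch, assuming only the figures stated in the text (15 kHz subcarrier spacing, 12 subcarriers per OFDMA RB):

```python
# Number of LTE resource units (RUs = resource blocks) per channel,
# following the arithmetic in the text.
SUBCARRIER_HZ = 15_000       # subcarrier spacing
SUBCARRIERS_PER_RB = 12      # one OFDMA RB spans 12 subcarriers

def rus_per_channel(bandwidth_hz: float) -> int:
    """Return the number of RUs (RBs) that fit in the given bandwidth."""
    subcarriers = int(bandwidth_hz // SUBCARRIER_HZ)
    return subcarriers // SUBCARRIERS_PER_RB

print(rus_per_channel(9e6))     # 600 subcarriers -> 50 RUs
print(rus_per_channel(13.5e6))  # 900 subcarriers -> 75 RUs
print(rus_per_channel(18e6))    # 1200 subcarriers -> 100 RUs
```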
Let us denote by $C$ the total number of RUs in the RAN. RUs are dynamically allocated by the LSC to the VBSs, such that VBS $v$ receives $C_v$ RUs. Hence, at any given moment it must hold that:

$$\sum_{v=1}^{V} C_v = C$$

where $V$ is the number of VBSs in the cluster.
We consider $K$ different service-classes, distinguished by the number of RUs requested by a single call originating from an MU. Calls of service-class $k$ ($k = 1, \ldots, K$) arrive according to a Poisson process with arrival rate $\lambda_k$, and each call requests $b_k$ RUs. Let $j$ be the number of occupied RUs in the cluster and assume that $j$ can virtually exceed $C$ up to a limit of $T$ RUs. Then the CAC of a service-class $k$ call may follow the one described in Section 3.1.1 for the E-EMLM.
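A minimal sketch of this CAC follows, under the E-EMLM compression rule summarized at the start of the chapter: with real capacity $C$, virtual limit $T \ge C$, and occupied RUs $j$, a call requesting $b_k$ RUs is accepted with compression factor $r = C/(j + b_k)$ whenever $C < j + b_k \le T$. The function name and numerical values are illustrative only.

```python
# Hedged sketch of E-EMLM call admission over the cluster's RUs.
# Symbols C, T, b_k, j follow the text; names/values are illustrative.

def admit(j: int, b_k: int, C: int, T: int):
    """Return (accepted, compression_factor) for a call requesting b_k RUs."""
    if j + b_k <= C:
        return True, 1.0             # enough real RUs: no compression
    if j + b_k <= T:
        return True, C / (j + b_k)   # admit; all in-service calls compress by r
    return False, 0.0                # blocked: virtual limit T exceeded

# Example with C = 50 RUs and a virtual limit T = 60 RUs:
print(admit(40, 5, 50, 60))  # accepted without compression
print(admit(48, 5, 50, 60))  # accepted with r = 50/53
print(admit(58, 5, 50, 60))  # blocked
```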
We have shown how the E-EMLM can be applied in next-generation networks. It is worth mentioning that almost all teletraffic models can be applied in the same manner, after the necessary adjustments.
An interesting extension of the E-EMLM includes the co-existence of stream and elastic traffic with or without prioritization of stream traffic [30–32]. In both cases there is no single recursive formula for the calculation of the link occupancy distribution. In the case of stream traffic prioritization, the CBP calculation of stream (not elastic) calls can be based on the Kaufman–Roberts formula. On the other hand, in the case of no prioritization, the CBP determination of stream and elastic calls can only be based on approximate (non-recursive) algorithms [32]. An interesting extension of the E-EMLM is proposed in [33], in which the E-EMLM is described as a multirate loss-queueing model with a finite number of buffers, a fact that provides a springboard for the efficient analysis of multirate queues. A generalization of [33] that provides a framework for the calculation of various queueing characteristics for each elastic service-class is proposed in [34]. The case of adaptive traffic is studied in [35].
Another interesting model (since it is recursive) for elastic traffic in wireless networks has been proposed in [36], where the uplink of a CDMA cell is analysed. Service-class $k$ calls arrive following a Poisson process with rate $\lambda_k$ and have a peak transmission bit rate requirement $R_k$. The service time is exponentially distributed with mean $\mu_k^{-1}$. When sending with the peak bit rate, the required target ratio $\Delta_k$ of the received power from the mobile terminal to the total interference energy at the BS is given by [36]:

$$\Delta_k = \left(1 + \frac{W}{(E_b/N_0)_k\, R_k}\right)^{-1} \qquad (3.67)$$
where $(E_b/N_0)_k$ is the bit-energy-to-noise ratio and $W$ is the spread spectrum bandwidth.
Let $n_k$ be the number of in-service service-class $k$ calls and $p_k$ the power received at the BS from the user equipment (UE). The power $p_k$ should fulfil the equation [36]:

$$p_k = \Delta_k \left(P_{own} + P_{other} + P_N\right) \qquad (3.68)$$
where $P_{own} = \sum_{i=1}^{K} n_i p_i$ is the total power received by the BS within its cell, $P_{other}$ is the total power received from other cells, and $P_N$ is the background noise power.
Based on (3.68), and since $P_{own}$ itself depends on the powers $p_i$, we obtain the following equation for $p_k$ [36]:

$$p_k = \frac{\Delta_k \left(P_{other} + P_N\right)}{1 - \sum_{i=1}^{K} n_i \Delta_i} \qquad (3.69)$$
According to the denominator of (3.69), we see that the CAC must prevent $\sum_{i=1}^{K} n_i \Delta_i$ from becoming larger than 1.
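A hedged numerical sketch of (3.67)–(3.69) follows: `target_ratio` computes the load factor of a class, and `received_power` evaluates the per-call received power, failing in the states that the CAC must block. All parameter values (chip rate, bit rates, target ratios, noise powers) are illustrative assumptions, not values from [36].

```python
# Illustrative sketch of the CDMA uplink power relations; values are assumptions.

def target_ratio(eb_n0: float, R: float, W: float) -> float:
    """Target ratio of a class: 1 / (1 + W / ((Eb/N0) * R))."""
    return 1.0 / (1.0 + W / (eb_n0 * R))

def received_power(n, ratios, k, P_other, P_N):
    """Per-call received power of class k; valid only while the cell load < 1."""
    cell_load = sum(n_i * d_i for n_i, d_i in zip(n, ratios))
    if cell_load >= 1.0:
        raise ValueError("cell load >= 1: CAC must block this state")
    return ratios[k] * (P_other + P_N) / (1.0 - cell_load)

W = 3.84e6                                   # assumed spread spectrum bandwidth (Hz)
ratios = [target_ratio(5.0, 12_200, W),      # hypothetical voice-like class
          target_ratio(4.0, 64_000, W)]      # hypothetical data-like class
n = [10, 3]                                  # in-service calls per class
print(received_power(n, ratios, 0, P_other=1e-13, P_N=1e-13))
```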
Finally, it can be proved (see [36,37]) that there exists a recursive formula for the determination of the CBP in a CDMA cell of maximum capacity $C$, which accommodates Poisson arriving calls of $K$ different service-classes with offered traffic-load $\alpha_k = \lambda_k / \mu_k$ and bandwidth requirement $b_k$.
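For illustration, a recursion of the Kaufman–Roberts type referred to above can be sketched as follows. The mapping of the CDMA target ratios to integer requirements $b_k$ is model-specific (see [36,37]); the parameter values here are purely illustrative.

```python
# Hedged sketch of a Kaufman-Roberts-type recursion for the occupancy
# distribution q(j) and the per-class CBP; parameters are illustrative.

def kaufman_roberts(C, b, a):
    """q(j) = (1/j) * sum_k a_k * b_k * q(j - b_k), q(0) = 1, then normalize.

    C: capacity in units; b: per-class requirements; a: offered traffic-loads.
    Returns (normalized q, list of per-class CBP)."""
    q = [0.0] * (C + 1)
    q[0] = 1.0
    for j in range(1, C + 1):
        q[j] = sum(a_k * b_k * q[j - b_k]
                   for a_k, b_k in zip(a, b) if j >= b_k) / j
    G = sum(q)                       # normalization constant
    q = [x / G for x in q]
    # CBP of class k: probability that fewer than b_k units are free
    return q, [sum(q[C - b_k + 1:]) for b_k in b]

q, cbp = kaufman_roberts(C=10, b=[1, 2], a=[3.0, 1.0])
print(cbp)  # wider calls see higher blocking
```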