2.3 Least Cost Opportunistic Routing
In the above analysis, we assumed that node location information is known and opportunistic routing decisions are made locally. Next, we introduce least cost opportunistic routing and two polynomial-time algorithms, one centralized and one distributed, that compute end-to-end opportunistic paths achieving the least cost. First, we introduce the metric of Expected Opportunistic Transmission Count (EOTX).
2.3.1 Expected Opportunistic Transmission Count (EOTX)
The expected opportunistic transmission count was proposed in Dubois-Ferriere et al. (2007). It is a generalization of the ETX metric in traditional routing (Couto et al. 2003). For a node ni and its forwarding candidate set Fi, the EOTX is defined in Equation (2.15):
\[ d_{iF_i} = \frac{1}{1 - \prod_{n_j \in F_i} (1 - p_{ij})} \qquad (2.15) \]
where 1 − ∏nj∈Fi(1 − pij) has the same definition as in Equation (2.13); its physical meaning is the probability that at least one forwarding candidate correctly receives the packet sent by ni per transmission. So the EOTX is the expected number of transmissions needed for at least one candidate in Fi to correctly receive the packet transmitted by ni.
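As a quick sketch, the EOTX of Equation (2.15) can be computed directly from the per-candidate delivery probabilities. The function name and the probability values below are illustrative only, not part of the original formulation.

```python
def eotx(p):
    """EOTX of Equation (2.15): expected number of transmissions by ni
    until at least one forwarding candidate correctly receives the packet.
    p is a list of per-candidate delivery probabilities pij."""
    miss_all = 1.0
    for p_ij in p:
        miss_all *= (1.0 - p_ij)   # probability that every candidate misses
    if miss_all == 1.0:
        return float("inf")        # no candidate can ever receive the packet
    return 1.0 / (1.0 - miss_all)

print(eotx([0.5]))        # 2.0: a 50% link needs two transmissions on average
print(eotx([0.5, 0.5]))   # ~1.33: a second candidate reduces the cost
```

Note how adding a second candidate with the same 50% delivery probability lowers the expected transmission count from 2 to about 1.33, which is the basic gain of opportunistic forwarding.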
2.3.2 End-to-end Cost of Opportunistic Routing
The end-to-end cost of opportunistic routing from a node ni to the destination nd is defined in Equation (2.16):
\[ D_i = d_{iF_i} + D_{F_i} \qquad (2.16) \]
where diFi is the EOTX of Equation (2.15) and DFi is the remaining opportunistic path cost, intuitively the priority-weighted average of the candidates' costs:
\[ D_{F_i} = \sum_{n_j \in F_i} w_j D_j \qquad (2.17) \]
where, with the candidates ordered by forwarding priority,
\[ w_j = \frac{p_{ij} \prod_{k=1}^{j-1} (1 - p_{ik})}{1 - \prod_{n_k \in F_i} (1 - p_{ik})} \qquad (2.18) \]
is the probability that nj is the highest-priority candidate to receive the packet, given that at least one candidate receives it.
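To make the end-to-end cost of Equation (2.16) concrete, the sketch below evaluates Di = diJ + DJ for a prioritized forwarding set, under the assumption that DJ is the priority-weighted average of the candidates' remaining costs as in Equations (2.17) and (2.18). The function name and the (pij, Dj) example pairs are hypothetical.

```python
def anypath_cost(candidates):
    """Di = diJ + DJ of Equation (2.16) for a forwarding set J given as
    (pij, Dj) pairs sorted by forwarding priority (ascending Dj)."""
    miss = 1.0       # probability that all higher-priority candidates missed
    weighted = 0.0   # numerator of the remaining cost DJ, per Eqs. (2.17)/(2.18)
    for p_ij, d_j in candidates:
        weighted += p_ij * miss * d_j   # nj relays only if all before it miss
        miss *= (1.0 - p_ij)
    hit = 1.0 - miss                    # prob. at least one candidate receives
    if hit == 0.0:
        return float("inf")
    return (1.0 + weighted) / hit       # 1/hit is diJ, weighted/hit is DJ

# Two candidates with remaining costs 1 and 2, each reachable with pij = 0.5:
print(anypath_cost([(0.5, 1.0), (0.5, 2.0)]))   # 8/3, about 2.67
```

Observe that the result (about 2.67) is below the 3.0 expected with the single best candidate alone (1/0.5 + 1), again reflecting the opportunistic gain.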
2.3.3 Properties of LCOR
The goal of least cost opportunistic routing (Dubois-Ferriere et al. 2007) or shortest anypath routing (Laufer and Kleinrock 2008b) is to find the forwarding candidates at each hop that achieve a minimum end-to-end cost Ds in Equation (2.16) from the source node ns to the destination node nd. This problem looks complicated because the least cost or shortest anypath from a source to a destination depends on the least costs from the forwarding candidates of the source node, and those costs in turn depend on the forwarding candidates of those forwarding candidates, and so on. It seems that we would need to enumerate all possible forwarding candidate sets and their forwarding priorities to find the shortest anypath. However, as proved in Laufer and Kleinrock (2008b), polynomial-time algorithms exist to solve this combinatorial problem. We now present five lemmas proved in Laufer and Kleinrock (2008b) that will help us understand the polynomial-time algorithms presented in Section 2.3.4 and Section 2.3.5.
Let δi be the distance of the shortest anypath from a node ni to the destination nd, and Φi be the corresponding optimal forwarding candidate set.
Lemma 2.3 Let Di be the distance of a node ni via forwarding set Fi and let Di′ be the distance via forwarding set Fi ∪ {nk}, where Dk ≥ Dj for every node nj ∈ Fi. We have Di′ ≤ Di if and only if Di ≥ Dk.
This lemma indicates that, for a node ni, it is always beneficial to add a neighbor nk to the forwarding candidate set in order to obtain a shorter (or equal) distance to the destination, provided that the distance from nk to the destination is no smaller than that of every current candidate but no larger than the current distance Di from ni to the destination.
Lemma 2.4 The shortest distance δi of a node ni is always no shorter than the shortest distance δj of any node nj in the optimal forwarding set Φi. That is, we have δi ≥ δj for every nj ∈ Φi.
Lemma 2.4 guarantees that if a node ni uses its neighbor node nj in its optimal forwarding set Φi, the shortest distance δi can never be smaller than δj. This is equivalent to the restriction that all weights in the graph must be non-negative in Dijkstra's algorithm.
Lemma 2.5 If a node ni uses a node nk in its optimal forwarding set Φi and δi = δk, node nk can be safely removed from Φi without changing δi. The link ni → nk is said to be “redundant”.
Lemma 2.5 says if the shortest distances from node ni and nk to the destination are the same, the shortest distance δi via forwarding candidate set Φi (where nk ∈ Φi) is the same as the shortest distance via forwarding set Φi − nk. That is, the shortest distance from node ni to the destination does not change no matter whether it uses nk in its forwarding candidate set or not.
Lemma 2.6 If the shortest distances from the neighbors of a node ni to a destination are δ1 ≤ δ2 ≤…≤ δk, Φi is always of the form Φi = 〈n1, n2, …, nr〉, for some r ∈ {1, 2, …, k}.
By Lemma 2.6, for a node ni, the optimal forwarding candidate set Φi is a subset of ni's neighbors with the shortest distances to the destination. That is, given a set of neighbors with distances δ1 ≤ δ2 ≤…≤ δk, the best forwarding candidate set Φi is always one of 〈n1〉, 〈n1, n2〉, 〈n1, n2, n3〉, …, 〈n1, n2, …, nr〉. As a result, any forwarding candidate set with gaps between the neighbors, such as 〈n2, n3〉 or 〈n1, n4〉, can never yield the shortest distance to the destination. This property is the key factor that reduces the complexity of the shortest anypath algorithms from exponential to polynomial time. For k neighbors, we do not have to test every one of the 2^k − 1 possible forwarding candidate sets. Instead, we only need to check at most k forwarding candidate sets.
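A small sketch of how Lemma 2.6 prunes the search: only the r-prefixes of the neighbors sorted by distance are evaluated, k sets instead of 2^k − 1. The function name and the example link probabilities and distances are made up for illustration, and pij > 0 is assumed.

```python
def best_forwarding_set(neighbors):
    """Pick the optimal prefix of the neighbors sorted by shortest distance,
    per Lemma 2.6.  neighbors: list of (pij, delta_j) pairs.
    Returns (best cost, size r of the best prefix set)."""
    neighbors = sorted(neighbors, key=lambda nd: nd[1])  # ascending delta_j
    best_cost, best_r = float("inf"), 0
    miss, weighted = 1.0, 0.0
    for r, (p, d) in enumerate(neighbors, start=1):
        weighted += p * miss * d          # contribution of the r-th candidate
        miss *= (1.0 - p)
        cost = (1.0 + weighted) / (1.0 - miss)  # anypath cost over the r-prefix
        if cost < best_cost:
            best_cost, best_r = cost, r
    return best_cost, best_r

# Made-up example: a lossy link to a close neighbor plus a reliable link to a
# farther one; using both candidates beats using the closest one alone.
print(best_forwarding_set([(0.9, 3.0), (0.2, 1.0)]))
```

Only two candidate sets are evaluated here, 〈n1〉 and 〈n1, n2〉, rather than all three nonempty subsets; for larger neighborhoods the savings grow exponentially.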
Lemma 2.7 Assume for a node ni that Φi = 〈n1, n2, …, nr〉 with shortest distances δ1 ≤ δ2 ≤…≤ δr. If Di(j) is the distance from ni via forwarding candidate set 〈n1, n2, …, nj〉 (1 ≤ j ≤ r), we always have Di(1) ≥ Di(2) ≥ … ≥ Di(r) = δi.
Lemma 2.7 explains another important property necessary for the shortest anypath algorithm to converge. Assume now that the best forwarding set is Φi = 〈n1, n2, …, nr〉 with distances δ1 ≤ δ2 ≤…≤ δr; then the distance Di monotonically decreases as the forwarding candidate sets 〈n1〉, 〈n1, n2〉, …, 〈n1, n2, …, nr〉 are successively used.
2.3.4 Dijkstra-Based Algorithm
We now introduce the Dijkstra-based algorithm, the shortest anypath first algorithm, proposed in Laufer and Kleinrock (2008b). Given a graph G = (V, E), where V is the node set and E is the edge set, the algorithm calculates the shortest anypaths from all nodes to a destination node nd. For every node ni ∈ V, the algorithm keeps an estimate Di, which is an upper bound on the distance of the shortest anypath from ni to nd. In addition, the algorithm keeps a forwarding candidate set Fi for every node, which stores the set of nodes used as the next hops to reach nd. Finally, two data structures, S and Q, are used. The set S stores the nodes for which a shortest anypath has already been determined. Each node ni ∈ V − S, for which a shortest anypath has not yet been found, is stored in a priority queue Q keyed by its Di value.
Algorithm 2.1 consists of |V| rounds, dictated by the number of elements initially in Q. At each round, the EXTRACT-MIN procedure extracts the node with the minimum distance to the destination from Q. Let this node be nj. At this point, nj is settled and inserted into S, since the shortest anypath from nj to the destination is now known. For each incoming edge ni → nj ∈ E, if the distance Di is larger than the distance Dj, node nj is added to the forwarding candidate set Fi and the distance Di is updated.
Algorithm 2.1 Shortest Anypath First
Input: G, nd
for each node ni ∈ V do
    Di ← ∞
    Fi ← ∅
end for
Dd ← 0
S ← ∅
Q ← V
while Q ≠ ∅ do
    nj ← EXTRACT-MIN(Q)
    S ← S ∪ {nj}
    for each edge ni → nj ∈ E do
        J ← Fi ∪ {nj}
        if Di > Dj then
            Di ← diJ + DJ
            Fi ← J
        end if
    end for
end while
The optimality of Algorithm 2.1 is proved in Laufer and Kleinrock (2008b). The complexity of this algorithm is O(|V|log|V| + |E|) which is the same complexity as Dijkstra's algorithm (Cormen et al. 2001).
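Below is a minimal runnable sketch of Algorithm 2.1 in Python. It substitutes a linear scan for the Fibonacci-heap EXTRACT-MIN, which changes the constant factors but not the result, and assumes all pij > 0. The graph encoding links[(ni, nj)] = pij and the example network are conventions of this sketch, not part of the original algorithm.

```python
import math

def shortest_anypath_first(nodes, links, dest):
    """Shortest anypath from every node to dest (Algorithm 2.1 sketch).

    links[(ni, nj)] = pij, the delivery probability of directed link ni -> nj.
    Returns (D, F): distance estimates and forwarding candidate sets.
    """
    D = {n: math.inf for n in nodes}
    F = {n: [] for n in nodes}
    D[dest] = 0.0
    Q = set(nodes)
    while Q:
        nj = min(Q, key=lambda n: D[n])     # EXTRACT-MIN by linear scan
        Q.discard(nj)                       # nj is now settled (added to S)
        for (ni, nk), _p in links.items():  # relax each incoming edge ni -> nj
            if nk != nj or ni not in Q:
                continue
            if D[ni] > D[nj]:               # adding nj can only help (Lemma 2.3)
                J = F[ni] + [nj]            # settled order => sorted by D (Lemma 2.6)
                miss, weighted = 1.0, 0.0
                for cand in J:
                    p = links[(ni, cand)]
                    weighted += p * miss * D[cand]  # cand relays if all before it miss
                    miss *= (1.0 - p)
                D[ni] = (1.0 + weighted) / (1.0 - miss)  # diJ + DJ
                F[ni] = J
    return D, F

# Hypothetical 4-node example: two lossy links from s, perfect relay links to d.
nodes = ["s", "a", "b", "d"]
links = {("s", "a"): 0.5, ("s", "b"): 0.5, ("a", "d"): 1.0, ("b", "d"): 1.0}
D, F = shortest_anypath_first(nodes, links, "d")
print(D["s"])   # 7/3, better than the best single-path ETX of 3
```

The linear-scan EXTRACT-MIN makes this O(|V|^2 + |E||V|) rather than the O(|V|log|V| + |E|) quoted above; a heap-based queue would be needed to match that bound.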
2.3.5 Bellman–Ford-Based Algorithm
Algorithm 2.1 solves the shortest anypath problem in polynomial time, but it requires that each node know global information about the network (i.e. the network topology and link states). Next, we introduce a Bellman–Ford-based algorithm (Laufer et al. 2010), which solves the shortest anypath problem in a distributed way. By applying Lemma 2.6 presented in Section 2.3.3, this algorithm reduces the complexity of the Bellman–Ford-based anypath generalization proposed in Dubois-Ferriere et al. (in press) from exponential to polynomial time. As in the regular Bellman–Ford algorithm for single-path routing (Cormen et al. 2001), this algorithm takes at most |V| − 1 rounds. At each round, every node ni stores its neighbors in a priority queue Q keyed by their costs. The algorithm then checks each neighbor nj in ascending order of cost Dj and verifies whether Di is larger than Dj. If so, node ni includes nj in its forwarding candidate set and updates its distance accordingly. Intuitively, this algorithm works in the same expanding-ring fashion as the regular Bellman–Ford algorithm, settling at each round the costs of the nodes one hop further away from the destination. Since an anypath cannot be longer than |V| − 1 hops, the algorithm converges after at most |V| − 1 iterations.
The running time of Algorithm 2.2 depends on how Q is implemented. Assuming a Fibonacci heap, each EXTRACT-MIN operation takes at most log(|V|) time. Within one round, the inner while loop runs once for each link, for a total aggregated time of O(|E|log|V|) per round. The total complexity of this algorithm is then O(|V||E|log|V|), which is only a factor of log|V| higher than that of the regular Bellman–Ford algorithm.

Algorithm 2.2 Bellman–Ford-Based Shortest Anypath Algorithm

Input: G, nd
for each node ni ∈ V do
    Di ← ∞
    Fi ← ∅
end for
Dd ← 0
for k ← 1 to |V| − 1 do
    for each node ni ∈ V do
        J ← ∅
        Q ← GET-NEIGHBORS(ni)
        while Q ≠ ∅ do
            nj ← EXTRACT-MIN(Q)
            J ← J ∪ {nj}
            if Di > Dj then
                Di ← diJ + DJ
                Fi ← J
            end if
        end while
    end for
end for
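A minimal runnable sketch of Algorithm 2.2 follows. It updates Di only when a candidate set improves it, which reaches the same fixed point (by Lemma 2.3, adding a closer neighbor never increases the cost), and it assumes all pij > 0. The graph encoding links[(ni, nj)] = pij, the function name, and the example network are hypothetical conventions of this sketch.

```python
import math

def bellman_ford_anypath(nodes, links, dest):
    """Distributed-style shortest anypath (Algorithm 2.2 sketch).

    Each round, every node re-examines its neighbors in ascending order of
    their current cost (Lemma 2.6) and keeps the cheapest prefix set found.
    """
    D = {n: math.inf for n in nodes}
    F = {n: [] for n in nodes}
    D[dest] = 0.0
    for _ in range(len(nodes) - 1):          # at most |V| - 1 rounds
        for ni in nodes:
            if ni == dest:
                continue
            nbrs = sorted((nj for (a, nj) in links if a == ni),
                          key=lambda n: D[n])   # GET-NEIGHBORS, keyed by cost
            J, miss, weighted = [], 1.0, 0.0
            for nj in nbrs:
                if math.isinf(D[nj]):
                    break                    # rest are unreachable (sorted order)
                J.append(nj)                 # J <- J ∪ {nj}
                p = links[(ni, nj)]
                weighted += p * miss * D[nj]
                miss *= (1.0 - p)
                cost = (1.0 + weighted) / (1.0 - miss)   # diJ + DJ
                if cost < D[ni]:             # update only on improvement
                    D[ni], F[ni] = cost, list(J)
    return D, F

# Hypothetical 4-node example: two lossy links from s, perfect relay links to d.
nodes = ["s", "a", "b", "d"]
links = {("s", "a"): 0.5, ("s", "b"): 0.5, ("a", "d"): 1.0, ("b", "d"): 1.0}
D, F = bellman_ford_anypath(nodes, links, "d")
print(D["s"])   # 7/3, matching the Dijkstra-based result
```

The expanding-ring behavior is visible here: after round one only a and b (one hop from d) have settled costs; s obtains its final cost in round two.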