Congestion Avoidance 35
dividing up the available bandwidth, CBWFQ did not give any specific regard to the delay
or jitter being introduced by queuing packets.
The LLQ mechanism is CBWFQ with a single PQ, which receives strict scheduling
priority. To go back to airline analogies, this is the equivalent of preboarding courtesies that
are often offered to persons with special needs or those traveling with small children.
Although these people may not be in first class or elite frequent fliers, they are moved
directly to the front of the line and put on the plane first because they have special
needs. In the case of VoIP traffic, it may not be the most important traffic on your network,
but it has very specific requirements for delay and jitter and, therefore, must be moved to
the front of the line for transmission.
Catalyst switches use classification to queue frames appropriately for transmission.
Although Catalyst switches support the Cisco IOS features WFQ, CBWFQ, and LLQ only
on WAN interfaces, Ethernet interfaces use similar forms of queuing that vary in
configuration and behavior.
Scheduling
Scheduling allows resource sharing, specifically bandwidth, among classes of traffic or
queues; this scheduling becomes more important as congestion increases. On Cisco
switching platforms, frames are scheduled in several ways. The most commonly discussed
is weighted round-robin (WRR). Because the functionality and configuration of WRR on
each specific platform is discussed in the chapter for that platform, this chapter does not
attempt to explain the functionality of WRR. Instead, this section previews what is to come
in future chapters. For additional information about WRR and its implementation on
Catalyst switches, refer to Chapter 6, “QoS Features Available on the Catalyst 2950 and
3550 Family of Switches,” Chapter 7, “QoS Features Available on the Catalyst 4000 IOS
Family of Switches and the Catalyst G-L3 Family of Switches,” and Chapter 8, “QoS
Support on the Catalyst 6500.”
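Although the details of WRR are platform specific and left to those later chapters, the general idea behind the mechanism can be sketched briefly. The following Python model is a conceptual illustration only, not how any Catalyst platform actually implements WRR (real implementations typically schedule in bytes or frames per platform-specific rules); the queue contents and weights are hypothetical:

```python
def wrr_round(queues, weights):
    """One pass of weighted round-robin: dequeue up to `weight` frames
    from each queue in turn, so higher-weight queues receive a larger
    share of the link during congestion."""
    sent = []
    for queue, weight in zip(queues, weights):
        for _ in range(weight):
            if queue:                 # skip empty queues (work-conserving)
                sent.append(queue.pop(0))
    return sent

# Hypothetical example: queue 0 has weight 3, queue 1 has weight 1
queues = [["v1", "v2", "v3", "v4"], ["d1", "d2"]]
one_round = wrr_round(queues, [3, 1])
# one_round is ['v1', 'v2', 'v3', 'd1']: queue 0 sends three frames
# for every one frame from queue 1
```

Note that a low-weight queue is still serviced every round; unlike strict priority queuing, WRR cannot starve a queue, only give it a smaller share.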
Congestion Avoidance
In contrast to congestion management, which deals with congestion that already exists,
congestion avoidance mechanisms are designed to prevent interfaces or queues from
becoming congested in the first place. The actual methods used to avoid congestion are
discussed shortly, but keep in mind that the entire concept of congestion avoidance is based
on the presence of TCP traffic.
To understand why the benefits of congestion avoidance are only realized when the
majority of network traffic is TCP, it is important to understand the fundamental behavior
of TCP. This overview is intended to provide just such an understanding, and is not intended
to be a comprehensive discussion of TCP’s behavior.
36 Chapter 2: End-to-End QoS: Quality of Service at Layer 3 and Layer 2
Unlike UDP, which has no acknowledgment mechanism, TCP requires the receiver to
acknowledge each packet it receives by sending a message back to the sender. If no
acknowledgment is received within a period of time, the sender assumes that data is being
sent too rapidly, reduces the TCP window size, and then resends the unacknowledged
packet(s). Gradually, the sender increases the rate at which packets are sent by increasing
the window size again.
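This increase/decrease dynamic can be modeled very roughly. The sketch below captures only the behavior described above; real TCP adds slow start, fast retransmit, and other refinements, and the window values here are illustrative:

```python
def next_window(cwnd, ack_received):
    """Toy model of TCP's reaction to acknowledgments: grow the window
    gradually while acks arrive; on a timeout, assume data was sent too
    fast and shrink the window before retransmitting."""
    if ack_received:
        return cwnd + 1           # gradual increase
    return max(cwnd // 2, 1)      # back off on presumed congestion

cwnd = 8
for ack in [True, True, False, True]:
    cwnd = next_window(cwnd, ack)
# window evolves 8 -> 9 -> 10 -> 5 -> 6: steady growth, a sharp cut
# when an acknowledgment is missed, then growth resumes
```

The key point for congestion avoidance is the asymmetry: a single lost packet causes a large reduction in sending rate, which is exactly the lever that RED-style mechanisms exploit.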
Without congestion avoidance, when a particular queue becomes completely full,
something called tail drop occurs. Tail drop is the term used to describe what happens when
the (n + 1) packet arrives at a queue that is only capable of holding n packets. That packet
is dropped and, consequently, no acknowledgment is sent to the sender of that packet. As
previously mentioned, this causes a reduction in the window size and a retransmission.
Tail drop presents several problems, however. Most troublesome is the fact that tail drop
does not use any intelligence to determine which packet(s) should be dropped; rather the
(n + 1) packet is just dropped, regardless of what type of packet it is. This could mean that
a packet of the most important traffic type in your network is dropped, whereas a packet
from the least important traffic type is transmitted. Another problem associated with tail
drop is global synchronization, which reduces the overall throughput of a link with many
flows (but is beyond the scope of this discussion).
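Tail drop itself is simple to sketch: a queue that can hold n packets drops arrival n + 1 no matter what it is. This is a toy model (real queue depths are per-interface and platform dependent):

```python
from collections import deque

def enqueue_tail_drop(queue, packet, capacity):
    """Admit a packet only if the queue is not full; otherwise tail-drop
    it. Note that the drop decision ignores what kind of packet it is."""
    if len(queue) >= capacity:
        return False              # dropped -- the sender never sees an ACK
    queue.append(packet)
    return True

q = deque()
results = [enqueue_tail_drop(q, f"pkt{i}", 3) for i in range(4)]
# results is [True, True, True, False]: the fourth packet is tail-dropped
# even if it happens to be the most important traffic on the network
```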
All Catalyst switches experience tail drop when transmit queues become full. However,
several Catalyst switches support configurable tail-drop thresholds on their queues, which
helps protect higher-priority traffic as queues approach a full condition. Chapters 6 and 8 discuss
tail-drop thresholds and configuration options available on the Catalyst 3550 Family and
Catalyst 6500 Family of switches, respectively.
Random Early Detection (RED)
The problems of tail drop and global synchronization can both be addressed with
congestion avoidance. Congestion avoidance is often implemented through active queue
management, the best-known form of which is Random Early Detection (RED). The Introduction section of RFC 2309
defines the need for active queue management as follows:
The traditional technique for managing router queue lengths is to set a maximum length (in terms of
packets) for each queue, accept packets for the queue until the maximum length is reached, then reject
(drop) subsequent incoming packets until the queue decreases because a packet from the queue has been
transmitted. This technique is known as “tail drop”, since the packet that arrived most recently (i.e., the one
on the tail of the queue) is dropped when the queue is full. This method has served the Internet well for
years, but it has two important drawbacks.
1. Lock-Out
In some situations tail drop allows a single connection or a few flows to monopolize queue space,
preventing other connections from getting room in the queue. This “lock-out” phenomenon is often the
result of synchronization or other timing effects.
2. Full Queues
The tail drop discipline allows queues to maintain a full (or, almost full) status for long periods of time,
since tail drop signals congestion (via a packet drop) only when the queue has become full. It is important
to reduce the steady-state queue size, and this is perhaps queue management’s most important goal.
The naive assumption might be that there is a simple tradeoff between delay and throughput, and that the
recommendation that queues be maintained in a “non-full” state essentially translates to a recommendation
that low end-to-end delay is more important than high throughput. However, this does not take into account
the critical role that packet bursts play in Internet performance. Even though TCP constrains a flow’s
window size, packets often arrive at routers in bursts [Leland94]. If the queue is full or almost full, an
arriving burst will cause multiple packets to be dropped. This can result in a global synchronization of flows
throttling back, followed by a sustained period of lowered link utilization, reducing overall throughput.
The point of buffering in the network is to absorb data bursts and to transmit them during the (hopefully)
ensuing bursts of silence. This is essential to permit the transmission of bursty data. It should be clear why
we would like to have normally-small queues in routers: we want to have queue capacity to absorb the
bursts. The counter-intuitive result is that maintaining normally-small queues can result in higher
throughput as well as lower end-to-end delay. In short, queue limits should not reflect the steady state
queues we want maintained in the network; instead, they should reflect the size of bursts we need to absorb.
Cisco IOS Software devices implement RED as Weighted RED (WRED), which serves the
same purpose as RED, but does so with preferential treatment given to packets based on
their IP precedence or DSCP value. If it is not desirable in your network to give different
treatment to packets of different IP precedence or DSCP values, it is possible to modify the
configuration so that all packets are treated the same.
The basic configuration for WRED in a Cisco IOS router is fairly simple:
Router(config)#interface Serial 6/0
Router(config-if)#random-detect
With only that configuration, IP precedence-based WRED is enabled, and all the defaults
are accepted. The configuration can be verified with the show queueing interface
command as demonstrated in Example 2-1.
Example 2-1 Verifying the WRED Configuration on a Cisco IOS Router Interface
Router#show queueing interface serial 6/0
Interface Serial6/0 queueing strategy: random early detection (WRED)
Exp-weight-constant: 9 (1/512)
Mean queue depth: 0
class Random drop Tail drop Minimum Maximum Mark
pkts/bytes pkts/bytes thresh thresh prob
0 0/0 0/0 20 40 1/10
1 0/0 0/0 22 40 1/10
2 0/0 0/0 24 40 1/10
3 0/0 0/0 26 40 1/10
4 0/0 0/0 28 40 1/10
5 0/0 0/0 31 40 1/10
6 0/0 0/0 33 40 1/10
7 0/0 0/0 35 40 1/10
rsvp 0/0 0/0 37 40 1/10
There are several values shown in the show queueing output that have not yet been
explained.
Minimum threshold—When the average queue depth exceeds the minimum
threshold, packets begin to be discarded. The rate at which packets are dropped
increases linearly until the average queue depth reaches the maximum threshold.
Maximum threshold—When the average queue depth exceeds the maximum
threshold, all packets are dropped.
Mark probability denominator—This value is the denominator of the fraction of
packets dropped when the average queue depth is at the maximum threshold. In the
preceding example, the mark probability denominator of 10 indicates that, for every
IP precedence level, 1 of every 10 packets is dropped when the average queue depth
equals the maximum threshold.
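Putting these values together, the WRED drop decision can be sketched as follows. The exponential weighting constant of 9 (1/512) from the earlier output drives the average-queue-depth calculation, and the thresholds and mark probability denominator shape the drop curve. This is a simplified model of the behavior described in the text, not Cisco's actual implementation:

```python
def average_depth(old_avg, instantaneous, exp_weight=9):
    """Exponentially weighted moving average of queue depth; with an
    exp_weight of 9, each new sample contributes 1/512 of its value."""
    w = 1.0 / (2 ** exp_weight)
    return old_avg * (1 - w) + instantaneous * w

def wred_drop_probability(avg, min_th=20, max_th=40, mark_prob_denom=10):
    """Linear drop ramp: no drops below min_th, everything dropped above
    max_th, and at most 1/mark_prob_denom of packets dropped at max_th."""
    if avg < min_th:
        return 0.0
    if avg > max_th:
        return 1.0                # beyond the maximum: effectively tail drop
    return (avg - min_th) / (max_th - min_th) / mark_prob_denom

print(wred_drop_probability(10))   # 0.0  (below the minimum threshold)
print(wred_drop_probability(30))   # 0.05 (halfway up the ramp)
print(wred_drop_probability(40))   # 0.1  (1 in 10 at the maximum threshold)
print(wred_drop_probability(41))   # 1.0  (above the maximum threshold)
```

Because the average (not the instantaneous) depth drives the decision, short bursts pass through untouched while sustained congestion triggers progressively more drops.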
In the show queueing output, notice that the minimum threshold is different for each IP
precedence level. It was previously mentioned that it is possible to modify the configuration
so that all IP precedence or DSCP values are treated the same. Example 2-2 shows one way
to do this.
Example 2-2 Configuring WRED Parameters on a Cisco IOS Router Interface
Router(config)#interface s6/0
Router(config-if)#random-detect precedence 1 ?
<1-4096> minimum threshold (number of packets)
Router(config-if)#random-detect precedence 1 20 ?
<1-4096> maximum threshold (number of packets)
Router(config-if)#random-detect precedence 1 20 40 ?
<1-65535> mark probability denominator
<cr>
Router(config-if)#random-detect precedence 1 20 40 10
Router(config-if)#random-detect precedence 2 20 40 10
Router(config-if)#random-detect precedence 3 20 40 10
Router(config-if)#random-detect precedence 4 20 40 10
Router(config-if)#random-detect precedence 5 20 40 10
Router(config-if)#random-detect precedence 6 20 40 10
Router(config-if)#random-detect precedence 7 20 40 10
Router(config-if)#exit
Router(config)#exit
Router#show queueing interface s6/0
Interface Serial6/0 queueing strategy: random early detection (WRED)
Exp-weight-constant: 9 (1/512)
Mean queue depth: 0
class Random drop Tail drop Minimum Maximum Mark
pkts/bytes pkts/bytes thresh thresh prob
0 0/0 0/0 20 40 1/10
1 0/0 0/0 20 40 1/10
2 0/0 0/0 20 40 1/10
3 0/0 0/0 20 40 1/10
4 0/0 0/0 20 40 1/10
5 0/0 0/0 20 40 1/10
6 0/0 0/0 20 40 1/10
7 0/0 0/0 20 40 1/10
rsvp 0/0 0/0 37 40 1/10
As you can see from the show queueing output, the minimum threshold, maximum
threshold, and mark probability denominator are now the same for all IP precedence values.
This is not necessarily recommended; instead, it is shown to illustrate that the treatment of
packets is entirely user configurable.
It is also possible to use DSCP-based WRED, and the configuration options for that differ
slightly, as demonstrated in Example 2-3.
Example 2-3 Configuring DSCP-based WRED on a Cisco IOS Router Interface
Router(config)#interface s6/0
Router(config-if)#random-detect ?
dscp-based Enable dscp based WRED on an interface
prec-based Enable prec based WRED on an interface
<cr>
Router(config-if)#random-detect dscp-based
Router(config-if)#random-detect ?
dscp parameters for each dscp value
dscp-based Enable dscp based WRED on an interface
exponential-weighting-constant weight for mean queue depth calculation
flow enable flow based WRED
prec-based Enable prec based WRED on an interface
precedence parameters for each precedence value
<cr>
Router(config-if)#random-detect dscp ?
<0-63> Differentiated services codepoint value
af11 Match packets with AF11 dscp (001010)
af12 Match packets with AF12 dscp (001100)
af13 Match packets with AF13 dscp (001110)
af21 Match packets with AF21 dscp (010010)
af22 Match packets with AF22 dscp (010100)
af23 Match packets with AF23 dscp (010110)
af31 Match packets with AF31 dscp (011010)
af32 Match packets with AF32 dscp (011100)
af33 Match packets with AF33 dscp (011110)
af41 Match packets with AF41 dscp (100010)
af42 Match packets with AF42 dscp (100100)
af43 Match packets with AF43 dscp (100110)
cs1 Match packets with CS1(precedence 1) dscp (001000)
cs2 Match packets with CS2(precedence 2) dscp (010000)
cs3 Match packets with CS3(precedence 3) dscp (011000)
cs4 Match packets with CS4(precedence 4) dscp (100000)
cs5 Match packets with CS5(precedence 5) dscp (101000)