Overview of Integrated and Differentiated Services 13
Figure 1-4 QoS Mechanisms for VoIP in a Mixed-Bandwidth Environment
Overview of Integrated and Differentiated Services
QoS standards fit into three major classifications: integrated services, differentiated
services, and best effort.
Integrated services and differentiated services are discussed individually, but best effort
(BE) is not. BE is simply the treatment that packets receive when no predetermined
treatment is specified for them. When there is no QoS at all, for example, all traffic is
treated as BE. BE can also refer to traffic that is not given special (or defined) treatment
by integrated services or differentiated services.
Integrated Services Versus Differentiated Services
Several models have been proposed to provide QoS for the Internet. Each has advantages
and drawbacks with regard to the Internet, but the model that has gained the most general
acceptance recently is the differentiated services model. In an enterprise environment,
however, both models can prove very useful. Note that you can also use these models in
combination to achieve end-to-end QoS, taking advantage of the strengths of each model.
At this time, only the differentiated services model is fully supported on the Catalyst 6500.
Figure 1-4 illustrates these mechanisms at work on a 384-kbps link to another branch office:

- Call Admission Control (CAC) limits the number of calls allowed to use the link to
  three, so that you do not exceed the allocated Voice over IP (VoIP) bandwidth.
- Packet classification and marking match VoIP packets based on source/destination
  address and UDP port range.
- Low Latency Queuing (LLQ) provides enough bandwidth for three VoIP calls, with a
  promise of low latency, low jitter, and low packet loss.
- Frame Relay Traffic Shaping (FRTS) tells this end of the connection to assume it is
  congested (and engage LLQ) at 384 kbps, rather than at T1 speed.
- Compressed Real-Time Protocol (cRTP) reduces the size of RTP (VoIP) headers on
  low-speed links.
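The CAC behavior described above can be sketched in a few lines of Python. This is a hypothetical illustration only (the class and its names are invented for the example, not a Cisco mechanism): a new call is admitted only while fewer than the configured maximum number of calls are active, so the voice traffic never exceeds the bandwidth provisioned for it.

```python
# Hypothetical Call Admission Control (CAC) sketch: admit a new VoIP call
# only while fewer than `max_calls` calls are active, so that the calls
# never exceed the bandwidth allocated to voice on the link.
class CallAdmissionControl:
    def __init__(self, max_calls=3):
        self.max_calls = max_calls      # e.g., three calls fit the VoIP allocation
        self.active_calls = 0

    def request_call(self):
        """Admit the call and return True if capacity remains; else reject."""
        if self.active_calls < self.max_calls:
            self.active_calls += 1
            return True
        return False                    # call is rejected (busy signal, reroute, ...)

    def end_call(self):
        if self.active_calls > 0:
            self.active_calls -= 1

cac = CallAdmissionControl(max_calls=3)
results = [cac.request_call() for _ in range(4)]
# The first three calls are admitted; the fourth is rejected
```

The point of CAC is that rejecting the fourth call outright is better than admitting it and degrading the quality of all four.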
14 Chapter 1: Quality of Service: An Overview
Definition of Integrated Services
Integrated services (IntServ) is the name given to QoS signaling. QoS signaling allows an
end station (or network node, such as a router) to communicate with its neighbors to request
specific treatment for a given traffic type. This type of QoS allows for end-to-end QoS in
the sense that the original end station can make a request for special treatment of its packets
through the network, and that request is propagated through every hop in the packet’s path
to the destination. True end-to-end QoS requires the participation of every networking
device along the path (routers, switches, and so forth), and this can be accomplished with
QoS signaling.
In 1994, RFC 1633 first defined the IntServ model. The following text, taken from RFC
1633, provides some insight as to the original intent of IntServ:
We conclude that there is an inescapable requirement for routers to be able to reserve resources, in order to
provide special QoS for specific user packet streams, or “flows”. This in turn requires flow-specific state in
the routers, which represents an important and fundamental change to the Internet model.
As it turns out, the requirement was not as inescapable as the engineers who authored RFC
1633 originally thought, as evidenced by the fact that the Internet still relies almost entirely
on BE delivery for packets.
IntServ Operation
Resource Reservation Protocol (RSVP), defined by RFC 2205, is a resource reservation
setup protocol for use in an IntServ environment. Specifics of operation are covered shortly,
but the general idea behind RSVP is that Bob wants to talk to Steve, who is some number
of network hops away, over an IP video conferencing (IPVC) system. For the IPVC conver-
sation to be of acceptable quality, the conversation needs 384 kbps of bandwidth.
Obviously, the IPVC end stations don’t have any way of knowing whether that amount of
bandwidth is available throughout the entire network, so they can either assume that
bandwidth is available (and run the risk of poor quality if it isn’t) or they can ask for the
bandwidth and see whether the network is able to give it to them. RSVP is the mechanism
that asks for the bandwidth.
The specific functionality is probably backward from what you would guess, in that the
receiver is the one who actually asks for the reservation, not the sender. The sender sends a
Path message to the receiver, which collects information about the QoS capabilities of the
intermediate nodes. The receiver then processes the Path information and generates a
Reservation (Resv) request, which is sent upstream to make the actual request to reserve
resources. When the sender gets this Resv, the sender begins to send data. It is important to
note that RSVP is a unidirectional process, so a bidirectional flow (such as an IPVC)
requires this process to happen once for each sender. Figure 1-5 shows a very basic example
of the resource reservation process (assuming a unidirectional flow from Bob to Steve).
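The Path/Resv exchange can be sketched as a short simulation. This is an illustrative sketch only, not real RSVP (the hop names and bandwidth figures are invented for the example): the Path message travels from sender to receiver collecting what each hop can offer, and the receiver's Resv then travels back upstream, succeeding only if every hop can commit the requested rate.

```python
# Illustrative sketch (not real RSVP) of the receiver-initiated reservation:
# the Path message records each hop's available bandwidth on the way to the
# receiver; the receiver's Resv then asks each hop, upstream, to commit it.

def send_path(hops):
    """Sender -> receiver: collect available bandwidth (kbps) at each hop."""
    return [(name, available) for name, available in hops]

def send_resv(path_info, requested_kbps):
    """Receiver -> sender: reserve at every hop, or fail at the bottleneck."""
    for name, available in reversed(path_info):   # Resv travels upstream
        if available < requested_kbps:
            return False, name                    # reservation rejected here
    return True, None

# Hypothetical path from Bob to Steve (bandwidth numbers are made up)
hops = [("Switch A", 1000), ("Router A", 512), ("Router B", 512), ("Switch B", 1000)]
ok, bottleneck = send_resv(send_path(hops), requested_kbps=384)
# ok is True: every hop can commit 384 kbps, so Bob may begin sending data
```

A second reservation in the opposite direction would be needed for Steve's half of the video conference, since RSVP reservations are unidirectional.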
Figure 1-5 Path and Resv Messages for a Unidirectional Flow from Bob to Steve
The figure shows the path Bob – Switch A – Router A – Router B – Switch B – Steve. Bob's
Path message travels hop by hop to Steve, where it is processed and a Resv message is
generated. When the Resv message is received back at Bob, indicating that resources are
reserved, the data flow from Bob begins.
The other major point to note about RSVP is that RSVP doesn’t actually manage the reser-
vations of resources. Instead, RSVP works with existing mechanisms, such as Weighted
Fair Queuing, to request that those existing mechanisms reserve the resources.
Although RSVP has some distinct advantages over BE and, in some cases, over differen-
tiated services, RSVP deployments for end-to-end QoS today are predominantly limited
to small video conferencing installations. That said, RSVP is making a strong comeback,
and some very interesting new things (beyond the scope of this book) are on the horizon
for RSVP. If you're interested in a little light reading on the subject, have a look at
RFCs 3175, 3209, and 3210.
Definition of DiffServ
To define differentiated services (DiffServ), we’ll defer to the experts at the IETF. The
following excerpt is from the “Abstract” section of RFC 2475:
This document defines an architecture for implementing scalable service differentiation in the Internet. This
architecture achieves scalability by aggregating traffic classification state which is conveyed by means of
IP-layer packet marking using the DS field [DSFIELD]. Packets are classified and marked to receive a
particular per-hop forwarding behavior on nodes along their path. Sophisticated classification, marking,
policing, and shaping operations need only be implemented at network boundaries or hosts. Network
resources are allocated to traffic streams by service provisioning policies which govern how traffic is
marked and conditioned upon entry to a differentiated services-capable network, and how that traffic is
forwarded within that network. A wide variety of services can be implemented on top of these building
blocks.
To make that definition a little less verbose: The differentiated services architecture is
designed to provide different services to different traffic types in a scalable way. To
deliver differentiated service, it must be possible to tell packets of one type from packets
of another, so techniques known as packet classification and marking are used. After
packets of different types have been marked differently, they can be treated differently,
based on that marking, at each hop throughout the network, without the need to perform
additional complex classification and marking.
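That classify-once, act-everywhere idea can be sketched in Python. This is a hypothetical illustration (the packet fields and the mapping are invented for the example, though the DSCP code points shown, EF = 46 and AF21 = 18, are standard values): the edge device does the expensive inspection once and stamps a DSCP value; every later hop makes a cheap decision from the DSCP alone.

```python
# Sketch of the DiffServ model: complex classification happens once at the
# network edge, where packets are marked with a DSCP value; every later hop
# only looks at the DSCP to choose a per-hop behavior. Packet field names
# are invented; the DSCP values (EF = 46, AF21 = 18, 0 = default) are the
# standard code points.
EF, AF21, BEST_EFFORT = 46, 18, 0

def classify_and_mark(packet):
    """Edge device: inspect ports once, then stamp a DSCP value."""
    if packet.get("udp_dport") in range(16384, 32768):   # typical RTP voice range
        packet["dscp"] = EF
    elif packet.get("tcp_dport") == 21:                  # FTP control
        packet["dscp"] = AF21
    else:
        packet["dscp"] = BEST_EFFORT
    return packet

def per_hop_behavior(packet):
    """Core device: no deep inspection, just a cheap DSCP lookup."""
    return {EF: "priority queue", AF21: "policed to 128 kbps"}.get(
        packet["dscp"], "best effort")

voice = classify_and_mark({"udp_dport": 17000})
# Every subsequent hop treats this packet by its DSCP value alone
```

This is what makes the model scalable: core devices never repeat the edge's complex classification work.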
DiffServ Operation
DiffServ is a complicated architecture, with many components. Each of these components
has a different purpose in the network and, therefore, each component operates differently.
The major components of the DiffServ architecture perform the following tasks:

- Packet classification
- Packet marking
- Congestion management
- Congestion avoidance
- Traffic conditioning
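To make one of these components concrete, traffic conditioning is often realized with a token bucket. The following is a minimal sketch under stated assumptions (the class, rates, and burst size are illustrative, not tied to any particular platform): tokens refill at the policed rate, and a packet conforms only if enough tokens remain.

```python
# Minimal token-bucket policer sketch, one common way the traffic
# conditioning component is realized. Rates and burst size here are
# illustrative only.
class TokenBucketPolicer:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0        # refill rate in bytes per second
        self.burst = burst_bytes          # bucket depth in bytes
        self.tokens = float(burst_bytes)  # bucket starts full
        self.last = 0.0

    def conform(self, now, packet_bytes):
        """Return True if the packet conforms; else it is dropped or re-marked."""
        # Refill tokens for the time elapsed, capped at the bucket depth
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

# Police a flow to 128 kbps (16,000 bytes/s) with a 2,000-byte burst
policer = TokenBucketPolicer(rate_bps=128_000, burst_bytes=2000)
first = policer.conform(now=0.0, packet_bytes=1500)   # conforms (burst credit)
second = policer.conform(now=0.0, packet_bytes=1500)  # exceeds remaining tokens
```

After a second of idle time the bucket refills, and the flow can again send up to its burst allowance.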
These mechanisms can be implemented alone or in conjunction with each other. Figure 1-6
shows a possible implementation of these mechanisms in a production network.
Figure 1-6 An Example of the Implementation of Various DiffServ Components
The sections that follow describe the five major components of the DiffServ architecture in
greater detail.
The figure shows a traffic flow from a user, through a Layer 3 switch, across a Frame Relay
WAN (T1 head-end to a 256-kbps tail-end), to an unspecified destination:

- At the Layer 3 switch, traffic is classified using complex criteria and marked with
  various DSCP values; it leaves the switch with the DSCP values set.
- FTP traffic (identified by its DSCP value) is policed to 128 kbps.
- Traffic is classified using the DSCP value and placed into the appropriate egress queue.
- Because of the speed mismatch between the T1 head-end and the 256-kbps tail-end,
  traffic shaping controls the rate at which traffic is sent downstream.
- Traffic arrives with the original DSCP markings, which WRED uses to perform
  congestion avoidance.
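The WRED behavior mentioned in the figure can be sketched as follows. This is an illustrative model only (the threshold numbers and profile table are invented for the example): as the average queue depth grows, packets are dropped probabilistically before the queue fills, and packets with a more important DSCP marking get deeper thresholds, so they are dropped later.

```python
# Sketch of the WRED idea: drop probability ramps up between a minimum and
# maximum queue-depth threshold, with per-DSCP thresholds so that more
# important markings are dropped later. Threshold values are invented.
import random

PROFILES = {
    46: (35, 40),   # EF: start dropping only when the queue is nearly full
    0:  (10, 30),   # best effort: start dropping early
}

def wred_drop_probability(dscp, avg_queue_depth, max_prob=0.1):
    min_th, max_th = PROFILES.get(dscp, PROFILES[0])
    if avg_queue_depth < min_th:
        return 0.0                      # below the minimum threshold: no drops
    if avg_queue_depth >= max_th:
        return 1.0                      # above the maximum threshold: tail drop
    # Linear ramp between the two thresholds
    return max_prob * (avg_queue_depth - min_th) / (max_th - min_th)

def should_drop(dscp, avg_queue_depth):
    return random.random() < wred_drop_probability(dscp, avg_queue_depth)
```

At the same queue depth, best-effort packets face random early drops while EF-marked voice packets are left untouched, which is exactly the differentiated treatment the DSCP markings were applied to enable.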