like this, it is difficult to get a true idea of how much bandwidth is required. You could upgrade to a 1-Gbps link, of course, but that only confirms how quickly it becomes expensive to keep adding bandwidth.
Still another problem helps make the case for QoS in the LAN: interactive traffic, such as voice and video conferencing. With most data traffic, there is no concern about jitter and little concern about delay, but that isn't the case with voice and video conferencing traffic. These real-time applications have requirements for delay and jitter that adding more bandwidth simply does not address. Even with abundant bandwidth, the packets of a voice flow can still experience jitter and delay, degrading call quality. The only way to truly ensure the delay and jitter characteristics of these flows is through the use of QoS.
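To see why raw bandwidth alone does not bound delay and jitter, consider what happens when a small voice packet arrives at an interface behind a burst of large data packets: its wait depends entirely on how many packets happen to be queued ahead of it, and that variation is jitter. The short Python sketch below works through the arithmetic; the link speeds, packet sizes, and burst depth are illustrative assumptions, not figures from this chapter.

# Illustrative arithmetic only: how queuing behind bulk-data packets
# produces variable delay (jitter) for voice, regardless of link speed.
# All sizes and speeds below are assumed values for the example.

LINKS_BPS = {
    "T1 (1.536 Mbps)": 1_536_000,
    "100 Mbps": 100_000_000,
}
DATA_PKT_BYTES = 1500    # typical bulk-data packet (assumption)
VOICE_PKT_BYTES = 60     # small voice packet (assumption)
MAX_BURST_PKTS = 40      # data packets that may be queued ahead (assumption)

def tx_time_ms(size_bytes: int, rate_bps: int) -> float:
    """Serialization time for one packet, in milliseconds."""
    return size_bytes * 8 / rate_bps * 1000

for name, rate in LINKS_BPS.items():
    best = tx_time_ms(VOICE_PKT_BYTES, rate)                         # empty queue
    worst = best + MAX_BURST_PKTS * tx_time_ms(DATA_PKT_BYTES, rate)
    print(f"{name:>16}: voice delay {best:.2f}-{worst:.2f} ms "
          f"(jitter of roughly {worst - best:.2f} ms)")

Even on the faster link, the voice packet's delay varies with the queue depth at every hop; a queuing mechanism that services voice ahead of data is what keeps that variation bounded, not the link speed itself.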
Cisco AVVID
As the incentives to migrate away from separate networks for data, voice, and video in favor of a single IP infrastructure grew, Cisco developed the Architecture for Voice, Video, and Integrated Data (AVVID), an end-to-end enterprise architecture for deploying converged solutions. These solutions enable networks to migrate to a pure IP infrastructure; Cisco AVVID solutions include the following:
IP telephony
IP video conferencing
MxU (that is, multi-tenant, hospitality, and so forth)
Storage networking
Virtual private networks
Content networking
Enterprise mobility
IP contact center
The idea behind the AVVID architecture is that an enterprise environment can’t possibly
keep up with every emerging technology and adjust quickly to the individual changes in
specific application deployments. With the AVVID architecture, Cisco provides a
foundation of network engineering that allows an enterprise environment to quickly adapt
to changes in all of these areas. In addition to the physical layer, the AVVID architecture
also consists of the intelligent network services (such as QoS) that are necessary to
transform a traditional data network into an advanced e-business infrastructure that
provides customers with a competitive advantage.
AVVID is not a single mechanism or application; instead, it is an overall methodology that
enables customers to build a converged network and adapt quickly to the ever-changing
demands placed on that network. The requirements for IP-based voice and video, for example, may differ from those of the next x-over-IP application.
QoS in the AVVID Environment
The foundation for the AVVID architecture is the assumption that all services (including
VoIP) use a common infrastructure. The network requirements of VoIP traffic differ from
those of a regular data flow (such as FTP). An FTP flow, for instance, requires a large
amount of bandwidth, is tolerant of delay and packet loss, and couldn't care less about jitter. Conversely, VoIP takes a relatively tiny amount of bandwidth, is very sensitive to packet loss, and requires low delay and jitter. If you treat these two flows the same on your network, neither is likely to get ideal service, and the FTP traffic could ultimately dominate the link, causing poor VoIP call quality.
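To put numbers on "a relatively tiny amount of bandwidth," the following Python sketch computes per-call VoIP bandwidth from the codec payload size, the packetization interval, and the RTP/UDP/IP overhead. The codec figures and the 6-byte Layer 2 overhead are common planning assumptions used here for illustration; they are not values quoted in this chapter.

# Per-call VoIP bandwidth from first principles (planning-style arithmetic).
# Codec payload sizes, the 20-ms packetization interval, and the 6-byte
# Layer 2 overhead are common planning assumptions, not chapter figures.

RTP_UDP_IP_BYTES = 12 + 8 + 20   # RTP + UDP + IPv4 headers
L2_BYTES = 6                     # e.g., PPP or Frame Relay overhead (assumption)
PACKETS_PER_SEC = 50             # one packet every 20 ms

CODEC_PAYLOAD_BYTES = {
    "G.729 (8 kbps)": 20,
    "G.711 (64 kbps)": 160,
}

for codec, payload in CODEC_PAYLOAD_BYTES.items():
    frame_bytes = payload + RTP_UDP_IP_BYTES + L2_BYTES
    kbps = frame_bytes * 8 * PACKETS_PER_SEC / 1000
    print(f"{codec:>16}: {frame_bytes} bytes per packet -> {kbps:.1f} kbps per call")

A single FTP transfer, by contrast, happily expands to fill whatever bandwidth the link leaves unprotected, which is exactly why the two flows cannot simply be treated alike.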
For this reason, QoS is one of the cornerstones of the Cisco AVVID. Without QoS applied
to the converged links in a network, all packets receive the same treatment and real-time
applications suffer. Many QoS considerations exist in a Cisco AVVID environment, but the primary concerns of every QoS mechanism remain constant: bandwidth, delay, jitter, and packet loss.
VoIP environments have multiple requirements. Assume that there is a T1 link between two
branch offices, and you have determined that you can spare enough of that link for three concurrent
VoIP calls. Figure 1-3 shows the minimum QoS mechanisms that you would configure.
Figure 1-3 Minimum QoS Mechanisms for VoIP in a Specific AVVID Environment
If you modify that assumption only slightly, the required mechanisms change. The changes are not dramatic, but call quality will certainly suffer if they are not made. Assume that the topology is now Frame Relay, rather than a point-to-point connection, with one end at full T1 speed and the other end at 384 kbps. Figure 1-4 shows the additional mechanisms that would be used.
(Figure 1-3 callout text) Figure 1-3 shows a T1 link to the other branch office with three mechanisms applied: Call Admission Control limits the number of calls allowed to use the link to three, so that the allocated VoIP bandwidth is not exceeded; Packet Classification and Marking matches VoIP packets based on source/destination and UDP port range; and Low Latency Queuing provides enough bandwidth for three VoIP calls, with a promise of low latency, low jitter, and low packet loss.
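The three callouts in Figure 1-3 describe roles that can be sketched in a few lines of Python. The toy model below is only an illustration of those roles, not how Cisco IOS implements them; the three-call limit comes from the figure, while the RTP port range (a commonly used default) and all names are assumptions of the sketch.

# Toy illustration of the Figure 1-3 mechanisms: classification by UDP
# port range, Call Admission Control (three calls), and an LLQ-style
# strict-priority queue. Not an IOS implementation.

from collections import deque
from dataclasses import dataclass
from typing import Optional

RTP_PORTS = range(16384, 32768)   # commonly used RTP UDP port range (assumption)
MAX_CALLS = 3                     # CAC limit from the Figure 1-3 example

@dataclass
class Packet:
    udp_dst_port: Optional[int]   # None for non-UDP traffic
    size_bytes: int

def is_voice(pkt: Packet) -> bool:
    """Packet classification: match VoIP on its UDP (RTP) port range."""
    return pkt.udp_dst_port is not None and pkt.udp_dst_port in RTP_PORTS

class BranchLink:
    def __init__(self):
        self.active_calls = 0
        self.voice_q = deque()   # low-latency (priority) queue
        self.data_q = deque()    # everything else

    def admit_call(self) -> bool:
        """Call Admission Control: refuse a fourth concurrent call."""
        if self.active_calls >= MAX_CALLS:
            return False
        self.active_calls += 1
        return True

    def enqueue(self, pkt: Packet) -> None:
        (self.voice_q if is_voice(pkt) else self.data_q).append(pkt)

    def dequeue(self) -> Optional[Packet]:
        """LLQ-style scheduling: voice is always serviced before data."""
        if self.voice_q:
            return self.voice_q.popleft()
        return self.data_q.popleft() if self.data_q else None

In a production network these roles are filled by IOS features (access lists or class maps for classification, Low Latency Queuing for the priority queue, and a call admission control mechanism on the call agent or gateway), but the division of labor is the same.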