Chapter 8: QoS Support on the Catalyst 6500
Chapter 2, “End-to-End QoS: Quality of Service at Layer 3 and Layer 2,” introduces many
of the QoS features and capabilities found on the Catalyst 6500. This chapter
expounds on those concepts and discusses how these features specifically relate to the
Catalyst 6500. The chapter opens with an architectural overview of the Catalyst 6500, and
then discusses the hardware and software requirements necessary to support QoS. The QoS
discussion begins with a quick demonstration of enabling QoS on the platform, immediately
followed by coverage of the features outlined in the following list:
Input Scheduling
Classification and Marking
Mapping
Policing
Congestion Management and Congestion Avoidance
Automatic QoS
This chapter focuses on the campus LAN aspect of QoS for the Catalyst 6500 and addresses
how the various QoS mechanisms function on this Catalyst switch without focusing on the
role of the Multilayer Switch Feature Card (MSFC) or FlexWAN. Numerous configuration
examples are provided, reinforcing the concepts discussed. CatOS versions 6.3, 6.4, and 7.5
and Cisco IOS version 12.1(13)E were used to configure the examples. The command refer-
ences demonstrate how to configure the various QoS capabilities using both CatOS (Hybrid
mode) and Cisco IOS (Native mode).
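For orientation, QoS is disabled by default on the platform and must be enabled globally before any of the mechanisms in this chapter take effect. The following is a minimal sketch of the enabling commands in each mode:

```
! Hybrid mode (CatOS): enable QoS globally
Console> (enable) set qos enable

! Native mode (Cisco IOS): enable QoS globally
Switch(config)# mls qos
```

These commands are revisited in the configuration sections later in the chapter.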
The Catalyst 6500 Family consists of both the Catalyst 6000 and Catalyst 6500. Despite
some architectural differences, the Catalyst 6500 encompasses all features available on the
Catalyst 6000. The following section lists the critical hardware components essential to
QoS operation on the Catalyst 6500.
NOTE The Catalyst 6000 chassis is “end-of-sale.” However, the QoS mechanisms described in this
chapter apply to both the Catalyst 6000 and the 6500 chassis. QoS capabilities depend on
the installed modules.
Catalyst 6500 Architectural Overview
This section introduces the various components relevant to QoS operation on the Catalyst
6500. The section covers hardware resources and terms presented throughout the chapter.
Figure 8-2 shows the QoS architecture of the Catalyst 6500. The ingress and egress ports
depict the queuing architecture found on more recent Gigabit Ethernet ports. The figure
demonstrates the order in which the QoS functions occur and also denotes which switch
components are responsible for the different mechanisms.
Incorporating an MSFC and a FlexWAN line module further enhances the platform’s QoS
support. With these modules, additional QoS features include traffic shaping, Low Latency
Queuing, Class-Based Weighted Fair-Queuing, and complex traffic classification based on
Layer 4 through Layer 7 application recognition. For information about QoS in conjunction
with the MSFC and the FlexWAN module, see Chapter 9, “QoS Support on the Catalyst
6500 MSFC and FlexWAN.”
Figure 8-2 Overview of QoS on the Catalyst 6500 Family Architecture
Both the Catalyst 6000 and the Catalyst 6500 chassis utilize a 32-Gbps bus for communi-
cation with non-fabric-enabled modules. Non-fabric-enabled linecards denote modules
only capable of accessing the 32-Gbps bus architecture available on the Catalyst 6000 and
Catalyst 6500. The 32-Gbps bus is referred to as the data bus or D-bus. The D-bus trans-
ports all frames between the various linecards. All information traversing the D-bus is
viewed by all modules, including the supervisor engine. In addition to the D-bus, the
Catalyst 6500 utilizes two additional buses, the results bus (R-bus) and the control bus or
Ethernet out-of-band channel (EOBC). The R-bus forwards the appropriate rewrite infor-
mation from the supervisor engine to the individual port application-specific integrated
circuits (ASICs). The rewrite data includes the destination MAC address, the egress port
information, and any QoS classification or policing policies applied to the frame. The
class of service (CoS) value derived from these QoS mechanisms may differ from the original
CoS value present when the frame entered the switch. The EOBC provides a conduit for
the supervisor engine to perform system management functions with the various linecards.
[Figure 8-2 annotates the QoS path through the switch: frames arrive on the input port
with 802.1Q, 802.1p, ISL, or no encapsulation. Receive scheduling selects a queue and
threshold (including a strict-priority receive queue) based on the received CoS through a
configurable map if the port is set to trust-cos or trust-ext; the received CoS can be
overwritten if the port is untrusted. The forwarding engine then performs DSCP-based
classification, either behavior aggregate (BA) classification (trust-cos/ipprec/dscp with
an ACL) or multi-field (MF) classification based on port trust state or Layer 2, Layer 3,
or Layer 4 information with an ACL. Policing via ACLs can mark or drop based on byte rate
and burst (token bucket). The rewrite stage updates the ToS field in the IP header and the
802.1p/ISL CoS field. On the output port, transmit scheduling again selects a queue and
threshold based on CoS through a configurable map; each queue has configurable size and
thresholds (some with WRED), a strict-priority transmit queue is serviced first, and WRR
dequeues between the two standard queues. Frames exit with 802.1Q, ISL, or no
encapsulation.]
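The trust states and configurable CoS maps annotated in the figure surface directly as port-level configuration. The following is a hedged sketch; the module/port numbers and CoS values are hypothetical, and the queue types available depend on the installed linecard:

```
! Hybrid mode (CatOS): trust received CoS on port 2/1, and map CoS 4 to
! transmit queue 2, threshold 1 on 2q2t ports
Console> (enable) set port qos 2/1 trust trust-cos
Console> (enable) set qos map 2q2t tx 2 1 cos 4

! Native mode (Cisco IOS): trust DSCP on an interface, and map CoS 5 to
! the strict-priority transmit queue on 1p2q2t ports
Switch(config)# interface GigabitEthernet2/1
Switch(config-if)# mls qos trust dscp
Switch(config-if)# priority-queue cos-map 1 5
```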
Again, the bus architecture facilitates communication between traditional non-fabric-
enabled modules, as well as between non-fabric-enabled and fabric-enabled modules.
The Catalyst 6500, unlike the Catalyst 6000, offers additional connectivity to a 256-Gbps
crossbar switching fabric, accessible by incorporating a switch fabric module (SFM) and
fabric-enabled linecards. Fabric-enabled cards are modules with the capability to connect to
the crossbar fabric. The crossbar fabric provides high-speed communication between fabric-
capable linecards only. Accessibility to the fabric is the fundamental difference between the
Catalyst 6000 and the Catalyst 6500 from an architectural perspective. Incorporating a switch
fabric module significantly increases the packet-forwarding rate relative to the traditional bus
architecture. Table 8-2 lists the various modules and their connectivity to the backplane.
The MSFC and the Policy Feature Card (PFC) daughter cards of the Catalyst 6500 super-
visor engines are responsible for the Layer 3 routing of packets through the switch. The
MSFC and PFC have specialized functions. Not only does the PFC switch Layer 2 frames,
it also routes Layer 3 packets based on Layer 3 information provided by the MSFC. Conse-
quently, the MSFC’s function is to build the Layer 3 routing and Address Resolution
Protocol (ARP) tables, which may subsequently be passed to the PFC, in addition to
handling any Layer 3 routing functionality the PFC cannot perform. Because the Layer 3
forwarding performance of the MSFC is substantially lower than that of the PFC, the intent
is not to forward packets using the MSFC. The goal of the Catalyst 6500 architecture is to
Layer 3 route all packets via the PFC.
For CEF-based systems, the MSFC passes Layer 3 forwarding information through the
EOBC to the appropriate ASICs on the PFC2. Multilayer switching (MLS)-based systems
do not use the EOBC to forward Layer 3 information. Instead, when a switched path is
completed by the MSFC, a flow is created and cached and the PFC1 forwards all subse-
quent packets for that flow. Cisco Express Forwarding (CEF) and MLS allow the PFC to
switch packets in hardware.
Layer 3 forwarding depends on the supervisor engine. For Supervisor I Engines, the
forwarding information is based on MLS flows, whereas the PFC2 on Supervisor II Engines
utilizes hardware-based CEF. MLS operation is covered in Chapter 4, “QoS Support on the
Catalyst 5000 Family of Switches,” in the section titled “MLS Fundamentals.” Regardless
of the Layer 3 switching method, QoS features are applied to the packet prior to it being
Layer 3 switched.
NOTE For additional information on MLS-based switching on the Supervisor I Engine, refer to
the following technical document at Cisco.com:
“Configuring IP Unicast Layer 3 Switching on Supervisor Engine 1”
For additional information on CEF-based switching on the Supervisor II Engine, refer to
the following technical document at Cisco.com:
“Configuring CEF for PFC2”
In addition to the PFC, which is centrally located on the supervisor engine, fabric-enabled
linecards may incorporate a Distributed Forwarding Card (DFC). At the time of writing,
the WS-X6816 linecard is the only module that comes equipped with a DFC by default.
However, other fabric-enabled linecards can be upgraded to accept a DFC. The DFC is a
daughter card that sits on a fabric-enabled linecard. The DFC’s architecture and operation
are exactly the same as those of the PFC2. As a result, the DFC is capable of performing distributed
Layer 3 CEF-based forwarding, Layer 2 bridging, access-control lists (ACLs), and QoS.
Therefore, by adding a DFC, forwarding decisions are localized to the linecard. Again,
similar to the PFC2, the MSFC is responsible for building the CEF information that is
distributed out to the DFC.
Finally, the ternary content addressable memory (TCAM) is a finite portion of memory
resident on the PFC1 and PFC2. The TCAM is essentially a table that stores ACL entries
and masks used to apply defined QoS policies. The TCAM allows multiple access-control
entries (ACEs) to share a single mask. Storing the ACL information in memory on the PFC
ensures high-speed lookups are performed, and thus maximizes the throughput and
minimizes the latency for processing packets and applying QoS policies. Because QoS
ACL lookups are performed in hardware, applying the policies results in no impact to
system switching performance. TCAM is discussed in more detail in Chapter 6, “QoS
Features Available on the Catalyst 2950 and 3550 Family of Switches,” as well as later in
this chapter.
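As a brief illustration of the kind of entries that populate the TCAM, the following CatOS sketch defines a QoS ACL that marks HTTP traffic. The ACL name, DSCP value, and port are hypothetical, and the syntax is representative rather than exhaustive:

```
! Hybrid mode (CatOS): mark TCP port 80 traffic with DSCP 26,
! commit the ACL to hardware, and map it to port 3/1
Console> (enable) set qos acl ip HTTP-MARK dscp 26 tcp any any eq 80
Console> (enable) commit qos acl HTTP-MARK
Console> (enable) set qos acl map HTTP-MARK 3/1
```

Native mode expresses the same classification through MQC class maps and policy maps, as demonstrated later in the chapter.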
For more detailed information regarding the architecture of the Catalyst 6500, consult the
following technical document at Cisco.com:
“Catalyst 6000 and 6500 Series Architecture”
Software and Hardware Requirements
QoS feature support for the Catalyst 6500 began with software version 5.1. Initially, with
the Supervisor I Engine, the platform was limited in QoS functionality and only capable of
performing Layer 2 functions. Specifically, it was only capable of supporting port-based
CoS, as well as assigning a CoS based on destination MAC address. With the introduction
of the PFC1 in CatOS Software version 5.3, QoS support on the Catalyst 6500 broadened
to include policing, marking, and classification based on QoS ACLs for IP, IPX, and MAC
layer traffic. QoS is also fully supported in Cisco IOS (Native mode). The first Cisco IOS
Software release to support QoS, 12.0(7)XE, included only IP traffic; marking, policing,
classification, and congestion avoidance were the features provided for IP traffic.
Cisco IOS release 12.1(1)E expanded QoS support for Native mode to include IPX and
MAC layer traffic. Table 8-1 depicts the different QoS processes covered in this chapter.
The table specifies the hardware responsible for the different operations and the software
capable of supporting the various features.
Table 8-1 Hardware Support for QoS

QoS Operation           | Hardware Responsible for QoS Operation                                                                  | Supported Software
Input queue scheduling* | Linecards (port ASIC); PFC not required                                                                 | CatOS/Cisco IOS
Classification          | Supervisor, responsible for Layer 2 (CoS); PFC, responsible for Layer 2 and 3 (CoS/IP precedence/DSCP)  | CatOS/Cisco IOS
Policing                | Layer 3 switching engine in PFC                                                                         | CatOS/Cisco IOS
Marking/rewrite         | Linecards (port ASIC), based on classification/policing performed by supervisor or PFC                  | CatOS/Cisco IOS
Output queue scheduling | Linecards (port ASIC), based on priorities established during classification/policing                   | CatOS/Cisco IOS

*Input queue scheduling is contingent on the trust state of the inbound port. If the inbound trust policy is set to
untrusted, traffic is sent to the default queue and is serviced FIFO.

As demonstrated in Table 8-1, the port ASICs on the linecards play a significant role in the
end-to-end QoS implementation within the Catalyst 6500. Table 8-2 shows the different
modules available for the platform and the default queuing architecture for both receive and
transmit ports.
Table 8-2 Overview of Modules Supporting QoS

Module         | Linecard Composition | Receive Ports | Transmit Ports | Priority Queue | Architecture to Backplane
WS-X6K-Sup1    | 2 x 1000             | 1q4t          | 2q2t           | No             | Bus
WS-X6K-Sup1A   | 2 x 1000             | 1p1q4t        | 1p2q2t         | RX/TX          | Bus
WS-X6K-Sup2    | 2 x 1000             | 1p1q4t        | 1p2q2t         | RX/TX          | Bus/Fabric
WS-X6024       | 24 x 10              | 1q4t          | 2q2t           | No             | Bus
WS-X6148       | 48 x 10/100          | 1q4t          | 2q2t           | No             | Bus
WS-X6224       | 24 x 100             | 1q4t          | 2q2t           | No             | Bus
WS-X6248       | 48 x 10/100          | 1q4t          | 2q2t           | No             | Bus
WS-X6316       | 16 x 1000            | 1p1q4t        | 1p2q2t         | RX/TX          | Bus
WS-X6324       | 24 x 100             | 1q4t          | 2q2t           | No             | Bus
WS-X6348       | 48 x 10/100          | 1q4t          | 2q2t           | No             | Bus
WS-X6408       | 8 x 1000             | 1q4t          | 2q2t           | No             | Bus
WS-X6408A      | 8 x 1000             | 1p1q4t        | 1p2q2t         | RX/TX          | Bus
WS-X6416       | 16 x 1000            | 1p1q4t        | 1p2q2t         | RX/TX          | Bus
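For the 2q2t and 1p2q2t transmit architectures listed in Table 8-2, the relative weights the WRR scheduler assigns to the two standard queues are configurable. A minimal sketch; the weights shown are hypothetical:

```
! Hybrid mode (CatOS): weight transmit queue 1 at 30 and queue 2 at 70
! for all 2q2t ports
Console> (enable) set qos wrr 2q2t 30 70

! Native mode (Cisco IOS): equivalent per-interface WRR weights
Switch(config-if)# wrr-queue bandwidth 30 70
```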