8
Architectures with Shared-Memory-Based Switch Fabrics: Case Study—Cisco Catalyst 3550 Series Switches

8.1 Introduction

The Cisco Catalyst 3550 Series are fixed-configuration, stackable switches that employ a shared memory switch fabric architecture with distributed forwarding engines. The architecture of the Catalyst 3550 is based on the older Catalyst 3500 XL Layer 2-only switch [CISC3500XL99]. The Catalyst 3500 XL belongs to the Catalyst "XL" family of switches, which also includes the Catalyst 2900XL and Catalyst 2900XL LRE. The XL family switches are strictly Layer 2 switches, with no Layer 3 capabilities beyond the simple functions provided by the management interface (Telnet, SNMP, etc.).

The Catalyst 3550 switches are enterprise-class switches that support Layer 2 and 3 forwarding as well as quality of service (QoS) and security features required in many of today's networks. The Catalyst 3550 Series support a range of 100 Mb/s Ethernet and Gigabit Ethernet interfaces that allow them to serve as access layer switches for medium enterprise wiring closets and as backbone switches for medium-sized networks.

In the Catalyst 3550 [CISC3550DS05, CISC3550PRS03, CISCRST2011], all Layer 2 and 3 forwarding decisions are performed in the network interface module ASICs (referred to as satellite ASICs). The Layer 2 and 3 forwarding decisions in some cases involve processing Layer 4 parameters of the arriving packets. Each network satellite ASIC manages either a group of 100 Mb/s Ethernet ports or a single Gigabit Ethernet (GbE) port. A central CPU in the Catalyst 3550 is responsible for running the Layer 2 and 3 protocols, routing table management, and overall system control and management.

The Catalyst 3550-12T/12G has a 24 Gb/s switch fabric capacity and supports a throughput of 17 million packets per second (Mpps), the Catalyst 3550-24 has an 8.8 Gb/s capacity with 6.6 Mpps throughput, and the Catalyst 3550-48 has a 13.6 Gb/s capacity with 10.1 Mpps throughput. The Catalyst 3550-24 supports a 2 MB memory shared by all switch ports, 64 MB RAM, 16 MB Flash memory, storage of 8000 MAC addresses, 16,000 unicast routes, 2000 multicast routes, and a maximum transmission unit (MTU) of 1546 bytes for MPLS forwarding.
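These headline figures are consistent with simply summing the full-duplex wire rates of the ports on each model. The short Python check below is an illustrative back-of-envelope calculation (assuming two Gigabit Ethernet uplinks per model and minimum-size 64-byte frames for the packet rates), not vendor data:

```python
# Back-of-envelope check of the quoted fabric capacity and throughput figures.
# Capacity counts each port at full duplex; packet rates assume 64-byte frames.

FE_PPS = 148_810      # wire-rate 64-byte packets/s on a 100 Mb/s port
GE_PPS = 1_488_095    # wire-rate 64-byte packets/s on a 1 Gb/s port

def fabric_capacity_gbps(fe_ports, ge_ports):
    """Sum of full-duplex port bandwidths (each port counted in both directions)."""
    return (fe_ports * 0.1 + ge_ports * 1.0) * 2

def throughput_mpps(fe_ports, ge_ports):
    """Aggregate wire-rate forwarding for minimum-size frames."""
    return (fe_ports * FE_PPS + ge_ports * GE_PPS) / 1e6

# Catalyst 3550-24: 24 x 10/100 ports + 2 GbE uplinks (assumed)
print(round(fabric_capacity_gbps(24, 2), 1))   # 8.8 Gb/s
print(round(throughput_mpps(24, 2), 1))        # ~6.5 Mpps (quoted as 6.6 Mpps)

# Catalyst 3550-48: 48 x 10/100 ports + 2 GbE uplinks (assumed)
print(round(fabric_capacity_gbps(48, 2), 1))   # 13.6 Gb/s
print(round(throughput_mpps(48, 2), 1))        # ~10.1 Mpps
```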

Based on architecture categories described in Chapter 3, the architecture discussed here falls under “Architectures with Shared-Memory-Based Switch Fabrics and Distributed Forwarding Engines” (see Figure 8.1).

Figure 8.1 Architecture with shared-memory-based switch fabric and distributed forwarding engines.

8.2 Main Architectural Features of the Catalyst 3550 Series

In the Catalyst 3550, a switching and packet forwarding subsystem supports a shared memory switch fabric ASIC that manages transactions between a centralized 4 MB shared memory buffer and a number of network interface modules (referred to as network satellites in the Catalyst 3550) interconnected in a radial design (Figure 8.2). The Catalyst 3550 supports 10 Gb/s of raw bandwidth capacity between the shared memory switch fabric ASIC and the shared (data) memory buffer. Because every packet must be written into and later read out of the shared memory, this yields a data forwarding rate of 5 Gb/s in one direction.

Figure 8.2 Catalyst 3550 switch/router high-level architecture.

In the Catalyst 3550, the network satellites provide the interfaces to the external network. Each satellite performs the address lookups and forwarding for incoming packets using its own address table. The Catalyst 3550 supports two network satellite types (octal 10/100 Ethernet satellite and a single-port Gigabit Ethernet satellite) and each satellite handles all addressing operations for incoming traffic. The network satellites communicate with each other by sending notification messages over the notify ring, which is more efficient than traditional bus architectures, potentially delivering up to 10 million frame notifications per second.

Depending on the traffic load, a network satellite is allowed to use (dynamically) all or some of this shared memory switching bandwidth. All incoming packets pass through the shared memory switch fabric ASIC and are stored in the shared memory data buffer. A shared memory architecture eliminates the "head-of-line" blocking problems normally associated with pure input-buffered architectures.

The Catalyst 3550 supports radial (store/receive) channels that connect the shared memory switch fabric ASIC and the network satellites. Each channel provides 200 Mb/s bandwidth in each direction resulting in a total full-duplex channel capacity of 400 Mb/s between each satellite and the switch fabric ASIC.

8.3 System Architecture

Figure 8.2 presents a high-level architecture of the Catalyst 3550 series of switch/routers. This architecture was developed to strike a good balance between obtaining maximum packet forwarding performance in hardware and software design flexibility. A more detailed presentation of the architecture is given in Figure 8.3.

Figure 8.3 Catalyst 3550 switch/router architecture details.

8.3.1 Packet Forwarding Subsystem

At the core of the Catalyst 3550 switch/router architecture is the switching and packet forwarding subsystem (see Figures 8.2 and 8.3). This subsystem consists of the shared memory switch fabric ASIC, network satellites (module port and octal Ethernet satellites) that act as network interface modules, shared data memory buffer, and notify ring.

8.3.1.1 Switching and Forwarding Engines

The switching and forwarding engines implemented in the network satellites handle the primary packet forwarding functions, including receiving and transmitting user data traffic. The switching and forwarding engines provide low-latency, high-performance Layer 2 and 3 forwarding and allow all destination address lookups to be performed entirely in the (distributed) network satellites. The initial implementations of the Catalyst 3550 switch/router architecture supported 10/100 Mb/s and Gigabit Ethernet ports.

8.3.1.2 Shared Memory Switch Fabric ASIC

The shared memory switching fabric ASIC (Figure 8.3) is responsible for managing its associated shared data buffer and buffer table. The buffer table maintains addressing information used by the shared data buffer. The 10 Gb/s link interconnecting the shared memory switch fabric ASIC and the shared data buffer provides a 5 Gb/s forwarding rate.

The radial channels (Figures 8.2 and 8.3) that connect network satellites to the shared memory switch fabric ASIC distribute the total available system bandwidth among the network satellites. The radial channels are designed to minimize the number of pins needed per data channel, which improves system reliability and lowers cost.

8.3.1.3 Shared Data Buffer

The shared data buffer is a key component of the shared-memory-based Catalyst 3550 architecture. The shared data buffer was based on a 4 MB DRAM in the initial deployment of the Catalyst 3550. A shared data buffer architecture allows the Catalyst 3550 to optimize buffer utilization (especially under varying network traffic loads) through dynamic buffer allocation to all the system ports. The shared data buffer also allows the system to avoid duplicating multicast or broadcast packets to the destination ports.

The shared data buffer also provides efficient use of memory bandwidth and storage capacity. The shared buffer allows designers to reduce the total amount of memory required in the switch/router, while providing high nonblocking performance. All incoming packets are temporarily stored in a common "memory pool" until the destination ports are ready to read and transmit the packets. Being a shared resource, heavily loaded destination ports can consume as much memory as they need, while lightly loaded ports do not tie up memory they are not using.

The shared memory also accommodates larger bursts of traffic from a port than corresponding port-buffered architectures. With a combination of adequate buffering and dynamic allocation, this architecture effectively eliminates or significantly reduces packet loss caused by limited buffer capacity during traffic overload. Similarly, with adequate buffering and dynamic buffer allocation, the system avoids the head-of-line blocking problems normally associated with input-buffered architectures that lack per-output-port buffers (also known as virtual output queues (VoQs)).

Unlike input-buffered architectures with VoQs (that must store multiple copies of a multicast or broadcast packet (one in each destination VoQ)), shared buffer architectures increase the overall system performance by eliminating the replication of multicast and broadcast packets. The shared memory switch fabric ASIC maintains logical queues in the buffer table that are dynamically linked to transmit queues for each destination port, with multiple references to the same buffer location for a multicast or broadcast packet. A multicast packet (which is destined for multiple destination system ports and network addresses) is stored in the same shared memory location until all destination ports have read and forwarded their copies of the packet.
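As an illustration of this reference-counted storage idea, the sketch below models a shared buffer in which a multicast packet is stored once and its buffer slot is released only after every destination port has read its copy. The class and field names are hypothetical and purely illustrative; they do not reflect the actual ASIC data structures.

```python
# Illustrative model of reference-counted multicast storage in a shared buffer.
# The packet is written once; a retrieval count tracks how many destination
# ports still need to read it, and the buffer slot is freed when it hits zero.

class SharedBuffer:
    def __init__(self):
        self.slots = {}        # buffer location -> [packet bytes, remaining reads]
        self.next_loc = 0

    def store(self, packet, num_destinations):
        loc = self.next_loc
        self.next_loc += 1
        self.slots[loc] = [packet, num_destinations]
        return loc             # the same location is linked into every destination queue

    def read(self, loc):
        packet, remaining = self.slots[loc]
        self.slots[loc][1] = remaining - 1
        if self.slots[loc][1] == 0:
            del self.slots[loc]   # all destination ports have copied the packet
        return packet

buf = SharedBuffer()
loc = buf.store(b"multicast frame", num_destinations=3)
for _ in range(3):
    buf.read(loc)              # the third read releases the buffer slot
```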

8.3.1.4 Network Satellites

The network satellites provide connectivity to the external network and also manage media interfaces to the network. The satellites transfer and receive packets from the shared data buffer and perform Layer 2 and 3 destination address lookups in their local forwarding tables. The network satellites are also responsible for 10/100 Mb/s Ethernet (and other media) MAC (Media Access Control) protocol functions, determining source/destination Layer 2 (MAC) addresses of incoming packets, updating and synchronizing the local Layer 2 (address) tables, and supporting up to 200 Mb/s full-duplex data transfer from each network port to the shared memory switch fabric ASIC.

To ensure data integrity, the local address table in any network satellite is synchronized with the tables in other network satellites via the notify ring. When a packet arrives at a network satellite, it converts the incoming packet into fixed-length cells and transfers them to the shared memory switch fabric ASIC for storage in the shared data buffer. At the same time, the source network satellite performs a destination address lookup in its local forwarding table and notifies the destination ports via the notify ring interconnecting the network satellites.

The destination port receives the notification, and then reads and reconverts the cells belonging to the outgoing packet into a complete packet before forwarding it out of the port. The types and number of network satellites employed in the Catalyst 3550 vary depending on its implementation. Each 100 Mb/s Ethernet network satellite can support up to eight independent 10/100BASE-T Ethernet ports (referred to as octal Ethernet module in Figure 8.3), while each Gigabit Ethernet network satellite (i.e., the module port satellite in Figure 8.3) supports only one 1000BASE-X Ethernet port.

8.3.1.5 Notify Ring

The notify ring carries notifications between network satellites as well as management information for the synchronization of the address tables, confirmation of packet arrival at a satellite, notification of packet retrieval by a satellite, and other operation-related activities among the satellites. The notify ring provides an effective way to off-load communications between the network satellites (i.e., "housekeeping" tasks) to a dedicated "out-of-band" channel associated with the existing switch fabric. This approach offsets the packet forwarding performance degradation that would have occurred if the switch/router had integrated all these functions into a common system channel.

Each packet notification message contains a queue map that is read by each network satellite before the message is forwarded on the ring to the next module. The notification message may carry information about the packet type, its queuing priority, and so on. As a result, the number of queues identified in a notification message can exceed the number of ports in the system. When a notification message carries information relevant to a particular network satellite, that satellite modifies the message in response and then forwards it on the notify ring.

The notify ring is designed to be an 800 Mb/s, 8 bit unidirectional communication ring interconnecting all the network satellites. The notify ring has a notification message size of 10 bytes (80 bits) per packet, so the Catalyst 3550 can support up to 10 million packet notifications per second (800 Mb/s ÷ 80 bits per notification).

8.3.1.6 Radial Channels

The shared memory switch fabric ASIC communicates with other components in the system and the network satellites through the radial channels. The number of radial channels varies according to the Catalyst 3550 switch/router design, but each radial channel consists of four unidirectional signal pathways (subchannels). Two signal pathways are used for incoming data storage and two signal pathways for outgoing data retrieval. Each signal pathway set also carries all in-band signaling. Each radial channel can support up to 200 Mb/s of data in each direction simultaneously. Excluding control and overhead traffic, a typical radial channel has approximately 160 Mb/s of full-duplex payload capacity.

8.3.1.7 Format Conversions

Data are stored in the shared data buffer, read, and moved across the radial channels in fixed-length cells. The network satellites are responsible for converting incoming packets into this format so they can be transported and stored in the shared data buffer. The fixed-length cells make transfer and storage more predictable and enable the switch/router to manage the shared data buffer more efficiently. A header attached to the data is read and interpreted by the shared memory switch fabric ASIC during storage (in the shared data buffer) and by the network satellite during data retrieval.

The data headers identify the origin of a frame (packet) and its boundaries, the number of expected reads/retrievals (from memory), and other information needed for handling the packet. When storing a cell carrying a segment of a packet in the shared data buffer, the shared memory switch fabric ASIC reads the data header to create a temporary address entry in the buffer table.

8.3.1.8 Destination Address Lookup

When a network satellite receives a packet, it stores it (via the shared memory switch fabric ASIC) in the shared data buffer. The network satellite performs a lookup in its local address table to determine the packet's destinations. The source network satellite then notifies the destination satellites by sending notifications over the notify ring. The packet is segmented into cells, each carrying a header, when it is stored in the shared data buffer. The cell header contains the number of destinations for the packet and includes a retrieval count (that tracks the destinations that have copied the cell so far).

When a destination network satellite receives a notification message from the source satellite, it reads the information sent and appends it to its local notify queue. The destination satellite then signals the source satellite that it is ready to retrieve and forward the cells that make up the packet. Once retrieved, the destination satellite reassembles the cells into the full packet and forwards it out the appropriate local ports. If a destination network satellite is not able to accept more packets, the source network satellite notifies the shared memory switch fabric ASIC, which adjusts (e.g., deletes) the entry in the buffer table for each packet that cannot be sent.

8.3.2 Supervisor Subsystem

Figure 8.4 shows the architecture of the supervisor subsystem of the Catalyst 3550 switch/router. The supervisor subsystem connects to the shared memory fabric ASIC of the Catalyst 3550 via a supervisor interface satellite as illustrated in Figure 8.3. This subsystem contains a control CPU, Flash memory, DRAM, system input/output (I/O) interfaces, PCI bridge, and serial (RS-232) interface ports (for system management). The supervisor subsystem supports higher level protocols and applications used to control, monitor, and manage the overall Catalyst 3550 switch/router.

Figure 8.4 Supervisor Subsystem–CPU interface satellite.

The various components of the supervisor subsystem are described as follows.

8.3.2.1 Control CPU

The control CPU is a 32 bit PowerPC RISC processor that provides Layer 3 functions such as routing protocol processing, Layer 2 functions (e.g., Rapid Spanning Tree Protocol (RSTP), IEEE 802.1AB Link Layer Discovery Protocol (LLDP), and VLAN Trunking Protocol (VTP)), Layer 3 routing table construction and maintenance, Layer 2 address table maintenance, connection management, and network management functions.

When the switch/router is powered on, the control CPU automatically initiates a self-diagnosis of the system and performs other system control tasks. The Catalyst 3550 supports management features such as SNMP, Telnet, Cisco Visual Switch Manager (CVSM), and the command-line interface (CLI). The Catalyst 3550 also supports four RMON groups and a range of security features.

8.3.2.2 Supervisor Interface Satellite

The supervisor interface satellite provides connectivity between the supervisor subsystem and the shared memory fabric ASIC of the switch/router. This interface provides a channel between the switch fabric resources and the control CPU and its support components (Flash memory, system I/O interfaces, and serial interface ports). The supervisor interface satellite formats address tables used by network satellites (module port and octal Ethernet satellites).

8.3.2.3 Flash Memory

This is a nonvolatile Flash memory of 4 MB in size used to store the Catalyst 3550 Cisco IOS software image, the current switch/router (system) configuration information, and the built-in CVSM software. A true file system with directory structures is supported in the Flash memory, which allows easy software upgrades. The Flash memory maintains stored information across power cycles, thus contributing to maximum system reliability.

8.3.2.4 System I/O Interface

The system I/O interfaces are used to provide control and status for various system-level functions such as system status, LED control, an RS-232 (also known as EIA/TIA-232) serial interface (that allows access from a system console device for management purposes), and an external redundant power supply interface.

8.4 Packet Forwarding

In centralized forwarding, a single central forwarding engine is used that performs all forwarding operations (Layer 2, Layer 3, QoS, ACLs, etc.) for the system. Here, the system performance is determined by the performance of the central forwarding engine. In a distributed forwarding architecture like the Catalyst 3550, the switching and forwarding decisions are made at module or port level with local forwarding engines and forwarding tables.

These distributed forwarding tables are synchronized across all the distributed forwarding engines to allow consistent forwarding decisions in the system. The overall system performance is equal to the aggregate performance of all forwarding engines in the system. Distributed forwarding allows switches, switch/routers, and routers to achieve very high packet forwarding performance.

In flow-based forwarding (also known as demand-based forwarding), forwarding is based on traffic flows, where the first packet of a flow is forwarded in software by the route processor. Subsequent packets of the same flow are then forwarded in hardware by forwarding engine ASICs using the flow cache entry created. A flow can be defined by the source address, the source/destination address pair, or full Layer 3 and Layer 4 information. The scalability of flow-based forwarding is therefore dependent on the control plane performance. Issues such as the following have to be addressed when implementing flow-based forwarding (a simplified flow cache sketch follows the list):

  • How fast can the route processor process new flows and set them up in the forwarding engine hardware?
  • How are network topology changes (including route flaps, etc.) handled and reflected in the flow cache?
  • Because the route processor is responsible for control plane functions, the tasks it performs beyond the routing protocols, ARP, spanning tree, and so on can reduce the processing power available for flow setup.
  • The stability of the critical routing protocol processes has to be ensured while flows are being established by the route processor.
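The following sketch shows the flow cache idea in its simplest form. It is purely illustrative (the Catalyst 3550 itself uses topology-based forwarding, as described next), and the flow key fields, function names, and port names are hypothetical.

```python
# Minimal illustration of flow-based (demand-based) forwarding: the first
# packet of a flow is resolved in software and installed in a flow cache;
# subsequent packets of the same flow hit the cache (the hardware fast path).

flow_cache = {}   # (src_ip, dst_ip, proto, sport, dport) -> egress port

def software_lookup(flow_key):
    # Placeholder for the route processor's full routing table lookup.
    return "gi0/1"

def forward(flow_key):
    if flow_key in flow_cache:            # fast path: flow already set up
        return flow_cache[flow_key]
    egress = software_lookup(flow_key)    # first packet: punt to the route processor
    flow_cache[flow_key] = egress         # install the flow entry
    return egress

print(forward(("10.0.0.1", "10.0.1.1", 6, 1024, 80)))   # software-assisted setup
print(forward(("10.0.0.1", "10.0.1.1", 6, 1024, 80)))   # served from the flow cache
```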

The Catalyst 3550 uses topology-based forwarding tables, where the Layer 3 forwarding information is derived from the routing table maintained by the routing protocols and the node adjacencies are derived from the Address Resolution Protocol (ARP) table. In this architecture, the Layer 3 forwarding tables (including the adjacency information) are generated and built by the system control plane.

These tables are installed in the ASIC hardware in the network satellites of the Catalyst 3550. The lookup in the forwarding table is a longest-prefix-match search on the destination address. A forwarding table hit returns an adjacency (next hop IP node), the outgoing port, and the adjacency rewrite information (next hop MAC address).
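The lookup behavior can be illustrated with a simple longest-prefix-match sketch. The table contents, interface names, and MAC addresses below are made up for illustration, and the linear search stands in for the hardware's TCAM-assisted lookup.

```python
import ipaddress

# Illustrative longest-prefix-match lookup returning adjacency information:
# next-hop IP address, egress port, and the rewrite (next-hop MAC) to apply.
fib = {
    "0.0.0.0/0":   ("192.0.2.1", "gi0/1",  "00:11:22:33:44:01"),
    "10.1.0.0/16": ("192.0.2.2", "gi0/2",  "00:11:22:33:44:02"),
    "10.1.5.0/24": ("192.0.2.3", "fa0/12", "00:11:22:33:44:03"),
}

def lookup(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    best = None
    for prefix, adjacency in fib.items():
        net = ipaddress.ip_network(prefix)
        if dst in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, adjacency)
    return best[1] if best else None

print(lookup("10.1.5.7"))   # matches 10.1.5.0/24, the longest matching prefix
```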

In a distributed, topology-based forwarding system, the scalability of the system depends on the performance of the forwarding engines rather than on flow setup: every packet, including the first packet of each flow, is forwarded in hardware, no matter how many flows exist in the system (whether there is one new flow or one million). In this system, the hardware forwarding tables are identical to the software tables maintained by the central route or control CPU.

The hardware forwarding tables are updated by the routing protocol software in the control CPU as network topology changes occur. The control plane is decoupled from normal user traffic forwarding and dedicated to protocol processing (routing protocols, ARP, spanning tree, etc.).

8.4.1 Catalyst 3550 Packet Flow

This section describes the packet forwarding process in the Catalyst 3550. The processing steps, illustrated in Figures 8.5 and 8.6 and summarized in the sketch following the figures, are as follows:

  1. A packet arrives from the external network to a port on a network satellite.
  2. The ingress network satellite ASIC makes the relevant Layer 2 or Layer 3 forwarding decisions (plus policing, marking, etc.).
  3. The ingress network satellite parses the packet header from the payload and sends the following:
    1. Header information on the notify ring to the egress ports (this is the control path).
    2. Packet payload to the shared memory switch fabric ASIC for temporary storage in the shared data buffer (this is the data path).
  4. The egress network satellite receives the control information on the notify ring and recognizes that it is one of the destination ports.
  5. The egress network satellite then retrieves the packet from the shared data buffer for all its local destination ports.
  6. The egress network satellite performs packet rewrite (on the relevant Layer 2 and 3 header fields), output ACL filtering and policing, and local multicast expansion.
  7. The egress network satellite transmits the packet out the local egress port(s).

Figure 8.5 Packet flow – ingress.

Figure 8.6 Packet flow – egress.
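The control-path/data-path split in these steps can be captured in a small software model: the payload goes to the shared buffer (data path) while a notification carrying a queue map circulates on the notify ring (control path). This is an illustrative sketch only; the data structures, port names, and message fields are hypothetical.

```python
# Illustrative model of the ingress/egress split in the Catalyst 3550 flow:
# the payload is stored in the shared buffer (data path) while a notification
# with a queue map is placed on the notify ring (control path).

shared_buffer = {}
notify_ring = []      # notifications visited by each satellite in turn

def ingress(packet_id, payload, egress_ports):
    shared_buffer[packet_id] = payload                      # data path
    notify_ring.append({"packet": packet_id,
                        "queue_map": set(egress_ports)})    # control path

def egress(satellite_ports):
    for note in notify_ring:
        mine = note["queue_map"] & satellite_ports
        if mine:                                      # this satellite owns a destination port
            payload = shared_buffer[note["packet"]]   # retrieve from the shared buffer
            for port in mine:
                print(f"transmit {payload!r} on {port}")

ingress("pkt1", b"frame", egress_ports={"fa0/3", "gi0/1"})
egress({"fa0/1", "fa0/2", "fa0/3"})   # an egress satellite owning ports fa0/1, fa0/2, fa0/3
```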

The Catalyst 3550 uses a TCAM (ternary content-addressable memory) for storing the forwarding information required for forwarding traffic. The available TCAM space is shared among all forwarding entries in the system. The sharing is based on predefined templates, which "carve" out the TCAM space to suit the network environment (for example, to favor routing entries or VLAN entries).
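A TCAM entry matches on a value/mask pair: masked-out bits are "don't care," and the first (highest-priority) matching entry wins. The sketch below shows this matching idea in software; the entry contents are made up, and it does not represent the actual TCAM layout or the Catalyst 3550's template carving.

```python
# Illustrative ternary match: each entry is (value, mask, result). A bit set
# in the mask must match exactly; a cleared bit is "don't care." The first
# matching entry wins, mirroring TCAM priority ordering.

entries = [
    (0x0A010500, 0xFFFFFF00, "route to gi0/2"),   # 10.1.5.0/24
    (0x0A010000, 0xFFFF0000, "route to gi0/1"),   # 10.1.0.0/16
    (0x00000000, 0x00000000, "default route"),    # 0.0.0.0/0
]

def tcam_lookup(key):
    for value, mask, result in entries:
        if key & mask == value & mask:
            return result
    return None

print(tcam_lookup(0x0A010507))   # 10.1.5.7 -> "route to gi0/2"
```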

8.4.2 Catalyst 3550 QoS and Security ACL Support

The Catalyst 3550 and 3750 series switches support router-based access control lists (RACLs), VLAN-based ACLs (VACLs), and port-based ACLs (PACLs). The Catalyst 3550 supports 256 security ACLs on the 10/100 Ethernet satellites with 1 K security ACEs (access control entries). The security ACLs programmed in the TCAM are used for hardware enforcement of security policies.

RACLs can be applied on switch virtual interfaces (SVIs) (see SVIs below), which are routed (Layer 3) interfaces to VLANs on the Catalyst 3550, on physical routed interfaces, and on routed (Layer 3) EtherChannel interfaces. RACLs are applied in specific directions on interfaces (inbound or outbound) where the user can apply one IP ACL in each direction.

VLAN maps (or VACLs) can be applied on the Catalyst 3550 to all packets that are Layer 3 forwarded (routed) into or out of a VLAN or are Layer 2 forwarded within a VLAN. VACLs are mostly used for security packet filtering. PACLs can also be applied to Layer 2 interfaces on the Catalyst 3550. PACLs are supported on physical ports/interfaces only and not on EtherChannel interfaces.

The ACLs supported on the Catalyst 3550 are summarized below (a simplified matching sketch follows the list):

  • Router ACL (RACL)
    • - Applied to routed ports and SVI.
    • - Standard and Extended IP ACLs.
    • - Can be applied to data plane or control plane traffic on all ports.
    • - Filter on Source/Destination MAC address, Source/Destination IP address, and TCP/UDP port numbers.
  • Port ACL (PACL)
    • - Applied to specific switch port.
    • - Filter on Source/Destination MAC address, Source/Destination IP address, and TCP/UDP port numbers.
  • VLAN ACL (VACL)
    • - Applied to all packets either bridged or routed within a VLAN, including all non-IP traffic.
    • - Filter on Source/Destination MAC address, Source/Destination IP address, and TCP/UDP port numbers.
  • ACL Hierarchy: On the ingress interface, the VLAN ACL gets applied first. On the egress interface, it is applied last.
  • Time-Based ACLs: These are security ACLs set for specific periods of the day.
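Regardless of where an ACL is applied, evaluation follows the same general pattern: access control entries are checked in order, the first match determines the permit or deny action, and an unmatched packet falls through to an implicit deny. The sketch below illustrates this pattern; the entries and field choices are hypothetical and this is not IOS syntax.

```python
import ipaddress

# Minimal ACL evaluation: ordered ACEs, first match wins, implicit deny at the end.
acl = [
    # (src_ip, dst_ip, protocol, dst_port, action) -- None means "any"
    ("10.1.5.0/24", None, "tcp", 80, "permit"),
    (None,          None, "tcp", 23, "deny"),     # block Telnet
]

def matches(field, value):
    if field is None:
        return True
    if "/" in str(field):                         # prefix match for IP fields
        return ipaddress.ip_address(value) in ipaddress.ip_network(field)
    return field == value

def evaluate(src, dst, proto, dport):
    for ace_src, ace_dst, ace_proto, ace_port, action in acl:
        if (matches(ace_src, src) and matches(ace_dst, dst)
                and matches(ace_proto, proto) and matches(ace_port, dport)):
            return action
    return "deny"                                  # implicit deny

print(evaluate("10.1.5.7", "192.0.2.10", "tcp", 80))   # permit
print(evaluate("10.2.0.1", "192.0.2.10", "tcp", 22))   # implicit deny
```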

The Catalyst 3550 supports the following QoS features [CISCRST2011, CISCUQoS3550, FROOMRIC03]. Chapters 11 to 13 describe these QoS mechanisms in detail:

  • Scheduling:
    • - Egress scheduling
    • - Strict priority queuing
    • - Egress weighted round-robin (WRR) with weighted random detection (WRED)
  • Traffic Classification and Marking
    • - Based on default port IEEE 802.1Q (sometimes referred to as 802.1p) class of service (CoS) or Layer 2/Layer 3/Layer 4 ACL policies.
    • - 512 QoS ACEs supported on all 10/100 Ethernet configurations.
    • - IEEE 802.1Q (CoS), Cisco inter-switch link (ISL), Differentiated Services Code Point (DSCP), or IP Precedence marking.
  • Rate Policing
    • - Policer support:
      • 128 ingress policers per Gigabit Ethernet port.
      • Eight ingress policers per 100 Mb/s Ethernet port.
      • Eight egress policers per 100 Mb/s and Gigabit Ethernet ports.
    • - Support of per interface and shared aggregate policers.

The header of an ISL frame (a Layer 2 frame) contains a 1 byte user field whose three least significant bits carry an IEEE 802.1p CoS value. Interfaces configured as ISL trunks format and transport all traffic in ISL frames. The header of an IEEE 802.1Q frame (also a Layer 2 frame) contains a 2 byte Tag Control Information (TCI) field whose three most significant bits (called the Priority Code Point (PCP) bits) carry the CoS value. Except for traffic in the native VLAN, interfaces configured as IEEE 802.1Q trunks format and transport all traffic in IEEE 802.1Q frames. The ISL and IEEE 802.1Q CoS fields take values from 0 for low priority to 7 for high priority.
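The layout of the IEEE 802.1Q TCI field makes the CoS bits easy to extract, as the small sketch below shows. The function name is illustrative; the field layout (3 bit PCP, 1 bit DEI/CFI, 12 bit VLAN ID) is the standard 802.1Q structure.

```python
# Extracting the CoS (PCP) bits from an IEEE 802.1Q Tag Control Information field:
# the 16-bit TCI holds 3 priority bits (most significant), 1 DEI/CFI bit, and a
# 12-bit VLAN ID.

def parse_dot1q_tci(tci):
    cos     = (tci >> 13) & 0x7      # Priority Code Point (CoS), 0..7
    dei     = (tci >> 12) & 0x1
    vlan_id = tci & 0xFFF
    return cos, dei, vlan_id

tci = (5 << 13) | (0 << 12) | 100    # CoS 5, VLAN 100
print(parse_dot1q_tci(tci))          # (5, 0, 100)
```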

Cisco ISL was developed as an encapsulation protocol to allow multiple VLANs to be supported over a single link (trunk). With ISL, an Ethernet frame is encapsulated with the VLAN information: a 26 byte header is prepended and a new 4 byte CRC is appended at the end of the ISL packet. On an ISL trunk port, all packets received and transmitted are encapsulated with an ISL header. Nontagged or native frames received on an ISL trunk port are discarded.

A native VLAN on an IEEE 802.1Q trunk is the only untagged VLAN (the only VLAN that is not tagged on the trunk). Frames transmitted on a switch port on the native VLAN are not tagged. Generally, if untagged frames are received on a switch on an IEEE 802.1Q trunk port, they are assumed to be from the VLAN that is designated as the native VLAN. The native VLAN is not necessarily the same as the management VLAN; they are generally kept separate for better security.

Layer 3 (IP) packets can be marked with CoS values using either IP Precedence or DSCP marking. The Catalyst 3550 supports the use of either setting because DSCP values are backward compatible with IP Precedence values. IP Precedence values range from 0 to 7, while DSCP values range from 0 to 63.
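This backward compatibility follows from the field layout: both markings live in the IP Type of Service (Traffic Class) byte, with IP Precedence occupying the top 3 bits and DSCP the top 6 bits, so a DSCP value maps to the corresponding IP Precedence value by dropping its 3 least significant bits. A quick sketch:

```python
# IP Precedence is the top 3 bits of the ToS byte and DSCP is the top 6 bits,
# so the precedence value is simply the DSCP value shifted right by 3.

def dscp_to_precedence(dscp):
    return dscp >> 3

print(dscp_to_precedence(46))   # EF (DSCP 46)   -> IP Precedence 5
print(dscp_to_precedence(26))   # AF31 (DSCP 26) -> IP Precedence 3
```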

The Catalyst 3550 supports features to classify, reclassify, police, and mark arriving packets before they are stored in the shared data buffer via the shared memory switch fabric ASIC. Packet classification mechanisms allow the Catalyst 3550 to differentiate between the different traffic flows and enforce QoS and security policies based on Layer 2 and Layer 3 packet fields.

To implement QoS and security policies at the ingress, the Catalyst 3550 identifies traffic flows and then classifies/reclassifies these flows using the DSCP or IEEE 802.1Q CoS fields. Classification and reclassification can be based on criteria such as the source/destination IP address, source/destination MAC address, and TCP or UDP ports. The Catalyst 3550 will also carry out policing and marking of the incoming packets. In addition to data plane ACLs, the Catalyst 3550 supports control plane ACLs on all ports to ensure that packets destined to the route processor (i.e., supervisor subsystem) are properly policed and marked to maintain proper functioning of the routing protocol processes.

After the packets are classified, policed, and marked, they are then assigned to the appropriate priority queue before they are transmitted out the switch. The Catalyst 3550 supports four egress priority queues per port (one of which is a strict priority queue), which allows the switch/router to assign priorities for the various traffic types transiting the switch/router. At egress, the Catalyst 3550 performs scheduling of the priority queues and also implements congestion control. The Catalyst 3550 supports strict priority queuing and WRR scheduling (on the remaining three queues).

With WRR scheduling, the Catalyst 3550 ensures that packets in the three lower priority queues are not starved of output link bandwidth and are serviced in proportion to the weights assigned to them. Strict priority queuing allows the Catalyst 3550 to ensure that packets in the (single) highest-priority queue are always serviced first, before the other three queues that are serviced using WRR scheduling. In addition to these scheduling mechanisms, the Gigabit Ethernet ports on the Catalyst 3550 support congestion control via WRED. WRED allows the Catalyst 3550 to avoid congestion by letting the network manager set thresholds on the three lower priority queues at which packets are dropped before congestion occurs.
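The interaction between the strict priority queue and the three WRR queues can be modeled with a few lines of code. This is a simplified software sketch with illustrative weights, not the actual hardware scheduler or its configured values.

```python
from collections import deque

# Simplified egress scheduling: one strict priority queue is always drained
# first; the remaining queues are served in weighted round-robin order.
pq      = deque()                       # strict priority queue
wrr     = [deque(), deque(), deque()]   # three lower-priority queues
weights = [4, 2, 1]                     # illustrative WRR weights

def scheduling_round():
    """One round: drain the strict priority queue, then serve each WRR queue
    up to its weight."""
    sent = []
    while pq:
        sent.append(pq.popleft())
    for q, w in zip(wrr, weights):
        for _ in range(w):
            if q:
                sent.append(q.popleft())
    return sent

pq.extend(["voice1", "voice2"])
wrr[0].extend(["data1", "data2", "data3", "data4", "data5"])
wrr[2].extend(["bulk1", "bulk2"])
print(scheduling_round())
# ['voice1', 'voice2', 'data1', 'data2', 'data3', 'data4', 'bulk1']
```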

The Cisco Catalyst 3550 supports a Cisco Committed Information Rate (CIR) functionality that is used to perform rate limiting of traffic. With CIR, the Catalyst 3550 can guarantee bandwidth in increments of 8 kb/s. Bandwidth guarantees can be configured based on criteria such as source MAC address, destination MAC address, source IP address, destination IP address, and TCP/UDP port numbers. Bandwidth guarantees are an essential component of service-level agreements (SLAs) and of networks in which the network manager needs to control the bandwidth given to certain users.
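Rate policing of this kind is commonly modeled as a token bucket: tokens accumulate at the committed rate, each packet consumes tokens equal to its size, and packets that find insufficient tokens are out of profile (dropped or remarked). The sketch below is a generic token bucket illustration; the rate and burst values are arbitrary examples, not the Catalyst 3550's actual policer implementation.

```python
# Generic token-bucket policer: tokens refill at the committed information
# rate (CIR); a packet conforms if enough tokens are available, otherwise it
# is out of profile (dropped or remarked).

class Policer:
    def __init__(self, cir_bps, burst_bytes):
        self.cir = cir_bps            # committed rate, bits per second
        self.burst = burst_bytes      # bucket depth, bytes
        self.tokens = burst_bytes
        self.last = 0.0

    def conforms(self, pkt_bytes, now):
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.cir / 8)
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return True               # in profile: forward
        return False                  # out of profile: drop or remark

# Example: a 1 Mb/s policer, i.e., 125 increments of the 8 kb/s granularity.
p = Policer(cir_bps=125 * 8_000, burst_bytes=16_000)
print(p.conforms(1500, now=0.001))    # True: the first packet fits within the burst
```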

Each 10/100 port on the Catalyst 3550 supports eight individual ingress policers (or eight aggregate ingress policers) and eight aggregate egress policers. Each Gigabit Ethernet port on the Catalyst 3550 supports 128 individual ingress policers (or 128 aggregate ingress policers) and 8 aggregate egress policers. This allows the network manager to implement policies with very granular control of the network bandwidth.

8.5 Catalyst 3550 Software Features

The Cisco Catalyst 3550 Series switches support advanced features, such as advanced QoS management and control, rate-limiting ACLs, multicast traffic management, and advanced IP unicast and multicast routing protocols. Supported in the Catalyst 3550 is the Cisco Cluster Management Suite (CMS) Software, which allows network managers to configure and troubleshoot multiple Catalyst switches (switch cluster) using a standard Web browser.

The routing protocols include Routing Information Protocol v1/v2 (RIPv1/v2), Open Shortest Path First (OSPF), Interior Gateway Routing Protocol (IGRP), Enhanced Interior Gateway Routing Protocol (EIGRP), Border Gateway Protocol version 4 (BGPv4), Protocol Independent Multicast (PIM), Internet Group Management Protocol (IGMP), and Hot Standby Router Protocol (HSRP). The Catalyst 3550 also supports IGMP snooping in hardware, which makes the switch/router effective for intensive multicast traffic environments. Additionally, the Catalyst 3550 supports equal cost routing (ECR) with load balancing on routed uplinks, allowing better bandwidth utilization.

Each individual port on the Catalyst 3550 can be configured as a Layer 2 interface or a routed (Layer 3) interface. A Layer 3 interface is a physical port that can route/forward Layer 3 (IP) traffic to another Layer 3 device. A routed (Layer 3) interface does not support or participate in Layer 2 protocols, such as the Rapid Spanning Tree Protocol (RSTP). When a port is configured to act as a routed interface, it behaves no differently from a 100 Mb/s or Gigabit Ethernet port on a router. The network manager can assign an IP address to this interface, apply ACL- and QoS-related configurations to it, enable routing, and assign routing protocol characteristics to it.

VLAN interfaces (or Switched Virtual Interfaces (SVIs)) can also be configured on the Catalyst 3550. An SVI is a virtual (logical) Layer 3 interface that connects the routing (Layer 3) engine on a device to a VLAN configured on the same device. Only one SVI can be associated with a VLAN on the device. An SVI is configured for a VLAN only when there is the need to route between VLANs.

An SVI can also be used to connect the device to another external IP device through a virtual routing and forwarding (VRF) instance that is not configured as a management VRF. A device can route across its SVIs to provide Layer 3 inter-VLAN communications/routing. This requires configuring an SVI for each VLAN that traffic is to be routed to and assigning an IP address to the SVI.

To summarize, the interfaces supported on the Catalyst 3550 are as follows:

  • Switch Ports: These are Layer 2-only interfaces on the switch with one interface per physical port:
    • - Access Ports: Traffic received and transmitted over these ports must be in native format (i.e., VLAN-tagged traffic is dropped).
    • - Trunk Ports: These ports carry traffic from multiple VLANs:
      • - ISL-Trunks: Packets over these trunks must be encapsulated with an ISL header.
      • - IEEE 802.1Q-Trunks: VLAN-tagged packets are trunked over these trunks, but untagged packets are sent to the native VLAN (or a user-defined default VLAN).
  • Layer 3 (Routed) Ports: These ports are configured to behave like traditional router ports.
  • VLAN Interface (or Switch Virtual Interface (SVI)): This interface provides a connection between a Layer 3 routing process and an attached switched VLAN (a Layer 2 bridged access to a VLAN).

The Catalyst 3550 supports a number of security features to prevent unauthorized access to it. Access can be controlled via passwords on the console and VTY lines, username/password pairs stored locally on the Catalyst 3550 for individual access, or username/password pairs stored on a centrally located TACACS+ or RADIUS server. A virtual teletype (VTY) is a CLI implemented in a device that facilitates accessing it via Telnet, for example.

Privilege levels can also be configured for passwords, where a user is granted access at a predefined privilege level upon entering the correct password. To support the ability to give different levels of configuration capabilities to different network managers, the Catalyst 3550 has 15 levels of authorization on the switch/router console and 2 levels on the Web-based management interface.

The Catalyst 3550 supports a wide range of security features (e.g., Secure Shell (SSH), Simple Network Management Protocol version 3 (SNMPv3), Kerberos) that protect administrative and network management traffic from tampering or eavesdropping. The switch/router supports features and protocols that can encrypt administrative and network management information to allow secure communications with users and other devices.

  • Secure Shell (SSH): SSH encrypts administration traffic during Telnet sessions while the network administrator configures or troubleshoots the switch.
  • SNMPv3 (with Crypto Support): Provides network security by encrypting network administrator traffic during SNMP sessions to configure and troubleshoot the switch.
  • Kerberos: Provides strong authentication for users and network services using a trusted third party to perform secure verification.

For secure, remote connection to the Catalyst 3550, SSH can be used. User authentication methods can be via TACACS+, RADIUS, and local username authentication. A RADIUS client runs on the Catalyst 3550 and sends authentication requests to a central RADIUS server, which contains all information on user authentication and network service access. TACACS+ provides centralized validation of users seeking to gain access to the Catalyst 3550. The TACACS+ services are maintained in a database on a TACACS+ server running on a workstation.

The TACACS+ server has to be configured before configuring TACACS+ features on the Catalyst 3550. TACACS+ is modular and provides authentication, authorization, and accounting services separately. With TACACS+, a single access control server (the TACACS+ daemon) can provide each of the authentication, authorization, and accounting services separately and independently.

The Catalyst 3550 supports IEEE 802.1X, which defines a client-server access control and authentication protocol that restricts unauthorized clients from accessing a network through the Catalyst 3550. The authentication server performs the authentication of each client connected to a Catalyst 3550 port before permitting access to any services offered by the switch or the network. A user can be authenticated under IEEE 802.1X based on a username and password (or other credentials supplied by the user) through a RADIUS server.

8.6 Catalyst 3550 Extended Features

The Catalyst 3550 supports a number of extended and advanced features beyond Layer 2 and 3 packet forwarding, some of which are discussed here.

8.6.1 EtherChannel and Link Aggregation

The Catalyst 3550 also supports 100 Mb/s Ethernet and Gigabit EtherChannel, which is a port link aggregation technology developed by Cisco. EtherChannel (similar to IEEE 802.3ad link aggregation) allows several physical Ethernet links to be grouped to create one logical Ethernet link. This provides high-speed, fault-tolerant links between switches, switch/routers, routers, and servers in a network.

EtherChannel technology allows a network manager to aggregate multiple 100 Mb/s Ethernet or Gigabit Ethernet links to create a higher bandwidth connection (with scalable bandwidth and higher availability) between switches, servers, switch/routers, and routers than a single 100 Mb/s or Gigabit Ethernet link can provide. In the Catalyst 3550 with EtherChannel technology, all incoming packets (to the network satellites) are stored in the shared data buffer in the order in which they arrive, but are properly resequenced by the network satellites when forwarded. In the Catalyst 3550, EtherChannel provides full-duplex bandwidth of up to 800 Mb/s (100 Mb/s EtherChannel) or 8 Gb/s (Gigabit EtherChannel) between the Catalyst 3550 and another device.
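EtherChannel implementations typically distribute frames across the member links by hashing selected header fields (for example, source and destination MAC or IP addresses) so that all frames of a given conversation take the same physical link and arrive in order. The sketch below illustrates the general idea with a hypothetical XOR-based hash; it is not Cisco's actual load-balancing algorithm.

```python
# Illustrative EtherChannel load balancing: hash selected header fields and use
# the result modulo the number of member links, so a given address pair always
# maps to the same physical link (preserving per-conversation frame order).

member_links = ["gi0/1", "gi0/2", "gi0/3", "gi0/4"]

def select_link(src_mac, dst_mac):
    # Hypothetical hash: XOR the low-order bytes of the two MAC addresses.
    h = (src_mac & 0xFF) ^ (dst_mac & 0xFF)
    return member_links[h % len(member_links)]

print(select_link(0x001122334455, 0x0011223344AA))   # same pair -> same link every time
```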

8.6.2 Port Security

The port security feature can be used to restrict access to a port by identifying and limiting MAC addresses (end stations) that are authorized/unauthorized to access the port. When “secure” or “trusted” MAC addresses are assigned to a port, the port only forwards packets with source addresses from that group of specified MAC addresses. If the number of trusted MAC addresses is limited to one and only a single trusted MAC address is assigned to a port, then the owner of that address is ensured the full bandwidth of the port.

If a port is configured as a trusted port and the maximum number of trusted MAC addresses has been assigned, a security violation occurs when a MAC address different from any of the specified trusted MAC addresses attempts to access the port. Once the maximum number of trusted MAC addresses has been set on a port, the trusted addresses are either added to the address table manually (statically) by the network manager or learned dynamically by the port from the MAC addresses of the connected stations. In either case, the trusted MAC addresses are stored in an address table. However, if the port shuts down, all dynamically learned trusted MAC addresses are lost/deleted.
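This behavior can be summarized in a small sketch: a port accumulates secure MAC addresses up to a configured maximum, and any other source address triggers the configured violation action. The model below is purely illustrative; the class, method names, and the choice of "shutdown" as the action are assumptions for the example.

```python
# Illustrative port-security model: secure (trusted) MAC addresses are learned
# up to a configured maximum; any other source MAC triggers a violation action
# (e.g., shutting the port down or sending an SNMP trap).

class SecurePort:
    def __init__(self, max_secure=1, violation_action="shutdown"):
        self.max_secure = max_secure
        self.secure_macs = set()
        self.violation_action = violation_action
        self.shutdown = False

    def receive(self, src_mac):
        if self.shutdown:
            return "dropped (port shut down)"
        if src_mac in self.secure_macs:
            return "forwarded"
        if len(self.secure_macs) < self.max_secure:
            self.secure_macs.add(src_mac)        # dynamically learned trusted address
            return "learned and forwarded"
        if self.violation_action == "shutdown":  # violation: unknown source MAC
            self.shutdown = True
        return "violation"

port = SecurePort(max_secure=1)
print(port.receive("00:11:22:33:44:55"))   # learned and forwarded
print(port.receive("66:77:88:99:aa:bb"))   # violation -> port shut down
```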

A port with port security configured with “sticky” trusted MAC addresses provides the same benefits as port security configured manually (i.e., with static MAC addresses), but with the exception that sticky MAC addresses can be dynamically learned. These sticky MAC addresses are stored in the address table, and added to the switch's running configuration.

The sticky trusted MAC addresses (even if added to the running configuration) do not automatically become part of the switch's start-up configuration file (each time the switch restarts or reboots). If the sticky trusted MAC addresses are saved in the start-up configuration file, then when the switch restarts/reboots, the port does not need to relearn these addresses. The switch retains these dynamically learned MAC addresses during a link-down condition. If the start-up configuration is not saved, they are lost when the system restarts/reboots.

As noted above, a security violation occurs if a station with a MAC address not in the address table attempts to access the port. A port can be configured to take the following actions if a violation occurs. A port security violation can cause an SNMP notification to be generated and sent to a management station or it can cause the port to shut down immediately.

To manage and control unauthorized access to the Catalyst 3550 switch/router (port security), a network manager can configure up to 132 “trusted” MAC addresses per port. When port security is configured on the switch/router, the network satellite that handles a port applies the filtering policies as part of the normal address learning and filtering process. Switch/router ports know about trusted MAC addresses either through manual configuration by a network manager or automatically through the connected end stations.

When automatic configuration is used, the network manager waits until a port goes through “learning” the MAC addresses of the connected devices, and then after a period of time “freezes” the trusted address table. Only packets from these “trusted” MAC addresses (maintained in the address table) are granted access through the port. If a port security violation occurs, the port can block access or ignore the violation and send an SNMP trap to a management station. Port security can be configured using the Web-based CVSM interface on the Catalyst 3550.

8.6.3 Switch Clustering

Switch clustering can be used to simplify the management of multiple switches in a network, regardless of device/platform family and their physical geographic proximity. Through the use of standby cluster command switches, a network manager can also use clustering to provide switching redundancy in the network. A switch cluster can consist of up to 16 cluster-capable switches that are managed as a single logical switching entity. The cluster switches can support a switch clustering technology that allows configuring and troubleshooting them as a group through a single IP address. The external network communicates with the switch cluster through the single IP address of the cluster command switch.

In the switch cluster (a cluster cannot exceed 16 switches), one switch is designated the cluster command switch, and one or more other switches (up to 15) can be designated as cluster member switches. The role of the cluster command switch is to serve as the single point of access for configuring, managing, and monitoring the cluster member switches. However, cluster member switches can belong to only one switch cluster at a time since they cannot be configured, managed, and monitored by more than one cluster command switch at the same time.

More than one cluster member switch can be designated as standby cluster command switch to implement command switch redundancy if the active cluster command switch fails. This is to avoid loss of communication with cluster member switches when the active command switch fails. A cluster standby group can also be configured for a switch cluster that consists of a group of standby cluster command switches.

The Catalyst 3550 switch/router supports the Cisco switch clustering technology, which enables up to 16 switches to be managed (logically as one unit) through a single IP address, independent of the media interconnecting them or their geographic proximity.

8.6.4 Channel Multiplexing and Frame Stitching

The Catalyst 3550 supports other advanced packet switching and forwarding features such as channel multiplexing and frame (packet) “stitching.” Channel multiplexing is the Catalyst 3550's ability to support multiple “threads” (up to 256 threads) per radial channel. Each cell of an arriving packet (frame) contains a thread identifier that is used to multiplex and demultiplex data transferred over the radial channel. This capability may be applied to network interface modules that support multiple 100BASE-T ports associated with a single radial channel.

The Catalyst 3550 uses frame stitching to modify packets “on the fly” without degrading overall packet forwarding performance. With this, a network satellite would be able to read part of a packet (usually the cell containing the packet header), process and modify it, and write back the modified header in the shared data buffer. The satellite would then edit the contents of the buffer table to “stitch” the new “header” cell into the old packet, effectively overwriting the original first “header” cell.

This allows packets with modified headers to be created without the need to retrieve, modify, and write back entire packets. After frame stitching occurs, the source network satellite transmits a frame-notify message via the notify ring to the destination satellite to read and forward the modified packets.

Rewriting packet headers is an essential component of Layer 3 forwarding, so this feature prepares the packet for Layer 3 forwarding. Another application is IP multicasting, where creating multiple versions of the first cell (containing the packet header) in a stream enables transmission to several multicast member ports with minimal processing overhead.

8.6.5 Switched Port Analyzer

There are times when a network manager would need to gather data passing through a switch port to and from a specific network segment or end station. The Switched Port Analyzer (SPAN) feature support in the Catalyst 3550 allows a network manager to designate a particular port (destination port) on the switch to “mirror” activity through specific ports of interest in the system. External sniffers or probes (such as a Cisco SwitchProbe) can be attached to the destination port to gather data passing through the other (source) ports of interest. Remote SPAN (RSPAN) extends SPAN to allow remote monitoring of multiple switches across a network.
