Chapter 2. Switching Design

This first chapter in Part II, “Technologies: What You Need to Know and Why You Need to Know It,” discusses switching network design. After introducing why switches are an important part of a network, we examine the different types of switching and then discuss the Spanning Tree Protocol (STP), which is key in Layer 2 switched environments to ensure that redundancy does not degrade network performance. Virtual local-area networks (VLANs) are then described. Two types of Layer 3 switching, multilayer switching (MLS) and Cisco Express Forwarding (CEF), are then introduced. Security in a switched environment is examined next. The chapter concludes with considerations and examples of switched designs.

Note

Appendix B, “Network Fundamentals,” includes material that we assume you understand before reading the rest of the book. Thus, you are encouraged to review any of the material in Appendix B that you are not familiar with before reading the rest of this chapter.

Making the Business Case

Switches can enhance the performance, flexibility, and functionality of your network.

The first networks were LANs; they enabled multiple users in a relatively small geographical area to exchange files and messages, and to access shared resources such as printers and disk storage. A hub—an Open Systems Interconnection (OSI) Layer 1 device—interconnected PCs, servers, and so forth as the number of devices on the network grew. However, because all devices connected to a hub are in the same bandwidth (or collision) domain—they all share the same bandwidth—using hubs in anything but a small network is not efficient.

To improve performance, LANs can be divided into multiple smaller LANs, interconnected by a Layer 2 LAN switch. Because each port of the switch is its own collision domain, multiple simultaneous conversations between devices connected through the switch can occur.

By default, all ports of a switch are in the same broadcast domain. Recall (from Appendix B) that a broadcast domain includes all devices that receive each other’s broadcasts (and multicasts). A broadcast is data meant for all devices; it uses a special broadcast address to indicate this. A multicast is data destined for a specific group; again, a special address indicates this. Note that Layer 3 broadcast packets are typically encapsulated in Layer 2 broadcast frames, and Layer 3 multicast packets are typically encapsulated in Layer 2 multicast frames (assuming that the packets are going over a data-link technology that supports these types of frames, such as Ethernet).

The implications of this for modern networks are significant—a large switched OSI Layer 2 network is one broadcast domain, so any broadcasts or multicasts traverse the entire network. Examples of broadcast traffic include Internet Protocol (IP) Address Resolution Protocol (ARP) packets, and routing protocol traffic such as Routing Information Protocol (RIP) version 1 (RIPv1). Multicast traffic includes packets from more advanced routing protocols such as Open Shortest Path First (OSPF) and applications such as e-learning and videoconferencing. As network use increases, the amount of traffic—including multicast and broadcast traffic—will also increase.

Today’s switches support VLANs so that physically remote devices can appear to be on the same (virtual) LAN. Each VLAN is its own broadcast domain. Traffic within a VLAN can be handled by Layer 2 switches. However, traffic between VLANs, just like traffic between LANs, must be handled by an OSI Layer 3 device. Traditionally, routers have been the Layer 3 device of choice. Today, Layer 3 switches offer the same capabilities as routers, but at higher speeds and with additional features.

The rest of this chapter explains how switches—Layer 2 and Layer 3—and the protocols associated with them work, and how they can be incorporated into network designs.

Switching Types

Switches were initially introduced to provide higher-performance connectivity than hubs, because switches define multiple collision domains.

Switches have always been able to process data at a faster rate than routers, because the switching functionality is implemented in hardware—in Application-Specific Integrated Circuits (ASICs)—rather than in software, which is how routing has traditionally been implemented. However, switching was initially restricted to the examination of Layer 2 frames. With the advent of more powerful ASICs, switches can now process Layer 3 packets, and even the contents of those packets, at high speeds.

The following sections first examine the operation of traditional Layer 2 switching. Layer 3 switching—which is really routing in hardware—is then explored.

Layer 2 Switching

Key Point

Layer 2 switches segment a network into multiple collision domains and interconnect devices within a workgroup, such as a group of PCs.

The heart of a Layer 2 switch is its Media Access Control (MAC) address table, also known as its content-addressable memory (CAM). This table contains a list of the MAC addresses that are reachable through each switch port. (Recall that the physical MAC address uniquely identifies a device on a network. When a network interface card is manufactured, the card is assigned an address—called a burned-in address [BIA]—which doesn’t change when the network card is installed in a device and is moved from one network to another. Typically, this BIA is copied to interface memory and is used as the MAC address of the interface.) The MAC address table can be statically configured, or the switch can learn the MAC addresses dynamically. When a switch is first powered up, its MAC address table is empty, as shown in the example network of Figure 2-1.

Figure 2-1. The MAC Address Table Is Initially Empty

In this example network, consider what happens when device A sends a frame destined for device D. The switch receives the frame on port 1 (from device A). Recall that a frame includes the MAC address of the source device and the MAC address of the destination device. Because the switch does not yet know where device D is, the switch must flood the frame out of all the other ports; therefore, the switch sends the frame out of ports 2, 3, and 4. This means that devices B, C, and D all receive the frame. Only device D, however, recognizes its MAC address as the destination address in the frame; it is the only device on which the CPU is interrupted to further process the frame.

In the meantime, the switch now knows that device A can be reached on port 1 (because the switch received a frame from device A on port 1); the switch therefore puts the MAC address of device A in its MAC address table for port 1. This process is called learning—the switch is learning all the MAC addresses that it can reach.

At some point, device D is likely to reply to device A. At that time, the switch receives a frame from device D on port 4; the switch records this information in its MAC address table as part of its learning process. This time, the switch knows where the destination, device A, is; the switch therefore forwards the frame only out of port 1. This process is called filtering—the switch is sending the frames only out of the port through which they need to go—when the switch knows which port that is—rather than flooding them out of all the ports. This reduces the traffic on the other ports and reduces the interruptions that the other devices experience.

Over time, the switch learns where all the devices are, and the MAC address table is fully populated, as shown in Figure 2-2.

Figure 2-2. The Switch Learns Where All the Devices Are and Populates Its MAC Address Table

The filtering process also means that multiple simultaneous conversations can occur between different devices. For example, if device A and device B want to communicate, the switch sends their data between ports 1 and 2; no traffic goes on ports 3 or 4. At the same time, devices C and D can communicate on ports 3 and 4 without interfering with the traffic on ports 1 and 2. Thus, the overall throughput of the network has increased dramatically.

The MAC address table is kept in the switch’s memory and has a finite size (which depends on the specific switch used). If many devices are attached to the switch, the switch might not have room for an entry for every one; to keep the table current, entries time out after a period of disuse. For example, the Cisco Catalyst 2950 switch defaults to a 300-second aging time. Thus, the most active devices are always in the table.
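The learning, flooding, filtering, and aging behavior described above can be sketched in Python. This is a simplified model, not Cisco code; the port numbers, MAC addresses, and the 300-second default aging time are taken from the discussion, and the class and method names are invented for this sketch:

```python
import time

class LearningSwitch:
    """Simplified model of a Layer 2 switch's MAC address table."""

    def __init__(self, ports, aging_seconds=300):  # Catalyst 2950 default aging time
        self.ports = ports
        self.aging = aging_seconds
        self.table = {}  # MAC address -> (port, last-seen timestamp)

    def receive(self, src_mac, dst_mac, in_port, now=None):
        """Process one frame; return the list of ports it is sent out of."""
        now = time.time() if now is None else now
        # Learning: record the source MAC against the incoming port.
        self.table[src_mac] = (in_port, now)
        # Aging: drop entries not refreshed within the aging time.
        self.table = {mac: (port, seen)
                      for mac, (port, seen) in self.table.items()
                      if now - seen <= self.aging}
        entry = self.table.get(dst_mac)
        if entry is None or dst_mac == "ff:ff:ff:ff:ff:ff":
            # Unknown unicast, broadcast, or multicast: flood out all
            # ports except the one the frame arrived on.
            return [p for p in self.ports if p != in_port]
        # Filtering: known destination, forward out of that port only.
        return [entry[0]]

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.receive("aa", "dd", in_port=1))  # unknown destination: flooded to [2, 3, 4]
print(sw.receive("dd", "aa", in_port=4))  # reply: filtered to [1]
```

The two calls mirror the Figure 2-1 example: device A’s first frame to device D is flooded, and D’s reply is filtered because A has already been learned on port 1.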

Note

Cisco LAN switches are also known as Catalyst switches.

Key Point

Broadcast and multicast frames are, by default, flooded to all ports of a Layer 2 switch, other than the incoming port. The same is true for unicast frames that are destined to any device that is not in the MAC address table.

MAC addresses can also be statically configured in the MAC address table, and you can specify a maximum number of addresses allowed per port.

One advantage of static addresses is that less flooding occurs, both because the addresses are known as soon as the switch comes up and because they are never aged out. However, this also means that if a device is moved, the switch configuration must be changed. A related feature available in some switches is the ability to sticky-learn addresses—the address is dynamically learned, as described earlier, but is then automatically entered as a static entry in the switch configuration. Limiting the number of addresses per port to one and statically configuring those addresses can ensure that only specific devices are permitted access to the network; this feature is particularly useful when addresses are sticky-learned.

Layer 3 Switching

Key Point

A Layer 3 switch is really a router with some of the functions implemented in hardware to improve performance. In other words, some of the OSI model network layer routing functions are performed in high-performance ASICs rather than in software.

In Appendix B and Chapter 3, “IPv4 Routing Design,” we describe the following various functions and characteristics of routers:

  • Learning routes and keeping the best path to each destination in a routing table.

  • Determining the best path that each packet should take to get to its destination, by comparing the destination address to the routing table.

  • Sending the packet out of the appropriate interface, along the best path. This is also called switching the packet, because the packet is encapsulated in a new frame, with the appropriate framing header information, including MAC addresses.

  • Communicating with other routers to exchange routing information.

  • Allowing devices on different LANs to communicate with each other and with distant devices.

  • Blocking broadcasts. By default, a router does not forward broadcasts, thereby helping to control the amount of traffic on the network.

These tasks can be CPU intensive. Offloading the switching of the packet to hardware can result in a significant increase in performance.

A Layer 3 switch performs all the previously mentioned router functions; the differences are in the physical implementation of the device rather than in the functions it performs. Thus, functionally, the terms router and Layer 3 switch are synonymous.

Layer 4 switching is an extension of Layer 3 switching that includes examination of the contents of the Layer 3 packet. For example, as described in Appendix B, the protocol number in the IP packet header indicates which transport layer protocol (for example, Transmission Control Protocol [TCP] or User Datagram Protocol [UDP]) is being used, and the port number in the TCP or UDP segment indicates the application being used. Switching based on the protocol and port numbers can ensure, for example, that certain types of traffic get higher priority on the network or take a specific path.
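The Layer 4 classification just described can be sketched as a simple policy lookup. The protocol numbers (6 for TCP, 17 for UDP) and well-known port 80 for HTTP are standard values; the priority labels and the SIP example are illustrative choices for this sketch, not a Cisco feature:

```python
def classify(protocol, dst_port):
    """Classify traffic by IP protocol number and destination port."""
    if protocol == 6 and dst_port == 80:      # TCP port 80: HTTP
        return "normal"
    if protocol == 17 and dst_port == 5060:   # UDP port 5060: SIP signaling
        return "high"
    return "best-effort"

print(classify(6, 80))     # normal
print(classify(17, 5060))  # high
```

A Layer 4-capable switch performs this kind of lookup in hardware, using the classification to set priority or choose a path.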

Layer 3 switching is implemented in Cisco switches in two different ways, depending on the model—through multilayer switching and Cisco Express Forwarding. These terms are described in the section “Multilayer Switching and Cisco Express Forwarding,” later in this chapter (after we discuss VLANs, which you must understand before you read that section).

Spanning Tree Protocol

Key Point

STP is a Layer 2 protocol that prevents logical loops in switched networks that have redundant links.

In the following sections, we first examine why such a protocol is needed in Layer 2 networks. We then introduce STP terminology and operation.

Note

In the following sections, we are only concerned with Layer 2 switching; as you see in Chapter 3, routed (Layer 3) networks inherently support networks with multiple paths, so a protocol such as STP is not required.

Redundancy in Layer 2 Switched Networks

Redundancy in a network, such as that shown in Figure 2-3, is desirable so that communication can still take place if a link or device fails. For example, if switch X in this figure stopped functioning, devices A and B could still communicate through switch Y. However, in a switched network, redundancy can cause problems.

Figure 2-3. Redundancy in a Switched Network Can Cause Problems

The first type of problem occurs if a broadcast frame is sent on the network. (Recall that a switch floods broadcast frames to all ports other than the one that it came in on.) For example, consider what happens when device A in Figure 2-3 sends an ARP request to find the MAC address of device B. The ARP request is sent as a broadcast. Both switch X and switch Y receive the broadcast; for now, consider just the one received by switch X, on its port 1. Switch X floods the broadcast to all its other connected ports; in this case, it floods it to port 2. Device B can see the broadcast, but so can switch Y, on its port 2; switch Y floods the broadcast to its port 1. This broadcast is received by switch X on its port 1; switch X floods it to its port 2, and so forth. The broadcast continues to loop around the network, consuming bandwidth and processing power. This situation is called a broadcast storm.

The second problem that can occur in redundant topologies is that devices can receive multiple copies of the same frame. For example, assume that neither of the switches in Figure 2-3 has learned where device B is located. When device A sends data destined for device B, switch X and switch Y both flood the data to the lower LAN, and device B receives two copies of the same frame. This might be a problem for device B, depending on what it is and how it is programmed to handle such a situation.

The third difficulty that can occur in a redundant topology is within the switch itself—the MAC address table can change rapidly and contain wrong information. Again referring to Figure 2-3, consider what happens when neither switch has learned where device A or device B is located, and device A sends data to device B. Each switch learns that device A is on its port 1, and each records this in its MAC address table. Because the switches don’t yet know where device B is, they flood the frame, in this case on their port 2. Each switch then receives the frame, from the other switch, on its port 2. This frame has device A’s MAC address in the source address field; therefore, both switches now learn that device A is on their port 2. The MAC address table is therefore overwritten. Not only does the MAC address table have incorrect information (device A is actually connected to port 1, not port 2, of both switches), but because the table changes rapidly, it might be considered to be unstable.

To overcome these problems, you need a way to logically disable part of the redundant network for regular traffic while still maintaining the redundancy for the case when an error occurs. The Spanning Tree Protocol does just that.

STP Terminology and Operation

The following sections introduce the Institute of Electrical and Electronics Engineers (IEEE) 802.1d STP terminology and operation.

STP Terminology

STP terminology can best be explained by examining how an example network, such as the one in Figure 2-4, operates.

Figure 2-4. STP Chooses the Port to Block

Note

Notice that STP terminology refers to the devices as bridges rather than switches. Recall (from Appendix B) that bridges are previous-generation devices with the same logical functionality as switches; however, switches are significantly faster because they switch in hardware, whereas bridges switch in software. Functionally, the two terms are synonymous.

Within an STP network, one switch is elected as the root bridge—it is at the root of the spanning tree. All other switches calculate their best path to the root bridge. Their alternate paths are put in the blocking state. These alternate paths are logically disabled from the perspective of regular traffic, but the switches still communicate with each other on these paths so that the alternate paths can be unblocked in case an error occurs on the best path.

All switches running STP (it is turned on by default in Cisco switches) send out bridge protocol data units (BPDUs). Switches running STP use BPDUs to exchange information with neighboring switches. One of the fields in the BPDU is the bridge identifier (ID); it comprises a 2-octet bridge priority and a 6-octet MAC address. STP uses the bridge ID to elect the root bridge—the switch with the lowest bridge ID is the root bridge. If all bridge priorities are left at their default values, the switch with the lowest MAC address therefore becomes the root bridge. In Figure 2-4, switch Y is elected as the root bridge.
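The election rule maps naturally onto tuple comparison. In the sketch below, the priorities and MAC addresses are illustrative (32768 is the common default priority); comparing the MAC addresses as strings works here because they are uniformly formatted lowercase hex:

```python
# Bridge ID = (2-octet priority, 6-octet MAC address); lowest wins.
bridges = {
    "X": (32768, "00:40:0b:a0:13:01"),
    "Y": (32768, "00:10:7b:5c:22:9e"),
    "Z": (32768, "00:d0:58:6f:01:44"),
}

# Python compares tuples element by element, which matches STP's rule:
# compare the priority first, then the MAC address.
root = min(bridges, key=lambda name: bridges[name])
print(root)  # Y -- lowest MAC wins when all priorities are equal

# Explicitly lowering a switch's priority makes it the root regardless of MAC.
bridges["Z"] = (4096, "00:d0:58:6f:01:44")
print(min(bridges, key=lambda name: bridges[name]))  # Z
```

This is why the Note that follows recommends setting the priority yourself rather than letting the lowest (often oldest) MAC address decide.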

Note

The way that STP chooses the root bridge can cause an interesting situation if left to the default values. Recall that the MAC address is a 6-octet or 48-bit value, with the upper 24 bits as an Organizational Unique Identifier (OUI) (representing the vendor of the device) and the lower 24 bits as a unique value for that OUI, typically the serial number of the device. A lower MAC address means a lower serial number, which likely means an older switch. Thus, because STP by default chooses a switch with a lower MAC address, the oldest switch is likely to be chosen. This is just one reason why you should explicitly choose the root bridge (by changing the priority), rather than getting the STP default choice.

All the ports on the root bridge are called designated ports, and they are all in the forwarding state—that is, they can send and receive data. (The STP states are described in the next section of this chapter.)

On all nonroot bridges, one port becomes the root port, and it is also in the forwarding state. The root port is the one with the lowest cost to the root. The cost of each link is by default inversely proportional to the bandwidth of the link, so the port with the fastest total path from the switch to the root bridge is selected as the root port on that switch. In Figure 2-4, port 1 on switch X is the root port for that switch because it is the fastest way to the root bridge.

Note

If multiple ports on a switch have the same fastest total path costs to the root bridge, STP considers other BPDU fields. STP looks first at the bridge IDs in the received BPDUs (the bridge IDs of the next switch in the path to the root bridge); the port that received the BPDU with the lowest bridge ID becomes the root port. If these bridge IDs are also equal, the port ID breaks the tie; the port with the lower port ID becomes the root port. The port ID field includes a port priority and a port index, which is the port number. Thus, if the port priorities are the same (for example, if they are left at their default value), the lower port number becomes the root port.
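The tie-breaking order in this note can be sketched the same way: build a sort key of (path cost, neighbor bridge ID, port ID) and take the minimum. The costs, bridge IDs, and port numbers below are illustrative (19 is the classic STP cost of a 100-Mbps link, and 128 a default port priority):

```python
# Candidate ports on a nonroot switch: path cost to the root, the neighbor
# bridge ID from the received BPDU, and the local port ID
# (port priority, port number).
candidates = [
    {"port": 1, "cost": 19,
     "neighbor_id": (32768, "00:10:7b:5c:22:9e"), "port_id": (128, 1)},
    {"port": 2, "cost": 19,
     "neighbor_id": (32768, "00:10:7b:5c:22:9e"), "port_id": (128, 2)},
]

# STP's tie-breaking order: lowest cost, then lowest neighbor bridge ID,
# then lowest port ID.
root_port = min(candidates,
                key=lambda c: (c["cost"], c["neighbor_id"], c["port_id"]))
print(root_port["port"])  # 1 -- costs and bridge IDs tie; lower port number wins
```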

Each LAN segment must have one designated port. This port is on the switch that has the lowest cost to the root bridge (or, if the costs are equal, on the switch with the lowest bridge ID), and it is in the forwarding state. In Figure 2-4, the root bridge has designated ports on both segments, so no more are required.

Note

The root bridge sends configuration BPDUs on all its ports periodically, every 2 seconds by default. (These configuration BPDUs include the STP timers, therefore ensuring that all switches in the network use the same timers.) On each LAN segment, the switch that has the designated port forwards the configuration BPDUs onto the segment; all switches in the network therefore receive these BPDUs on their root ports.

All ports on a LAN segment that are not root ports or designated ports are called nondesignated ports and transition to the blocking state—they do not send data, so the redundant topology is logically disabled. In Figure 2-4, port 2 on switch X is the nondesignated port, and it is in the blocking state. Blocking ports do, however, listen for BPDUs.

If a failure happens—for example, if a designated port or a root bridge fails—the switches send topology change BPDUs and recalculate the spanning tree. The new spanning tree does not include the failed port or switch, and the ports that were previously blocking might now be in the forwarding state. This is how STP supports the redundancy in a switched network.

STP States

Figure 2-5 illustrates the various STP port states.

Figure 2-5. A Port Can Transition Among STP States

When a port initially comes up, it is put in the blocking state, in which it listens for BPDUs and then transitions to the listening state. A blocking port in an operational network can also transition to the listening state if it does not hear any BPDUs for the max-age time (a default of 20 seconds). While in the listening state, the switch can send and receive BPDUs but not data. The root bridge and the various final states of all the ports are determined in this state. If the port is chosen as the root port on a switch or as a designated port on a segment, the port transitions to the learning state after the listening state. In the learning state, the port still cannot send data, but it can start to populate its MAC address table if any data is received. The length of time spent in each of the listening and learning states is dictated by the value of the forward-delay parameter, which is 15 seconds by default. After the learning state, the port transitions to the forwarding state, in which it can operate normally. Alternatively, if in the listening state the port is not chosen as a root port or designated port, it becomes a nondesignated port and it transitions back to the blocking state.
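The state sequence and its default timers can be summarized in a few lines of Python. The constants are the 802.1d defaults named in the text; the data structures are, of course, just a model:

```python
# Default 802.1d timers (seconds), carried in the root bridge's BPDUs.
MAX_AGE = 20        # how long a blocking port waits without hearing BPDUs
FORWARD_DELAY = 15  # time spent in each of the listening and learning states

# Order of states a blocked port passes through on its way to forwarding.
states = ["blocking", "listening", "learning", "forwarding"]
dwell = {"blocking": MAX_AGE, "listening": FORWARD_DELAY, "learning": FORWARD_DELAY}

# Worst case after a failure: wait max-age for BPDUs to stop arriving,
# then spend forward-delay in listening and again in learning.
worst_case = sum(dwell[s] for s in states[:-1])
print(worst_case)  # 50 seconds
```

This 50-second worst case is the convergence drawback discussed in the next section.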

Key Point

Do not confuse the STP learning state with the learning process that the switch goes through to populate its MAC address table. The STP learning state is a transitory state. While a switch can learn MAC addresses from data frames received on its ports that are in the STP learning state, it does not forward those frames. In a stable network, switch ports are in either the forwarding or blocking state. Ports in the blocking state do not listen to data frames and therefore do not contribute to the switch’s MAC address table. Ports in the forwarding state do, of course, listen to (and forward) data frames, and those frames populate the switch’s MAC address table.

STP Options

Figure 2-5 illustrates that it could take up to 50 seconds for a blocked port to transition to the forwarding state after a failure has occurred in the forwarding path—up to 20 seconds of max-age plus 15 seconds in each of the listening and learning states. This lengthy time is one of the drawbacks of STP.

Several features and enhancements to STP can help to reduce the convergence time, that is, the time it takes for all the switches in a network to agree on the network’s topology after that topology has changed. The following are some of these features that are implemented in Cisco switches:

  • PortFast—This feature should be used for ports that have only end-user stations or servers attached to them, in other words, for ports that are not attached to other switches (so that no BPDUs are received on the port). Because no other switches are attached, the port cannot be part of a loop, so the switch immediately puts the port in the forwarding state. Thus, the port transitions to the forwarding state much faster than it otherwise would.

  • UplinkFast—This feature is intended to be used on redundant ports on access layer switches.[1] If the root port (pointing to the root bridge) on a switch goes down, the nondesignated port (the redundant blocking port) on the switch is quickly put in the forwarding state, rather than going through all the other states.

  • BackboneFast—This feature helps to reduce the convergence time when links other than those directly connected to a switch fail. This feature must be deployed on all switches in the network if it is to be used.

Rapid STP (RSTP)

RSTP is defined by IEEE 802.1w. RSTP incorporates many of the Cisco enhancements to STP, resulting in faster convergence. Switches in an RSTP environment converge quickly by communicating with each other and determining which links can be forwarding, rather than just waiting for the timers to transition the ports among the various states. RSTP ports take on different roles than STP ports; the RSTP roles are root, designated, alternate, backup, and disabled. RSTP port states also differ from STP port states; the RSTP states are discarding, learning, and forwarding. RSTP is compatible with STP.

Virtual LANs

As noted earlier, a broadcast domain includes all devices that receive each other’s broadcasts (and multicasts). All the devices connected to one router port are in the same broadcast domain. Routers block broadcasts (destined for all devices) and multicasts by default; routers forward only unicast packets (destined for a specific device) and packets of a special type called directed broadcasts. Typically, you think of a broadcast domain as being a physical wire, a LAN. But a broadcast domain can also be a VLAN, a logical construct that can include multiple physical LAN segments.

Note

IP multicast technology, which enables multicast packets to be sent throughout a network, is described in Chapter 10, “Other Enabling Technologies.”

Note

An IP directed broadcast is a packet destined for all devices on an IP subnet, but which originates from a device on another subnet. A router that is not directly connected to the destination subnet forwards the IP directed broadcast in the same way it would forward unicast IP packets destined to a host on that subnet.

On Cisco routers, the ip directed-broadcast interface command controls what the last router in the path, the one connected to the destination subnet, does with the packet. If ip directed-broadcast is enabled on the interface, the router changes the directed broadcast to a broadcast and sends the packet, encapsulated in a Layer 2 broadcast frame, onto the subnet. However, if the no ip directed-broadcast command is configured on the interface, directed broadcasts destined for the subnet to which that interface is attached are dropped. In Cisco Internetwork Operating System (IOS) Release 12.0, the default for this command was changed to no ip directed-broadcast.
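A directed broadcast is simply a packet addressed to a remote subnet’s broadcast address. Python’s standard ipaddress module can illustrate this; the subnet and host addresses below are made up for the example:

```python
import ipaddress

# An IP directed broadcast targets all hosts on a remote subnet: it is
# addressed to that subnet's broadcast address.
subnet = ipaddress.ip_network("172.16.10.0/24")
directed_broadcast = subnet.broadcast_address
print(directed_broadcast)  # 172.16.10.255

# The sender is on a different subnet; intermediate routers forward the
# packet like a unicast, and only the last-hop router (if
# ip directed-broadcast is enabled) converts it to a Layer 2 broadcast.
sender = ipaddress.ip_address("172.16.20.5")
print(sender in subnet)  # False
```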

Key Point

We found the Cisco definition of VLANs to be very clear: “[A] group of devices on one or more LANs that are configured (using management software) so that they can communicate as if they were attached to the same wire, when in fact they are located on a number of different LAN segments. Because VLANs are based on logical instead of physical connections, they are extremely flexible.”[2]

Figure 2-6 illustrates the VLAN concept. On the left side of the figure, three individual physical LANs are shown, one each for Engineering, Accounting, and Marketing. (These LANs contain workstations—E1, E2, A1, A2, M1, and M2—and servers—ES, AS, and MS.) Instead of physical LANs, an enterprise can use VLANs, as shown on the right side of the figure. With VLANs, members of each department can be physically located anywhere, yet still be logically connected with their own workgroup. Thus, in the VLAN configuration, all the devices attached to VLAN E (Engineering) share the same broadcast domain, the devices attached to VLAN A (Accounting) share a separate broadcast domain, and the devices attached to VLAN M (Marketing) share a third broadcast domain. Figure 2-6 also illustrates how VLANs can span across multiple switches; the link between the two switches in the figure carries traffic from all three of the VLANs and is called a trunk.

Figure 2-6. A VLAN Is a Logical Implementation of a Physical LAN

VLAN Membership

Key Point

A switch port that is not a trunk can belong to only one VLAN at a time. You can configure which VLAN a port belongs to in two ways: statically and dynamically.

Static port membership means that the network administrator configures which VLAN the port belongs to, regardless of the devices attached to it. This means that after you have configured the ports, you must ensure that the devices attaching to the switch are plugged into the correct port, and if they move, you must reconfigure the switch.

Alternatively, you can configure dynamic VLAN membership. Some static configuration is still required, but this time, it is on a separate device called a VLAN Membership Policy Server (VMPS). The VMPS could be a separate server, or it could be a higher-end switch that contains the VMPS information. VMPS information consists of a MAC address-to-VLAN map. Thus, ports are assigned to VLANs based on the MAC address of the device connected to the port. When you move a device from one port to another port (either on the same switch or on another switch in the network), the switch dynamically assigns the new port to the proper VLAN for that device by consulting the VMPS.
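At its core, the VMPS information is a MAC-address-to-VLAN map, which the switch consults when a device attaches to a port. The sketch below models only that lookup; the MAC addresses, VLAN names, and function name are illustrative:

```python
# Simplified model of VMPS information: a MAC-address-to-VLAN map.
vmps_map = {
    "00:0a:00:00:00:01": "Engineering",
    "00:0a:00:00:00:02": "Accounting",
}

def assign_vlan(mac, default_vlan=None):
    """Return the VLAN for the attaching device, or a fallback.

    A real VMPS can be configured to deny or shut down ports for unknown
    devices instead of assigning a default VLAN.
    """
    return vmps_map.get(mac, default_vlan)

print(assign_vlan("00:0a:00:00:00:01"))  # Engineering
print(assign_vlan("00:0a:00:00:00:99"))  # None -- unknown device
```

Because the map is keyed on the MAC address rather than the port, the device keeps its VLAN membership wherever it plugs in.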

Trunks

As mentioned earlier, a port that carries data from multiple VLANs is called a trunk. A trunk port can be on a switch, a router, or a server.

A trunk port can use one of two protocols: Inter-Switch Link (ISL) or IEEE 802.1q.

ISL is a Cisco-proprietary trunking protocol that involves encapsulating the data frame between an ISL header and trailer. The header is 26 bytes long; the trailer is a 4-byte cyclic redundancy check (CRC) that is added after the data frame. A 15-bit VLAN ID field is included in the header to identify the VLAN that the traffic is for. (Only the lower 10 bits of this field are used, thus supporting 1024 VLANs.)

The 802.1q protocol is an IEEE standard protocol in which the trunking information is encoded within a Tag field that is inserted inside the frame header itself. Trunks using the 802.1q protocol define a native VLAN. Traffic for the native VLAN is not tagged; it is carried across the trunk unchanged. Thus, end-user stations that don’t understand trunking can communicate with other devices directly over an 802.1q trunk, as long as they are on the native VLAN. The native VLAN must be defined to be the same VLAN on both sides of the trunk. Within the Tag field, the 802.1q VLAN ID field is 12 bits long, allowing up to 4096 VLANs to be defined. The Tag field also includes a 3-bit 802.1p user priority field; these bits are used as class of service (CoS) bits for quality of service (QoS) marking. (Chapter 6, “Quality of Service Design,” describes QoS marking.)
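The field widths just described can be checked with a little bit arithmetic. The sketch below packs and unpacks the 16-bit Tag Control Information portion of the 802.1q tag (3-bit priority, 1-bit Canonical Format Indicator, 12-bit VLAN ID); the CFI bit is part of the standard tag even though the text does not discuss it, and the example values are arbitrary:

```python
def pack_tci(priority, vlan_id, cfi=0):
    """Pack the 16-bit 802.1q Tag Control Information field:
    3-bit user priority (CoS), 1-bit CFI, 12-bit VLAN ID."""
    assert 0 <= priority < 8      # 3 bits -> 8 CoS values
    assert 0 <= vlan_id < 4096    # 12 bits -> 4096 VLAN IDs
    return (priority << 13) | (cfi << 12) | vlan_id

def unpack_tci(tci):
    return {"priority": tci >> 13, "cfi": (tci >> 12) & 1, "vlan_id": tci & 0x0FFF}

tci = pack_tci(priority=5, vlan_id=100)
print(hex(tci))         # 0xa064
print(unpack_tci(tci))  # {'priority': 5, 'cfi': 0, 'vlan_id': 100}
```

The assertions make the limits from the text concrete: 8 CoS values from 3 priority bits, and 4096 VLANs from the 12-bit VLAN ID.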

The two types of trunks are not compatible with each other, so both ends of a trunk must be defined with the same trunk type.

Note

Multiple switch ports can be logically combined so that they appear as one higher-performance port. Cisco does this with its EtherChannel technology, combining multiple Fast Ethernet or Gigabit Ethernet links. Trunks can be implemented both on individual ports and on these EtherChannel ports.

STP and VLANs

Cisco developed per-VLAN spanning tree (PVST) so that switches can have one instance of STP running per VLAN, allowing redundant physical links within the network to be used for different VLANs and thus reducing the load on individual links. PVST is illustrated in Figure 2-7.

PVST Allows Redundant Physical Links to Be Used for Different VLANs

Figure 2-7. PVST Allows Redundant Physical Links to Be Used for Different VLANs

The top diagram in Figure 2-7 shows the physical topology of the network, with switches X and Y redundantly connected. In the lower-left diagram, switch Y has been selected as the root bridge for VLAN A, leaving port 2 on switch X in the blocking state. In contrast, the lower-right diagram shows that switch X has been selected as the root bridge for VLAN B, leaving port 2 on switch Y in the blocking state. With this configuration, traffic is shared across all links: traffic for VLAN A travels to the lower LAN through switch Y’s port 2, while traffic for VLAN B travels to the lower LAN through switch X’s port 2.

PVST only works over ISL trunks. However, Cisco extended this functionality for 802.1q trunks with the PVST+ protocol. Before this became available, 802.1q trunks only supported Common Spanning Tree (CST), with one instance of STP running for all VLANs.

Multiple Spanning Tree (MST), standardized as IEEE 802.1s and also referred to as Multiple-Instance STP (MISTP), uses RSTP and allows several VLANs to be grouped into a single spanning-tree instance. Each instance is independent of the other instances, so a link can be forwarding for one group of VLANs while blocking for other VLANs. MST therefore still allows traffic to be shared across all the links in the network, but it reduces the number of STP instances that would be required if PVST/PVST+ were implemented.

VLAN Trunking Protocol

Key Point

The VLAN Trunking Protocol (VTP) is a Cisco-proprietary Layer 2 protocol that simplifies the configuration of VLANs on multiple switches. When VTP is enabled in your network, you define all the VLANs on one switch, and that switch sends the VLAN definitions to all the other switches. On those other switches, you then only have to assign ports to the VLANs; you do not have to configure the VLANs themselves. Not only is configuration easier, but it is also less prone to misconfiguration errors.

A switch in a VTP domain (a group of switches communicating with VTP) can be in one of three modes: server (the default), client, or transparent. The VTP server is the switch on which you configure the VLANs; it sends VTP advertisements, containing VLAN configuration information, to VTP clients in the same VTP domain, as illustrated in Figure 2-8. Note that VTP advertisements are sent only on trunks.

VTP Eases VLAN Definition Configuration

Figure 2-8. VTP Eases VLAN Definition Configuration

You cannot create, modify, or delete VLANs on a VTP client; rather, a VTP client only accepts VLAN configuration information from a VTP server. A VTP client also forwards the VTP advertisements to other switches.

You can create, modify, or delete VLANs on a switch that is in VTP transparent mode; however, this information is not sent to other switches, and the transparent-mode switch ignores advertisements from VTP servers (but does pass them on to other switches).

VTP pruning is a VTP feature that helps reduce the amount of flooded traffic (including broadcast, multicast, and unicast) that is sent on the network. With VTP pruning enabled, the switches communicate with each other to find out which switches have ports in which VLANs; switches that have no ports in a particular VLAN (and have no downstream switches with ports in that VLAN) do not receive that VLAN’s traffic. For example, in Figure 2-8, switch 4 has no need for VLAN A traffic, so VTP pruning would prevent switch 1 from flooding VLAN A traffic to switch 4. VTP pruning is disabled by default.
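The pruning decision can be sketched in a few lines of Python. The topology and VLAN membership below are invented for illustration, in the spirit of Figure 2-8: switch 4 has no ports in VLAN A, so flooded VLAN A traffic is pruned from its trunk.

```python
# Hypothetical topology: switch 1 trunks to switch 2 (which feeds
# switch 3) and to switch 4. VLAN membership per switch is invented,
# except that switch 4 has no ports in VLAN A, as in the text.
ports_in_vlan = {
    "switch2": {"A", "B"},
    "switch3": {"A"},
    "switch4": {"B"},
}
downstream = {
    "switch2": ["switch3"],
    "switch3": [],
    "switch4": [],
}

def needs_vlan_traffic(switch, vlan):
    """The test VTP pruning applies before flooding a VLAN's traffic
    down a trunk: does this switch, or any switch downstream of it,
    have ports in the VLAN?"""
    if vlan in ports_in_vlan.get(switch, set()):
        return True
    return any(needs_vlan_traffic(d, vlan) for d in downstream.get(switch, []))
```

Note that switch 2 must still receive VLAN A traffic even if it had no local VLAN A ports, because switch 3 below it does; pruning considers downstream switches, not just the directly attached one.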

Inter-VLAN Routing

You have learned how devices on one VLAN can communicate with each other using switches and trunks. But how do networked devices on different VLANs communicate with each other?

Key Point

Just like devices on different LANs, those on different VLANs require a Layer 3 mechanism (a router or a Layer 3 switch) to communicate with each other.

A Layer 3 device can be connected to a switched network in two ways: by using multiple physical interfaces or through a single interface configured as a trunk. These two connection methods are shown in Figure 2-9. The diagram on the left in this figure illustrates a router with three physical connections to the switch; each physical connection carries traffic from only one VLAN.

A Router, Using Either Multiple Physical Interfaces or a Trunk, Is Required for Communication Among VLANs

Figure 2-9. A Router, Using Either Multiple Physical Interfaces or a Trunk, Is Required for Communication Among VLANs

The diagram on the right in Figure 2-9 illustrates a router with one physical connection to the switch. The interfaces on the switch and the router have been configured as trunks; therefore, multiple logical connections exist between the two devices. When a router is connected to a switch through a trunk, it is sometimes called a “router on a stick,” because it has only one physical interface (a stick) to the switch.

Each interface between the switch and the Layer 3 device (whether physical interfaces or logical interfaces within a trunk) is in a separate VLAN (and therefore in a separate subnet for IP networks).
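The subnet-per-VLAN rule can be sketched with Python’s standard ipaddress module (the VLAN names and subnets below are invented): a frame can be delivered directly only when source and destination share a subnet; otherwise it must be sent to the router or Layer 3 switch.

```python
import ipaddress

# Invented subnet-per-VLAN plan, as on the subinterfaces of a
# "router on a stick" (one logical interface and subnet per VLAN).
vlan_subnets = {
    "VLAN10": ipaddress.ip_network("10.1.10.0/24"),
    "VLAN20": ipaddress.ip_network("10.1.20.0/24"),
}

def needs_router(src_ip, dst_ip):
    """True if the hosts are in different subnets (different VLANs),
    in which case the frame must go to the Layer 3 device."""
    src = ipaddress.ip_address(src_ip)
    dst = ipaddress.ip_address(dst_ip)
    src_net = next(n for n in vlan_subnets.values() if src in n)
    dst_net = next(n for n in vlan_subnets.values() if dst in n)
    return src_net != dst_net
```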

Multilayer Switching and Cisco Express Forwarding

Now that you have an understanding of VLANs, the following sections introduce the two different ways that Layer 3 switching is implemented within Cisco switches—multilayer switching and Cisco Express Forwarding.

Multilayer Switching

Multilayer switching, as its name implies, allows switching to take place at more than one protocol layer. Switching can be performed on Layers 2 and 3 only, or it can also include Layer 4.

MLS is based on network flows.

Key Point

A network flow is a unidirectional sequence of packets between a source and a destination. Flows can be very specific. For example, a network flow can be identified by source and destination IP addresses, protocol numbers, and port numbers as well as the interface on which the packet enters the switch.
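As a toy illustration (the field names, addresses, and port numbers are invented), such a flow identifier is simply a tuple of those values:

```python
from collections import namedtuple

# A flow key as described above. Flows are unidirectional, so the
# same conversation produces two distinct keys, one per direction.
FlowKey = namedtuple(
    "FlowKey",
    ["src_ip", "dst_ip", "protocol", "src_port", "dst_port", "in_interface"],
)

forward = FlowKey("10.1.1.1", "10.2.2.2", "tcp", 40000, 80, "port1")
reverse = FlowKey("10.2.2.2", "10.1.1.1", "tcp", 80, 40000, "port2")
```

Because the reply swaps source and destination, it forms a different key, which is why a two-way conversation produces two entries in the MLS cache.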

The three major components of MLS are as follows[3]:

  • MLS Route Processor (MLS-RP)—The MLS-enabled router that performs the traditional function of routing between subnets

  • MLS Switching Engine (MLS-SE)—The MLS-enabled switch that can offload some of the packet-switching functionality from the MLS-RP

  • Multilayer Switching Protocol (MLSP)—Used by the MLS-RP and the MLS-SE to communicate with each other

MLS can be implemented in the following two ways:

  • Within a Catalyst switch—Here both the MLS-RP and the MLS-SE are resident in the same chassis. An example of an internal MLS-RP is a Route Switch Module (RSM) installed in a slot of a Catalyst 5500 Series switch.

  • Using a combination of a Catalyst switch and an external router—An example of a router that can be an external MLS-RP router is a Cisco 3600 Series router with the appropriate IOS software release and with MLS enabled.

Note

Not all Catalyst switches and routers support MLS. Refer to specific product documentation on the Cisco website for device support information for switches[4] and routers.[5]

Key Point

MLS allows communication between two devices that are in different VLANs (on different subnets), are connected to the same MLS-SE, and share a common MLS-RP. The communication bypasses the MLS-RP and instead uses the MLS-SE to relay the packets, thus improving overall performance.[6]

Figure 2-10 is an example network that illustrates MLS operation.

The MLS-SE Offloads Work from the MLS-RP

Figure 2-10. The MLS-SE Offloads Work from the MLS-RP

In Figure 2-10, the MLS-RP and MLS-SE communicate using MLSP. The SE learns the MAC addresses of the RP (one for each VLAN that is running MLS). When device 1 (10.1.1.1/16) wants to send a packet to device 2 (10.2.2.2/16), device 1 creates a frame with the destination MAC address of its default gateway, the router, which in this case is the RP. The SE receives the frame, sees that it is for the RP, and therefore examines its MLS cache to see whether it has a match for this flow. In the case of the first packet in the flow, no match exists, so the SE forwards the frame to the RP. The SE also puts the frame in its MLS cache and marks the frame as a candidate entry.

The MLS-RP receives the frame, decapsulates (unwraps) the frame, and examines the packet. The RP then examines its routing table to see whether it has a route to the destination of the packet; assuming that it does, the RP creates a new frame for the packet after decrementing the IP header Time to Live (TTL) field and recalculating the IP header checksum. The source MAC address of this frame is the MAC address of the RP; the destination MAC address of this frame is the MAC address of the destination device (or next-hop router). The RP then sends the frame through the SE.

The MLS-SE receives the frame and compares it to its MLS cache; the SE recognizes that the frame is carrying the same packet as a candidate entry and is on its way back from the same RP. The SE therefore completes the MLS cache entry using information from the frame; this entry is now an enabler entry. The SE also forwards the frame out of the appropriate port toward its destination.

When a subsequent packet in the same flow enters the switch, the SE examines its MLS cache to see whether it has a match. This time it does have a match, so it does not forward the frame to the RP. Instead, the SE rewrites the frame using the information in the MLS cache, including decrementing the TTL field, recalculating the IP header checksum, and using the MAC address of the RP as the source MAC address; the resulting frame looks as though it came from the RP. The SE then forwards the frame out of the appropriate port toward its destination.
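The candidate/enabler sequence just described can be modeled as a toy cache. This is a behavioral sketch only; a real MLS-SE keys entries on full flow information and performs the actual frame rewrite, and the flow name and rewrite fields below are invented.

```python
class MlsCacheSketch:
    """Toy model of the MLS-SE shortcut: the first packet of a flow is
    forwarded to the MLS-RP (candidate entry); when the routed frame
    comes back from the RP, the entry is completed (enabler entry) and
    later packets are rewritten by the SE itself."""

    def __init__(self):
        self.cache = {}  # flow -> "candidate" or a rewrite-info dict

    def frame_toward_rp(self, flow):
        """Called when the SE sees a frame addressed to the RP's MAC.
        Returns True if the frame must be forwarded to the RP."""
        entry = self.cache.get(flow)
        if entry is None or entry == "candidate":
            self.cache[flow] = "candidate"
            return True            # no shortcut yet: send to the RP
        return False               # completed entry: SE rewrites it

    def frame_back_from_rp(self, flow, rewrite):
        """Called when the SE sees the routed frame returning from the
        same RP: complete the candidate entry."""
        if self.cache.get(flow) == "candidate":
            self.cache[flow] = rewrite

# One flow through the sketch (flow name and rewrite info invented):
se = MlsCacheSketch()
first_needs_rp = se.frame_toward_rp("flow-1")
se.frame_back_from_rp("flow-1", {"src_mac": "RP-MAC", "decrement_ttl": True})
second_needs_rp = se.frame_toward_rp("flow-1")
```

Only the first packet of the flow reaches the RP; every subsequent packet is rewritten by the SE from the completed cache entry.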

Note

Network flows are unidirectional. Therefore, if device 1 and device 2 both send packets to each other, two flows would be recorded in the MLS cache, one for each direction.

Note

In Figure 2-10, the MLS cache is shown as having a “protocol” field. In the display output on the Catalyst switches, this field is called a “port” field, even though it represents the protocol field in the IP header.

The MLS-SE also keeps traffic statistics that can be exported to other utilities to be used, for example, for troubleshooting, accounting, or other functions.

Cisco Express Forwarding

Cisco Express Forwarding (CEF), like MLS, aims to speed the data routing and forwarding process in a network. However, the two methods use different approaches.

CEF uses two components to optimize the lookup of the information required to route packets: the Forwarding Information Base (FIB) for the Layer 3 information and the adjacency table for the Layer 2 information.[7]

CEF creates an FIB by maintaining a copy of the forwarding information contained in the IP routing table. The information is indexed so that it can be quickly searched for matching entries as packets are processed. Whenever the routing table changes, the FIB is also changed so that it always contains up-to-date paths. A separate routing cache is not required.

The adjacency table contains Layer 2 frame header information, including next-hop addresses, for all FIB entries. Each FIB entry can point to multiple adjacency table entries, for example, if two paths exist between devices for load balancing.

After a packet is processed and the route is determined from the FIB, the Layer 2 next-hop and header information is retrieved from the adjacency table and a new frame is created to encapsulate the packet.
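A minimal sketch of the two CEF structures, assuming invented prefixes, next hops, and MAC addresses: the FIB is searched for the longest matching prefix, and the winning entry points into the adjacency table for the precomputed Layer 2 rewrite.

```python
import ipaddress

# Hypothetical FIB and adjacency table (all values invented).
# Each FIB prefix points at an adjacency entry holding the
# precomputed Layer 2 rewrite information.
adjacency = {
    "adj1": {"next_hop": "10.0.0.2", "dst_mac": "00:aa:bb:cc:dd:01"},
    "adj2": {"next_hop": "10.0.0.6", "dst_mac": "00:aa:bb:cc:dd:02"},
}
fib = {
    ipaddress.ip_network("10.2.0.0/16"): "adj1",
    ipaddress.ip_network("10.2.3.0/24"): "adj2",
    ipaddress.ip_network("0.0.0.0/0"): "adj1",   # default route
}

def lookup(dst_ip):
    """Longest-prefix match in the FIB, then fetch the Layer 2
    rewrite from the adjacency table."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [net for net in fib if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return adjacency[fib[best]]
```

Because the FIB mirrors the routing table and the rewrites are prebuilt, no per-flow cache entry has to be populated by a first packet, which is a key difference from the MLS approach described earlier.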

Cisco Express Forwarding can be enabled on a router (for example, on a Cisco 7500 Series router) or on a switch with Layer 3 functionality (such as the Catalyst 8540 switch).

Note

Not all Catalyst switches support Cisco Express Forwarding. Refer to specific product documentation on the Cisco website[8] for device support information.

Switching Security

In the past few years, switches have become equipped with features that make them more intelligent, allowing them to play an active role in network security.

Cisco documentation refers to Catalyst integrated security (CIS). However, the term CIS refers only to built-in functionality that is native to the Catalyst switches, not to the security features inherent in the modules that can be installed in the switches (for example, firewall blades and so forth). Thus, in this book, we have categorized these two types of switch security as follows:

  • Catalyst native security—Those features built into the switch itself

  • Catalyst hardware security—Features of hardware that can be installed in the switch

These categories are described in the following sections.

Note

Refer to Chapter 4, “Network Security Design,” for general information on network security.

Catalyst Native Security

Cisco switches have many native attributes that can be used to secure a network.

Some attributes relate to secure management of the switch itself. One example is using Secure Shell (SSH), rather than Telnet, when remotely managing the switch. Another example is disabling unused switch ports so that the network cannot be accessed through them.

Catalyst native security can protect networks against serious threats originating from the exploitation of MAC address vulnerabilities, ARP vulnerabilities, and Dynamic Host Configuration Protocol (DHCP) vulnerabilities. (Both ARP and DHCP are covered in Appendix B.) Table 2-1 shows some examples of the protection provided by the built-in intelligence in Catalyst switches.

Table 2-1. Examples of Built-In Intelligence to Mitigate Attacks

Attack: DHCP Denial of Service (DoS)

A hacker can mount a DHCP DoS attack in more than one way: besides taking down the legitimate DHCP server, the attack can be launched from a rogue server that pretends to be a legitimate DHCP server and replies to DHCP requests with phony DHCP information.

Mitigation: Trusted-State Port

The switch port to which the legitimate DHCP server is attached can be set to a “trusted” state. Only trusted ports are allowed to pass DHCP replies; untrusted ports are allowed to pass only DHCP requests.

Attack: MAC Flooding

A hacker targets the switch’s MAC address table, flooding it with a large number of addresses.

Mitigation: MAC Port Security

The switch can be configured with a maximum number of MAC addresses per port. The switch can also be configured with static MAC addresses that identify the specific addresses it should allow, further constraining the devices allowed to attach to the network.

Attack: Redirected Attack

A hacker who wants to cover his tracks and complicate the network forensics investigation might first compromise an intermediary target, and then unleash the attack against the intended target from that intermediary victim.

Mitigation: Private VLAN (PVLAN)

The flow of traffic can be restricted by using PVLANs. In the example shown in Figure 2-11, a PVLAN is defined so that traffic received on switch port 2 or 3 can exit only through switch port 1. Should a hacker compromise server A, he would not be able to attack server B directly, because traffic can flow only between ports 1 and 2 and between ports 1 and 3; it cannot flow between ports 2 and 3.

Using a Switch to Create a PVLAN

Figure 2-11. Using a Switch to Create a PVLAN
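The forwarding restriction in Figure 2-11 can be modeled as a set of permitted port pairs. This is a simplified sketch; real PVLAN configuration is expressed in terms of promiscuous, isolated, and community ports rather than explicit pairs.

```python
# Permitted (bidirectional) port pairs from Figure 2-11: port 1 is
# the uplink; ports 2 and 3 may each talk only to port 1.
allowed_pairs = {frozenset({1, 2}), frozenset({1, 3})}

def pvlan_permits(ingress_port, egress_port):
    """True if the PVLAN allows traffic between the two ports."""
    return frozenset({ingress_port, egress_port}) in allowed_pairs
```

A compromised server on port 2 can still reach the uplink on port 1, but any frame it sends toward port 3 is simply not forwarded.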

Catalyst Hardware Security

Cisco switches can provide security, flexibility, and expandability to networks. As an example, the Catalyst 6500 Series switches can be equipped with modules that are full-fledged security devices themselves. Some example security modules are as follows:

  • Cisco Firewall service module

  • Cisco Internet Protocol security (IPsec) virtual private network (VPN) service module

  • Cisco Intrusion Detection System (IDS)

  • Cisco Secure Sockets Layer (SSL) service module

Note

Refer to Chapter 4 for information on IPsec, VPNs, IDS, and SSL.

As an example of the flexibility provided by these modules, consider that when using a Cisco Firewall service module, any port on a Catalyst 6500 switch can operate as a firewall. An example of the expandability of the modules is the use of the IPsec VPN module. This module can terminate up to 8000 VPN connections (known as VPN tunnels) simultaneously and can create 60 new tunnels per second; up to 10 of these modules can be installed in a Catalyst 6500 switch.

Switching Design Considerations

Chapter 1, “Network Design,” introduces the hierarchical network design model and the Enterprise Composite Network Model. Recall that the three layers of the hierarchical network design model are the access layer, the distribution layer, and the core layer. The Enterprise Composite Network Model is the name given to the architecture used by the Cisco SAFE blueprint; it supports larger networks than those designed with only the hierarchical model and clarifies the functional boundaries within the network. Three functional areas exist within this model: Enterprise Campus, Enterprise Edge, and Service Provider Edge. Each of these functional areas contains network modules, which in turn can include the hierarchical layers.

Switches within the Enterprise Campus are in all three of the hierarchical layers. Layer 2 and/or Layer 3 switches can be used, depending on a number of factors.

For the access layer, design considerations include the following:

  • The number of end-user devices to be supported

  • The applications that are being used—this defines some of the features required in the switches, as well as the performance and bandwidth needed

  • The use of VLANs, including whether trunks are required between switches

  • Redundancy requirements

For the distribution layer, design factors include the following:

  • The number of access switches to be aggregated

  • Redundancy requirements

  • Features required for specific applications to be supported

  • Required interfaces to the core layer

  • For Layer 3 switches, the routing protocols to be supported and whether sharing of information among multiple routing protocols is required. (Routing protocols are discussed in detail in Chapter 3.)

The role of the core layer is to provide a high-speed backbone. Thus, the key requirement is the performance needed to support all the access and distribution data. The number of ports to the distribution layer, and the protocols (for example, routing protocols) that need to be supported on those ports, are also important considerations. Redundancy in the core is a typical requirement, to meet the availability needs of the network.

Cisco current campus design recommendations include the following:[9]

  • Layer 2 switches can be used at the access layer, with Layer 3 switches at the distribution and core layers.

  • VLANs should not spread across the campus, because this can slow network convergence.

  • The core and distribution layers can be combined into one layer (called a collapsed backbone) for smaller networks. Larger campuses should have a separate distribution layer to allow the network to grow easily.

  • Redundancy in the core, between the core and distribution layers, and between the distribution and access layers is also recommended. Redundancy can also be used within these layers as required.

Figure 2-12 illustrates a sample small network design that uses Layer 2 switches in the access layer of the campus Building and Server modules. This network features a collapsed backbone in Layer 3 switches. Redundancy is incorporated between all layers.

A Small Network Can Include a Collapsed Backbone

Figure 2-12. A Small Network Can Include a Collapsed Backbone

Figure 2-13 illustrates an example of a larger network design. Two buildings are shown, each with Layer 2 access switches and Layer 3 distribution switches. These buildings are then redundantly connected to the Layer 3 core. The Server module is shown with Layer 2 access switches connected directly to the core; distribution switches can be added if additional functionality or performance is required.

A Larger Network Has Separate Core and Distribution Switches

Figure 2-13. A Larger Network Has Separate Core and Distribution Switches

Summary

In this chapter, you learned about Layer 2 and Layer 3 switching network design, including the following topics:

  • How switches improve the performance of your network

  • The two types of switches: Layer 2 and Layer 3

  • The two implementations of Layer 3 switching within Cisco switches: multilayer switching and Cisco Express Forwarding

  • How STP is critical in a Layer 2 switched environment to prevent loops

  • The usefulness of VLANs in defining logical broadcast domains

  • The features in switches that can be used to increase the security of your network

  • How switches fit into the design models

Endnotes

1. Webb, Building Cisco Multilayer Switched Networks, Indianapolis: Cisco Press, 2001, p. 165.

2. “Virtual LANs/VLAN Trunking Protocol (VLANs/VTP),” http://www.cisco.com/en/US/tech/tk389/tk689/tsd_technology_support_protocol_home.html.

3. “Troubleshooting IP Multilayer Switching,” http://www.cisco.com/en/US/products/hw/switches/ps700/products_tech_note09186a00800f99bc.shtml.

4. Cisco switch products home page, http://www.cisco.com/en/US/products/hw/switches/index.html.

5. Cisco router products home page, http://www.cisco.com/en/US/products/hw/routers/index.html.

6. “Troubleshooting IP Multilayer Switching,” http://www.cisco.com/en/US/products/hw/switches/ps700/products_tech_note09186a00800f99bc.shtml.

7. “Cisco Express Forwarding Overview,” http://www.cisco.com/univercd/cc/td/doc/product/software/ios122/122cgcr/fswtch_c/swprt1/xcfcef.htm.

8. Cisco switch products home page, http://www.cisco.com/en/US/products/hw/switches/index.html.

9. “Hierarchical Campus Design At-A-Glance,” http://www.cisco.com/application/pdf/en/us/guest/netsol/ns24/c643/cdccont_0900aecd800d8129.pdf.
