Chapter 2. Network Access and Layer 2 Multicast

Chapter 1, “Introduction to IP Multicast,” examined the differences between unicast, broadcast, and multicast messages. This chapter takes an in-depth look at IP multicast messages at Layer 2 and how they are transported in a Layer 2 domain. This chapter covers the basic elements of multicast functionality in Layer 2 domains as well as design considerations for multicast deployments.

Layered Encapsulation

Before reviewing multicast in Layer 2, we must discuss fundamental packet-forwarding concepts to establish a baseline of the process. Encapsulation is an important component of the OSI model for data communication and is absolutely essential in IP networks. Encapsulation is the method by which information is added at each layer of the OSI reference model, used for processing and forwarding purposes. Think of it like an onion, with many layers. This information is added in the form of headers and footers. At each layer of the OSI reference model, the data is processed, encapsulated, and sent to the next layer. Take a look at Figure 2-1 to understand how this works.

Image

Figure 2-1 OSI Model and Data Encapsulation

Data from the application, presentation, and session layers (Layers 7, 6, and 5) is encapsulated at Layer 4 with transport protocol information. This information is encapsulated in TCP or UDP with specific port numbers. For example, TCP port 80 is typically web traffic. This allows an operating system to forward data to the appropriate application or subroutine. Layer 3 adds logical forwarding details (source and destination IP addresses) so that networking devices can determine the best path toward a destination. Finally, Layer 2 adds hardware forwarding information (source and destination MAC addresses), which allows data to be passed physically to the appropriate machine or the next hop in the network. A data transmission is called a segment at Layer 4, a packet at Layer 3, and a frame at Layer 2.
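The nesting of headers is easy to visualize with a short sketch. The following Python fragment is purely illustrative; the field names and values are hypothetical and are not tied to any particular protocol stack implementation. It builds a segment, wraps it in a packet, and wraps that in a frame, mirroring the Layer 4, Layer 3, and Layer 2 encapsulation steps just described.

# Illustrative only: each layer wraps the payload handed down from the layer above it.
application_data = "GET /index.html"                       # upper-layer data

segment = {"src_port": 49152, "dst_port": 80,              # Layer 4 (TCP/UDP ports)
           "payload": application_data}

packet = {"src_ip": "10.1.1.25", "dst_ip": "172.16.20.10", # Layer 3 (IP addresses)
          "payload": segment}

frame = {"src_mac": "000f.f7b1.67e0",                      # Layer 2 (MAC addresses)
         "dst_mac": "0022.5561.2501",
         "payload": packet}

print(frame)   # the frame carries the packet, which carries the segment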

At each step through a network, routers and switches use these encapsulation headers and footers to make decisions about how to treat data transmissions. The diagrams in Chapter 1 show much of this processing in action. In that chapter, you learned the uniqueness of IP multicast packets compared to unicast and broadcast packets. From a Layer 2 perspective, this is only part of the story.

To better explain what happens to a packet traversing the network, we will walk you through the Layer 2 and Layer 3 transmission process. Figure 2-2 illustrates the first step in this process. Before the sender can encapsulate any data, it must first determine where the destination is located. The sender performs a simple check to verify whether the receiving device is on the same subnet by comparing the destination IP address against the local subnet. In this example, the receiver is not on the same subnet, so the sender must use the configured route to the destination; in most cases, this is the default gateway.

Image

Figure 2-2 Layer 2 and Layer 3 Transport Process on the Local Segment

Before the sender can communicate with the default gateway, it must know the media access control (MAC) address of that device. Because the destination is on a different segment, the sender needs to discover the MAC address of the default gateway (IP address 10.1.1.1) using an Address Resolution Protocol (ARP) request. The default gateway responds to the ARP request with its MAC address. Finally, the sender has enough information to encapsulate the data with the destination IP address of Host A and the MAC address of the default gateway, as shown in Figure 2-2.
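The sender's "local or remote" decision can be expressed in a few lines of Python. This is only a sketch of the logic described above; apart from the default gateway address 10.1.1.1, the addresses and mask are hypothetical values chosen for illustration.

import ipaddress

sender      = ipaddress.ip_interface("10.1.1.25/24")    # hypothetical sender address/mask
gateway_ip  = ipaddress.ip_address("10.1.1.1")          # default gateway from Figure 2-2
destination = ipaddress.ip_address("172.16.20.10")      # Host A, hypothetical address

if destination in sender.network:
    next_hop = destination        # same subnet: ARP directly for the destination
else:
    next_hop = gateway_ip         # different subnet: ARP for the default gateway

print(f"ARP for {next_hop}, then encapsulate with that device's MAC address")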

The default gateway or router has Layer 3 IP routing information that determines where Host A is physically connected. This information determines the appropriate outgoing interface to which the message should be sent. The router should already know the MAC address of the neighbor router if there is an established routing protocol adjacency. If not, the same ARP request process is conducted. With this information, the router can now forward the message. Understand that both Layer 2 addresses (SA and DA) change at each logical hop in the network, but the Layer 3 addresses never change and are used to perform route lookups.

When the packet is forwarded to the final router, that router must do a lookup and determine the MAC address of the destination IP address. This requirement exists in part because of the historical implications of Ethernet: Ethernet was originally a shared physical medium operating as a logical bus, and in a traditional bus network many devices can be connected to a single wire. If the gateway router does not have an entry from a previous communication, it sends out an ARP request and finally encapsulates the frame with the destination MAC address of the host, as shown in Figure 2-3.

Image

Figure 2-3 Layer 2 and Layer 3 Transport Process on the Destination Router

After the final router properly encapsulates the message, it is the responsibility of the switch to send the packet to the appropriate host, and only to that host. This is one of the primary functions of a traditional Layer 2 switch—to discover the location of devices connected to it. It does this by cataloging the source MAC addresses in frames received from connected devices. In this way, the switch builds a table of all known MAC addresses and keeps Ethernet networks efficient by making intelligent Layer 2 forwarding decisions.

This process is easy to understand for the unicast packet shown. Items to consider while you read this chapter include the following:

Image What happens if the packet is a multicast packet and many hosts connected to a switch are subscribed to the destination multicast group?

Image Can a switch still make efficient forwarding decisions if there are multiple ports that require a copy of the packet (meaning there are multiple endpoints on multiple segments that need a copy of the frame)?

Image If the destination MAC address in a frame is not the physical (burned-in) address of the host, will the host still process the frame even though that address does not explicitly identify it as the recipient?

Image How do you identify multicast groups at Layer 2?

MAC Address Mapping

A traditional Ethernet switch (Layer 2 device) works with Ethernet frames, and a traditional router (Layer 3 device) looks at packets to make decisions on how messages will be handled. As discussed in Chapter 1, when a device sends a broadcast frame, the destination MAC address is all ones, and a unicast frame carries the destination host's MAC address. What happens when it is a multicast message? To optimize network resources, an Ethernet switch also needs to understand multicast. This is where the magic happens. The sending device must convert the destination IP multicast address into a special MAC address as follows:

Image The high-order 25 bits make up the officially reserved multicast MAC address prefix, yielding the range 0100.5E00.0000 to 0100.5E7F.FFFF (Request for Comments 1112). These bits incorporate the organizationally unique identifier (OUI) 01-00-5E.

Image The lower-order 23 bits of the destination IP multicast address are mapped to the lower-order 23 bits of the MAC address.

Image The high-order 4 bits of the destination IP multicast address are always 1110 binary (0b1110). This represents the Class D address range from 224.0.0.0 (first octet 0b11100000) to 239.255.255.255 (first octet 0b11101111).

Image Of the 48 bits used to represent the multicast MAC address, the high-order 25 bits are reserved as part of the OUI, and the last 23 bits of the multicast IP address are used as the low-order bits, as shown in Figure 2-4.

Image

Figure 2-4 Layer 2 Multicast Address Format

A switch can use this calculated multicast MAC address to distinguish a frame as a multicast and make efficient forwarding decisions. End hosts can listen for frames with a specific multicast MAC, allowing them to process only those multicast streams to which they have subscribed. There’s a small wrinkle in this process, however.

Did you notice a slight challenge with the number of IP addresses and MAC addresses? Five bits of the IP multicast address are overwritten by the reserved OUI prefix. This causes a 32-to-1 IP multicast address-to-multicast MAC address ambiguity (2^5 = 32).

This means that a host subscribing to a multicast stream could potentially receive multicast streams that it did not subscribe to, and the host has to discard the unwanted information. A host subscribing to the multicast stream 224.64.7.7 would map to a MAC address of 0x0100.5E40.0707, and so would hosts subscribing to 225.64.7.7 and 224.192.7.7. It all boils down to 1s and 0s. Figure 2-5 shows the ambiguity. The "X" in the binary row represents the bits that are overwritten and shows how 32 multicast IP addresses map to a single multicast MAC address.

Image

Figure 2-5 Layer 2 Multicast MAC Address Overlap
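The mapping and the resulting overlap are easy to verify programmatically. The following Python sketch simply applies the rule described above, keeping the low-order 23 bits of the group address and prepending the reserved 01-00-5E prefix; it is an illustration, not a vendor implementation. The three groups from the preceding example all produce the same MAC address.

def multicast_mac(group_ip):
    """Map an IPv4 multicast group to its Ethernet MAC address (RFC 1112 rule)."""
    octets = [int(o) for o in group_ip.split(".")]
    addr = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]
    low23 = addr & 0x7FFFFF                  # keep only the low-order 23 bits
    mac = (0x01005E << 24) | low23           # prepend the fixed 01-00-5E prefix
    return "{:04X}.{:04X}.{:04X}".format((mac >> 32) & 0xFFFF,
                                         (mac >> 16) & 0xFFFF,
                                         mac & 0xFFFF)

for group in ("224.64.7.7", "225.64.7.7", "224.192.7.7"):
    print(group, "->", multicast_mac(group))    # all three print 0100.5E40.0707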

Switching Multicast Frames

Layer 2 switches send frames to a physical or logical interface based on the destination MAC address. Multicast MAC addresses are a different animal than unicast MAC addresses, because a unicast MAC address should be unique and have only a single destination interface. Multicast MAC frames may have several destination interfaces, depending upon which devices have requested content from the associated IP multicast stream.

Before the Layer 2 switch can forward multicast frames, it must know the destination interfaces on which those messages should be sent. The list of destination interfaces includes only those interfaces connected to a device subscribed to the specific multicast flow. The destination can be added as static entries that bind a port to a multicast group, or the switch can use a dynamic way of learning and updating the ports that need to receive the flow.

There are several ways in which a Layer 2 switch can dynamically learn where the destinations are located. The switch may use Cisco Group Management Protocol (CGMP) or Internet Group Management Protocol (IGMP) snooping for IPv4 multicast. These methods will be discussed later in this chapter.

If a Layer 2 switch does not have a mechanism to learn where to send multicast messages, it treats every multicast frame as a broadcast, flooding it out every port in the VLAN! As you can imagine, this is a very bad thing, and many networks have melted down because of large multicast streams. For example, when multicast is used to push computer operating system image files, a tremendous amount of data is sent to every device in the broadcast domain: every computer, router, printer, and so on. The unfortunate side effect is that network performance may suffer in parts of the network that do not need the multicast stream. How can this happen if these messages are treated as broadcasts and will not go beyond the local network? They will not be forwarded beyond any local Layer 3 device, but those Layer 3 devices must still process each one of the flooded messages. While a Layer 3 device is inundated processing these messages, it may not have the cycles available to process other, more important messages, such as routing updates or spanning-tree messages. As you can imagine, or may have already experienced, this can impact or "melt down" the entire network.

Group Subscription

You have seen that in order for IP multicast forwarding to work on the local segment and beyond, switches and gateway routers need to be aware of multicast hosts interested in a specific group and where those hosts are located. Without this information, the only forwarding option is to flood multicast datagrams throughout the entire network domain. This would destroy the efficiency gains of using IP multicast.

Host group membership is a dynamic process. When a host joins a multicast group, there is no requirement to continue forwarding group packets to the segment indefinitely, nor is group membership indefinite. The only way to manage alerting the network to a multicast host location is to have multicast host group members advertise interest or membership to the network. Figure 2-6 depicts a high-level example of this requirement, known as a join.

Image

Figure 2-6 Host Joins a Multicast Group

A Layer 3 gateway provides access to the larger network for hosts on a given subnet. The gateway is the network demarcation between Layers 2 and 3 and is the most appropriate device to manage host group membership for the larger network. Hosts forward group management information, like joins, to the network. The gateway receives these management messages and adds host segment interfaces to the local multicast table (multicast forwarding information base [FIB]). After the FIB is updated, the gateway router communicates group interest using protocol independent multicast (PIM) to the larger network domain.

It is important to note that without multicast-aware Layer 2 protocols, all hosts on a given Layer 2 segment will receive multicast packets for any groups joined by a host on that segment. For this reason, it is also logical that hosts and routers have the capability to dynamically leave a group or to prune a group from a particular segment. Figure 2-7 describes a high-level example of this process in action, known as a leave.

Image

Figure 2-7 Host Leaves a Multicast Group

Administrators can configure the gateway router to statically process joins for specific groups using router interfaces. This alleviates the need to have a dynamic join/leave process; however, having a dynamic process simplifies the operational aspects for the administrator. In the next section, we show you the dynamic process needed to get this intelligence to the Layer 2 networks.

IGMP on the Gateway Router

Internet Group Management Protocol, or IGMP, is the protocol used to manage group subscription for IPv4 multicast. On the gateway router, called the querier, IGMP is used to track multicast group subscriptions on each segment. The router sends query messages to discover the hosts that are members of a group. The hosts send membership report messages to inform the router that they are interested in receiving or leaving a particular multicast stream, and they also send report messages in response to a router query message.


Note

When protocol independent multicast (PIM) is enabled on an interface of the router, IGMP (version 2) is also enabled.


IGMP Versions

The selection of which IGMP version(s) to run on your network depends on the operating systems and the behavior of the multicast application(s) in use; generally speaking, the capability of the operating system determines the IGMP version(s) you run. There are three versions of IGMP: versions 1, 2, and 3. Each has unique characteristics. As of this writing, the default IGMP version enabled on most Cisco devices is version 2.

IGMPv1

The original specification for IGMP was documented in RFC 988 back in 1986. That RFC, along with RFC 1054, was made obsolete by RFC 1112, which is known as IGMPv1 today. IGMPv1 offers a basic query-and-response mechanism to determine which multicast streams should be sent to a particular network segment.

IGMPv1 works largely like the process explained with Figure 2-7, with two major exceptions. The primary issue with version 1 is that it has no mechanism for a host to signal that it wants to leave a group. When a host using IGMPv1 leaves a group, the router continues to send the multicast stream until the group times out. As you can imagine, this can create a large amount of unnecessary multicast traffic on a subnet if a host joins and leaves groups very quickly, which occurs when a host is "channel-surfing" using IPTV, for example.

In order to determine the membership of a group, the querier (router) sends a query message to every host on the subnet, even those that never had any interest in receiving multicast streams. This is accomplished by sending the query to the "all-hosts" multicast address of 224.0.0.1. The querier's job is to maintain a list of the multicast flows that have interested hosts on the subnet. When a single host responds to the query for a group, all other members of that group suppress their report messages.

IGMPv1 also does not have the capability of electing a querier. If there are multiple queriers (routers) on the subnet, a designated router (DR) is elected using PIM to avoid sending duplicate multicast packets. The elected querier is the router with the highest IP address. Because of these limitations, IGMPv1 is rarely used in modern networks, and the default on Cisco devices has been set to IGMPv2.

IGMPv2

As with every invention, improvements are made as shortcomings are found. IGMPv2, as defined in RFC 2236, made improvements over IGMPv1. One of the most significant changes was the addition of the leave process. A host using IGMPv2 can send a leave-group message to the querier indicating that it is no longer interested in receiving a particular multicast stream. This eliminates a significant amount of unneeded multicast traffic because the network no longer has to wait for the group to time out; the trade-off is that routers need to track membership to efficiently prune when required.

IGMPv2 also added the capability of group-specific queries. This feature allows the querier to send a query only to the hosts belonging to a specific multicast group, so every host on the subnet is no longer subjected to receiving every query.

The querier election process offers the capability to determine the querier without having to use PIM; in addition, the querier and the DR functions are decoupled. During election, each router sends a general query message to the all-hosts address 224.0.0.1. If there are multiple routers on a subnet, the DR is the device with the highest IP address and the querier is the device with the lowest IP address.

IGMPv2 also added the Maximum Response Time field, which is used to tune the query-response process to optimize leave latency.

Food for thought: Is a multicast message sent to the all-hosts address 224.0.0.1 effectively a broadcast?

Figure 2-8 shows the format for IGMPv1 and IGMPv2 messages.

Image

Figure 2-8 IGMPv1 and IGMPv2 Message Format

IGMP message types for IGMPv1 and IGMPv2 are as follows:

Image 0x11—Membership query

Image General query message used to determine group membership of any group

Image Group-specific query used to verify if any hosts are part of a specific group

Image 0x12—IGMPv1 membership report

Image 0x16—IGMPv2 membership report

Image 0x17—Leave-group message

The maximum response time (MRT) is expressed in one-tenth of a second increments and is used only in membership query messages. This parameter allows routers to manage the time between the moment the last host leaves a group and the moment the routing protocol is notified. When a host receives an IGMP query packet, it kicks off a timer that begins with a random value less than the MRT. If no other host responds with a membership report before this random timer expires, the host then replies with a report. This decreases the total number of IGMP reports needed to maintain group state and preserves local bandwidth, because a host suppresses its own report unless absolutely necessary. IGMPv1 does not use the MRT; instead, it has a timer that is always set to 10 seconds. The MRT cannot be greater than the query interval, and because the MRT is carried in a 1-byte field counted in tenths of a second, the maximum configurable MRT is 25 seconds (255 × 1/10 second ≈ 25 seconds).
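A small simulation helps illustrate the suppression mechanism. In this hedged Python sketch, the host names and MRT value are arbitrary; each group member picks a random delay below the MRT, the member whose timer fires first sends the report, and the rest cancel theirs, which is why the querier typically sees only one report per group per query.

import random

MRT = 10.0                          # maximum response time in seconds (IGMPv2 default)
members = ["host-A", "host-B", "host-C"]

# Each member schedules its report at a random time below the MRT.
timers = {host: random.uniform(0, MRT) for host in members}

reporter = min(timers, key=timers.get)    # the first timer to expire wins
for host, delay in sorted(timers.items(), key=lambda kv: kv[1]):
    if host == reporter:
        print(f"{host} sends the membership report after {delay:.1f}s")
    else:
        print(f"{host} hears the report and suppresses its own")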

The checksum is a value calculated over the contents of the message and is used to detect errors.

Example 2-1 shows a packet capture of an IGMPv2 membership query. Items of interest include the source and destination MAC addresses. The source of this request is the router (192.168.12.1), and the destination is the multicast MAC address for 224.0.0.1, which reaches all devices on the subnet. Referring to the packet capture in Example 2-1, you see the IGMP type is 0x11, the maximum response time is 0x64 (100 decimal, or 10 seconds in tenths of a second, the default for IGMPv2), the checksum, and the group address of 0.0.0.0, which indicates that it is a general query message. Also, pay particular attention to the time to live (TTL) field. This message has the TTL set to 1, which means it will not be forwarded beyond the local subnet. If you are troubleshooting multicast problems, you should always make sure the multicast sender uses a TTL value greater than or equal to the diameter of your network.

Example 2-1 IGMPv2 Membership Query Packet Capture


Ethernet Packet:  60 bytes
      Dest Addr: 0100.5E00.0001,   Source Addr: 0022.5561.2501
      Protocol: 0x0800

IP    Version: 0x4,  HdrLen: 0x6,  TOS: 0xC0 (Prec=Internet Contrl)
      Length: 32,   ID: 0x03E6,   Flags-Offset: 0x0000
      TTL: 1,   Protocol: 2 (IGMP),   Checksum: 0x7387 (OK)
      Source: 192.168.12.1,     Dest: 224.0.0.1

      Options: Length = 4
      Router Alert Option: 94 0000

IGMP  VersionType: 0x11,  Max Resp: 0x64,  Checksum: 0xEE9B (OK)

Version 2 Membership Query
      Group Address: 0.0.0.0


Remember that IGMP is a LAN-based protocol, used to manage hosts. Managing hosts is often considered a chatty process. Several configurable timers, including the MRT, within the IGMP implementation can be adjusted to modify protocol message timing and processing. Look at the IGMP interface configuration timers that are listed in the show ip igmp interface x/x command output in Example 2-2.

Example 2-2 show ip igmp interface Command Output


Router#show ip igmp interface loopback0
Loopback0 is up, line protocol is up
  Internet address is 192.168.2.2/32
  IGMP is enabled on interface
  Current IGMP host version is 2
  Current IGMP router version is 2
  IGMP query interval is 60 seconds
  IGMP configured query interval is 60 seconds
  IGMP querier timeout is 120 seconds
  IGMP configured querier timeout is 120 seconds
  IGMP max query response time is 10 seconds
  Last member query count is 2
  Last member query response interval is 1000 ms
  Inbound IGMP access group is not set
  IGMP activity: 3 joins, 0 leaves
  Multicast routing is enabled on interface
  Multicast TTL threshold is 0
  Multicast designated router (DR) is 192.168.2.2 (this system)
  IGMP querying router is 192.168.2.2 (this system)
  Multicast groups joined by this system (number of users):
      224.0.1.40(1)  224.0.1.39(1)  239.1.1.1(1)


The respective timers in this output are all at their implementation default values. In most multicast deployments, these timers are not tweaked and are kept at their defaults; administrators may adjust them for specific application requirements, although this is not commonly seen. It is beneficial to understand the functionality of these timers:

Image ip igmp query-interval [interval in secs]: The query interval defines how often the querier sends general query messages; hosts on a segment send a report of their group membership in response to the queries they receive. If the router does not receive a report for a particular group, it holds the IGMP state for a period of three times the query interval before removing it. (A short worked example using these timers follows this list.)

Image ip igmp query-max-response-time [time-in-seconds]: When a host receives a query from the IGMP querier, it starts a random countdown, bounded by the maximum response time, before sending a report to the router. This feature helps reduce the chatter between hosts and the first-hop router. The max-response time cannot be greater than the query interval value.

Image ip igmp query-timeout [timeout]: This timer is used for the querier election process described earlier, particularly when multiple routers are on the LAN segment. A router that lost the election assumes the querier has malfunctioned when this timer expires, and it then restarts the querier election process.

Image ip igmp last-member-query-count [number]: This setting determines how many group-specific queries the router sends after receiving a leave message before it removes the group state from its local state tables. This behavior is bypassed if the router is configured with the command ip igmp immediate-leave group-list [list]. With immediate leave configured, the router treats the matching groups as having a single host member and removes the multicast group as soon as it receives a leave message.
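As a rough, back-of-the-envelope check, the following sketch computes the resulting wait times from the default values shown in Example 2-2, using the three-times-the-query-interval hold rule described in the first bullet. Treat it as an illustration of how the timers relate, not as a vendor formula.

query_interval          = 60     # seconds (Example 2-2)
querier_timeout         = 120    # seconds (Example 2-2)
last_member_query_count = 2      # (Example 2-2)
last_member_interval    = 1.0    # seconds (1000 ms in Example 2-2)

group_hold = 3 * query_interval                               # 180 s of state with no reports
leave_wait = last_member_query_count * last_member_interval   # about 2 s after a leave

print(f"Group state held for roughly {group_hold}s without a report")
print(f"Querier declared dead after {querier_timeout}s of silence")
print(f"Group removed about {leave_wait}s after the last member's leave")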

IGMPv3

The addition of IGMPv3 (RFCs 3376 and 4604) brought with it significant changes over IGMPv1 and v2. Although there are vast improvements, backward compatibility between all three versions still exists. To understand why, examine Figure 2-9, which shows the IGMPv3 header format. New header elements of importance include a Number of Sources field, a Source Address(es) field, and a change from a Max Response Time field to a Max Response Code field.

Image

Figure 2-9 IGMPv3 Message Format

As the header shows, the most significant addition in IGMPv3 is the capability to support source-specific filtering. Why is this a big deal? With IGMPv1 and v2, you could not specify the source from which you wanted to receive a multicast stream; consequently, if multiple sources were sending to the same multicast IP address and port number, the host would receive all of those streams and have to sort out the conflict. Source filtering allows the host to signal membership with either an include or an exclude source list. This way, the host can specify the device(s) from which it is interested in receiving a stream, or it can indicate the devices from which it is not interested in receiving a stream. This adds an additional security component that can be tapped at the application level. IGMPv3 is used at Layer 2 for source-specific multicast (SSM). SSM is covered in Chapter 3.

In addition to this change, the MRT was updated once again in IGMPv3; in fact, RFC 3376 changed it to a maximum response code (MRC). Similar to the MRT field in IGMPv2, the Max Response Code field indicates the maximum time allowed before a report for a group must be sent, and the underlying time value is still expressed in units of one-tenth of a second. There are 8 bits in the MRC field, and the value of those bits indicates how the MRC is to be read. If the MRC is less than 128, the maximum response time is equal to the MRC value. If the MRC is greater than or equal to 128, the MRC is interpreted as a floating point value (a 3-bit exponent and a 4-bit mantissa) to represent much longer periods of time. This makes the maximum timer configurable up to approximately 53 minutes.

The response time was modified in IGMPv3 to better accommodate different types of network connectivity. Using a smaller timer allows the network administrator to more accurately tune the leave latency of hosts. Using a larger timer can accommodate network types where bursty group management traffic is less desirable, for example, low-bandwidth wireless networks.
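The floating point encoding is easier to see in code. The following Python sketch decodes an 8-bit Max Response Code into tenths of a second using the exponent and mantissa layout defined in RFC 3376; the largest encodable value works out to roughly 53 minutes.

def max_resp_time_tenths(code):
    """Decode an IGMPv3 Max Response Code (0-255) into tenths of a second."""
    if code < 128:
        return code                        # small values are used literally
    exp  = (code >> 4) & 0x7               # 3-bit exponent
    mant = code & 0xF                      # 4-bit mantissa
    return (mant | 0x10) << (exp + 3)      # RFC 3376 floating point form

print(max_resp_time_tenths(100) / 10)      # 10.0 seconds (the familiar IGMPv2-style value)
print(max_resp_time_tenths(0xFF) / 10)     # 3174.4 seconds, roughly 53 minutes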

Example 2-3 shows a packet capture of a membership report from an IGMPv3 host with the IP address of 192.168.7.14 with a group membership request to receive a multicast stream from 224.64.7.7 from the source of 192.168.8.10.

Example 2-3 IGMPv3 Membership Report Packet Capture


Ethernet II, Src: (80:ee:73:07:7b:61), Dst: (01:00:5e:00:00:16)
    Type: IP (0x0800)
Internet Protocol Version 4, Src: 192.168.7.14, Dst: 224.0.0.22
    Version: 4
    Header length: 24 bytes
    Differentiated Services Field: 0xc0 (DSCP 0x30: Class Selector 6; ECN: 0x00:
  Not-ECT (Not ECN-Capable Transport))
    Total Length: 52
    Identification: 0x0000 (0)
    Flags: 0x02 (Don't Fragment)
    Fragment offset: 0
    Time to live: 1
    Protocol: IGMP (2)
    Header checksum: 0x3c37 [validation disabled]
    Source: 192.168.7.14
    Destination: 224.0.0.22
    Options: (4 bytes), Router Alert
Internet Group Management Protocol
    [IGMP Version: 3]
    Type: Membership Report (0x22)
    Header checksum: 0x4a06 [correct]
    Num Group Records: 2
    Group Record : 224.64.7.7  Mode Is Include
        Record Type: Mode Is Include (1)
        Aux Data Len: 0
        Num Src: 1
        Multicast Address: 224.64.7.7 
        Source Address: 192.168.8.10 
    Group Record : 224.0.0.251  Mode Is Exclude
        Record Type: Mode Is Exclude (2)
        Aux Data Len: 0
        Num Src: 0
        Multicast Address: 224.0.0.251


Notice the destination IP address of the IPv4 packet; it is being sent to 224.0.0.22. This is the IP address to which all hosts send their membership report.

Configuring IGMP on a Router

A router by default is not configured to support multicast traffic. There are a few basic configuration tasks that need to be performed on Cisco IOS routers in order to enable multicast on the network.

Step 1. Enable multicast routing on the router in global configuration mode.

ip multicast-routing

Step 2. Configure PIM routing on the associated interface(s) in the interface configuration mode. Take into consideration that multicast traffic may traverse many different paths and it is always a good idea to enable it on every interface that may need to participate. This might save you some troubleshooting time in the future. When you configure an interface for PIM, it is automatically configured to use IGMPv2 on most current operating systems.

ip pim sparse-mode

On Cisco switches, IGMP version 2 is enabled by default.

Step 3. If necessary, change the IGMP version supported on an interface.

ip igmp version [1,2,3]

Mixed Groups: Interoperability Between IGMPv1, v2, and v3

It all boils down to the least common denominator. When there is a mix of IGMP host versions on a subnet, the routers and hosts must operate at the lowest IGMP version in use on that subnet. This is because higher versions understand lower versions for backward compatibility, but not the other way around.

The other situation you may encounter is a mix of clients and several routers on a subnet. There is no mechanism for routers configured with a lower IGMP version to detect a router running a higher version of IGMP. This requires manual intervention: someone must configure the routers to use the same version.

Layer 2 Group Management

As mentioned earlier, Layer 2 devices treat multicast messages as broadcasts when no group management mechanism is present. This not only increases traffic on a particular subnet, but it also means these messages are sent to every device within that subnet (flooding). These devices may process multicast messages differently, depending on the behavior of the operating system and associated hardware. Multicast messages may be processed in hardware, software, or a combination of both. Consequently, multicast messages, or too many multicast messages, may have an adverse effect on a device. It is better to handle multicast in the network and send messages only to the devices that are interested in receiving them.

Two protocols were created to help manage this behavior on LAN segments: Cisco Group Management Protocol (CGMP) and Router-port Group Management Protocol (RGMP). Although both of these protocols still exist in networks today, they have generally been displaced by IGMP snooping, which is discussed in the next section. For this reason, we provide only a brief introduction to these protocols.

Cisco Group Management Protocol

In order to address the Layer 2 challenge of multicast being treated as broadcast, Cisco first developed a proprietary solution called CGMP. At the time of development, Layer 2 switches did not offer the Layer 3 inspection or snooping of messages that they do today. CGMP was used between the connected router and switch. The router would send IGMP information to the switch using CGMP, indicating which clients had registered. The receiving switch could then determine the interfaces out which to send the outgoing multicast messages.

CGMP behaves in the following manner: When a host is interested in receiving a stream for a particular group destination address (GDA), it sends an IGMP report message. This message is received by the router, and the router in turn sends a CGMP Subnetwork Access Protocol (SNAP) frame to the destination MAC address of 0x0100.0CDD.DDDD with the following information:

Image Version: 1 or 2

Image Message field: Join or Leave

Image Count: Number of address pairs

Image MAC address of the IGMP client

Image Group multicast address/Group destination address (GDA)

Image Unicast source address (USA): MAC address of the host sending the join message

The attached switch, configured to support CGMP, receives the frame from the router. The switch looks at the USA and performs a lookup in the content addressable memory (CAM) table to determine the interface of the requesting host. Now that the interface of the requesting host has been determined, the switch places a static entry for the GDA and links the host address in the CAM table if this is the first request for the GDA. If there was already an entry for the GDA, the switch just adds the USA to the GDA table.
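Conceptually, the switch's CGMP handling is a pair of table lookups. The following Python sketch is illustrative only; the table structures, MAC addresses, and port names are hypothetical. It mimics the behavior just described: resolve the USA to a port through the CAM table, then bind that port to the GDA.

# Hypothetical tables for illustration only.
cam_table = {"000f.f7b1.67e0": "Gi0/2",        # host MAC -> switch port
             "0022.5561.2501": "Gi0/12"}       # router MAC -> switch port

gda_table = {}                                 # GDA MAC -> set of member ports

def process_cgmp_join(gda, usa):
    """Handle a CGMP join (GDA/USA pair) relayed by the router."""
    port = cam_table.get(usa)                  # find the requesting host's port
    if port is None:
        return                                 # unknown host; nothing to bind
    gda_table.setdefault(gda, set()).add(port)

process_cgmp_join("0100.5e40.0707", "000f.f7b1.67e0")
print(gda_table)    # {'0100.5e40.0707': {'Gi0/2'}}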


Note

Due to conflicts with other protocols such as HSRPv1, CGMP may have certain features disabled. Always check current configuration guides before enabling CGMP. Using IGMP snooping is an easy way to avoid such conflicts.


The CGMP Leave Process

The leave process is dependent on the IGMP version of the host. IGMPv1 does not provide a mechanism for a host to inform the network when it no longer wants to receive a stream. When an IGMPv1 host leaves, the only way the network realizes that the host is no longer participating in the multicast stream is through an IGMP query message. You can imagine that having a host join a stream, then leave it and join another stream, and so on, can create a significant amount of unnecessary traffic on the network. Consider someone watching IPTV and quickly changing channels. To address this problem, the router periodically sends IGMP query messages to determine whether devices are still interested in receiving the stream. If a router does not receive a response after sending three IGMP query messages, it informs the switch via CGMP, which then removes the entries for the GDA.

IGMPv2 added the functionality of a leave message; this provides the ability for a host to gracefully leave a session that it is no longer interested in receiving. When a host sends an IGMP leave message, the router sends a query message and starts a query-response timer. This process is used to determine whether there are hosts on that specific network that are still interested in receiving the multicast stream. If the router does not receive a response, it sends a CGMP message to the switch informing it to remove the entries for the GDA.

Router-Port Group Management Protocol

Along the same lines as CGMP, another protocol, RGMP, was developed to address the multicast communication of routers over a switched network. When several routers are connected to the same Layer 2 switched network, multicast messages are forwarded to all protocol independent multicast (PIM) routers, even those that are not interested in receiving the multicast streams.

RGMP is configured on the switch in conjunction with IGMP snooping (IGMP snooping is discussed at length in the next section of this chapter). The routers that are connected to the switch are configured with PIM sparse-mode and RGMP. A router configured with RGMP sends an RGMP hello message to the attached switch. The switch creates an entry indicating that the receiving interface is an RGMP router port and does not forward multicast traffic to that interface unless it receives a join message from the router. A router interested in receiving a specific multicast stream sends an RGMP join message to the switch with the GDA. The switch in turn creates an entry for the GDA and links the router interface in the CAM table.

We have covered two of the four RGMP message types, "hello" and "join." The other messages are "bye" and "leave." The "bye" message tells the switch to place the interface back in a normal forwarding state. Finally, the "leave" message is sent by the router when it is no longer interested in receiving a specific multicast stream.

Snooping

According to the Merriam-Webster dictionary, snooping is "to look or pry especially in a sneaking or meddlesome manner." When we use this term in reference to multicast, it means much the same thing, minus the meddlesome manner. By monitoring the conversations or messages sent between devices on the network, a switch can gain a great deal of information, which in turn can be used to tune network behavior to be much more efficient. Over the last several years, Cisco has made great improvements in the intelligence of the components that make up a switch. Switches can now perform Layer 3 services, capture analytics, rewrite information, and so on, all at line rate. This increased intelligence gives a Layer 2 switch the capability to look at more than just the destination MAC address; it can look deep into the message and make decisions based on Open Systems Interconnection (OSI) Layer 2 through Layer 7 information.

IGMP Snooping

IGMP snooping is one of those features that does exactly what it says. A network component, generally a Layer 2 switch, monitors frames from devices and, in this case, listens specifically for IGMP messages. During the snooping process, the switch listens for IGMP messages from both routers and hosts. After discovering a device and determining which GDA that particular device is interested in, the switch creates an entry in the CAM table that maps the GDA to the interface; a short sketch of this table-building logic follows the next list.

Switches learn about routers using several mechanisms:

Image IGMP query messages

Image PIMv1 and/or PIMv2 hellos
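The result of this learning can be modeled as a simple forwarding table keyed by VLAN and group. The following Python sketch is illustrative only (the table structures, ports, and message labels are hypothetical): it records the port of any host that sends a membership report and separately marks ports where queries or PIM hellos are heard as mrouter ports, mirroring the behavior described above.

snooping_table = {}      # (vlan, group) -> set of member ports
mrouter_ports  = {}      # vlan -> set of router-facing ports

def snoop(vlan, port, msg_type, group=None):
    """Very small model of IGMP snooping decisions for a received frame."""
    if msg_type in ("igmp-query", "pim-hello"):
        mrouter_ports.setdefault(vlan, set()).add(port)            # learn mrouter port
    elif msg_type == "igmp-report":
        snooping_table.setdefault((vlan, group), set()).add(port)  # learn member port

snoop(12, "Gi0/12", "pim-hello")
snoop(12, "Gi0/2", "igmp-report", "224.64.7.7")
print(mrouter_ports)     # {12: {'Gi0/12'}}
print(snooping_table)    # {(12, '224.64.7.7'): {'Gi0/2'}}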

Examine Figure 2-10; it is used to explain IGMP snooping throughout this section.

Image

Figure 2-10 IGMP Snooping

Example 2-4 shows a packet capture of an IGMP membership query generated by a router.

Example 2-4 IGMP Query Packet Capture


Ethernet Packet:  60 bytes
      Dest Addr: 0100.5E00.0001,   Source Addr: 0C85.2545.9541
      Protocol: 0x0800

IP    Version: 0x4,  HdrLen: 0x6,  TOS: 0xC0 (Prec=Internet Contrl)
      Length: 32,   ID: 0x1FC5,   Flags-Offset: 0x0000
      TTL: 1,   Protocol: 2 (IGMP),   Checksum: 0x57A8 (OK)
      Source: 192.168.12.1,     Dest: 224.0.0.1

      Options: Length = 4
      Router Alert Option: 94 0000

IGMP  VersionType: 0x11,  Max Resp: 0x64,  Checksum: 0xEE9B (OK)

Version 2 Membership Query
      Group Address: 0.0.0.0


When the switch determines there is a router attached, it places an entry in the IGMP snooping table that specifies the interface, as Example 2-5 demonstrates.

Example 2-5 IGMP Snooping Table


Switch#show ip igmp snooping mrouter
Vlan    ports
----    -----
  12    Gi0/12(dynamic)


In this case, the switch has learned that a router is attached to interface Gi0/12. This interface is known as the mrouter port, or multicast router port. The mrouter port is essentially a port that the switch has discerned is connected to a multicast-enabled router that can process IGMP and PIM messages on behalf of connected hosts. An IGMP-enabled VLAN or segment should always have an mrouter port associated with it. We can also see the effect using the debug ip igmp snooping router command, which gives us greater insight into the process, as Example 2-6 demonstrates.

Example 2-6 debug ip igmp snooping router Output


Switch#debug ip igmp snooping router
router debugging is on
01:49:07: IGMPSN: router: Received non igmp pak on Vlan 12, port Gi0/12
01:49:07: IGMPSN: router: PIMV2 Hello packet received in 12
01:49:07: IGMPSN: router: Is not a router port on Vlan 12, port Gi0/12
01:49:07: IGMPSN: router: Created router port on Vlan 12, port Gi0/12
01:49:07: IGMPSN: router: Learning port: Gi0/12 as rport on Vlan 12


As you see from the output in Example 2-6, the switch received a PIMv2 hello packet from interface Gi0/12 and changed the state of the port to a router port.

When a host connected to the switch wants to join a multicast stream, it sends an IGMP membership report. In Example 2-7, the host connected to port Gi0/2 is interested in receiving data from 224.64.7.7. Using the debug ip igmp snooping group command, we can monitor the activity.

Example 2-7 debug ip igmp snooping group Output


Switch#debug ip igmp snooping group
router debugging is on
01:58:47: IGMPSN: Received IGMPv2 Report for group 224.64.7.7 received on Vlan 12,
  port Gi0/2
01:58:47: IGMPSN: group: Adding client ip 192.168.12.20, port_id Gi0/2, on vlan 12


From the output in Example 2-7, we can ascertain that the host connected to Gi0/2 is attempting to connect to the multicast group 224.64.7.7.

Using the show ip igmp snooping groups command, we can also see the entry in the switch, as demonstrated in Example 2-8.

Example 2-8 show ip igmp snooping groups Output


Switch#show ip igmp snooping groups
Vlan      Group                    Type        Version     Port List
-----------------------------------------------------------------------
12        224.0.1.40               igmp        v2          Gi0/12
12        224.64.7.7               igmp        v2          Gi0/2, Gi0/12


The output in Example 2-8 specifies the VLAN, multicast group, IGMP version, and ports associated with each group.

The packet capture in Example 2-9 shows the membership report generated from the host with the MAC address of 0x000F.F7B1.67E0. Notice how the destination MAC and destination IP are those of the multicast group the host is interested in receiving. The IGMP snooped mrouter port entry ensures this IGMP membership report is forwarded to the multicast router for processing, if necessary. See the next section on maintaining group membership.

Example 2-9 IGMP Membership Report Packet Capture


Ethernet Packet:  60 bytes
      Dest Addr: 0100.5E40.0707,   Source Addr: 000F.F7B1.67E0
      Protocol: 0x0800

IP    Version: 0x4,  HdrLen: 0x5,  TOS: 0xC0 (Prec=Internet Contrl)
      Length: 28,   ID: 0x0000,   Flags-Offset: 0x0000
      TTL: 1,   Protocol: 2 (IGMP),   Checksum: 0x051D (OK)
      Source: 192.168.12.20,     Dest: 224.64.7.7

IGMP  VersionType: 0x16,  Max Resp: 0x00,  Checksum: 0x02B8 (OK)

Version 2 Membership Report
      Group Address: 224.64.7.7


The output in Example 2-10 shows several hosts connecting to the multicast group.

Example 2-10 show ip igmp snooping groups Output


Switch#show ip igmp snooping groups
Vlan      Group                    Type        Version     Port List
-----------------------------------------------------------------------
12        224.0.1.40               igmp        v2          Gi0/15
12        224.64.7.7               igmp        v2          Gi0/1, Gi0/2,
                                                           Gi0/4, Gi0/15


Maintaining Group Membership

As hosts are added to or removed from the multicast group, the switch manages the interaction. The switch does not notify the router of any additions or removals to the group, with the exception of the last host. If there is only one host and it leaves the multicast group, the switch immediately sends a group leave message to the upstream router. One of the interesting aspects of this message is that the switch spoofs the IP address of the last client. Look carefully at the output in Example 2-11.

Example 2-11 IGMP Leave Capture Output


Ethernet Packet:  60 bytes
      Dest Addr: 0100.5E00.0002,   Source Addr: 0013.19C6.A60F
      Protocol: 0x0800

IP    Version: 0x4,  HdrLen: 0x6,  TOS: 0xC0 (Prec=Internet Contrl)
      Length: 32,   ID: 0x0000,   Flags-Offset: 0x0000
      TTL: 1,   Protocol: 2 (IGMP),   Checksum: 0x7745 (OK)
      Source: 192.168.12.40,     Dest: 224.0.0.2

      Options: Length = 4
      Router Alert Option: 94 0000

IGMP  VersionType: 0x17,  Max Resp: 0x00,  Checksum: 0x01B8 (OK)

Version 2 Leave Group
      Group Address: 224.64.7.7


The benefit of this behavior is that when the last device leaves the multicast group, the router does not have to wait for a timeout. Notice also that the MAC address of the source in the packet in Example 2-11 is the MAC address of the switch as depicted in the show interface Gi0/12 output in Example 2-12. This is the mrouter interface for this segment.

Example 2-12 show interface Output


Switch#show interface Gi0/12
GigabitEthernet0/12 is up, line protocol is up
  Hardware is Gigabit Ethernet, address is 0013.19C6.A60F (bia 0013.19C6.A60F)
  MTU 1500 bytes, BW 10000 Kbit/sec, DLY 1000 usec,
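Extending the small snooping model sketched earlier, the leave-side logic can be expressed as follows. This Python fragment is illustrative only (the table contents, ports, and addresses are hypothetical): member ports are silently pruned as hosts leave, and only when the last member departs does the switch signal the router, spoofing the departing host's IP address as the capture in Example 2-11 shows.

snooping_table = {(12, "224.64.7.7"): {"Gi0/1", "Gi0/2"}}    # hypothetical starting state

def process_leave(vlan, port, group, host_ip):
    """Model of snooping behavior when a member port sends an IGMPv2 leave."""
    members = snooping_table.get((vlan, group), set())
    members.discard(port)
    if not members:                           # the last member just left
        snooping_table.pop((vlan, group), None)
        print(f"Proxy leave for {group} sent out the mrouter port, "
              f"spoofing source {host_ip}")
    else:
        print(f"{port} pruned from {group}; router not notified")

process_leave(12, "Gi0/1", "224.64.7.7", "192.168.12.10")
process_leave(12, "Gi0/2", "224.64.7.7", "192.168.12.40")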


Configuring IP IGMP Snooping

The configuration could not be easier for newer Catalyst or Nexus product switches. The command is as follows:

C2970(config)#ip igmp snooping

Here is the best part—it is on by default and obviously not necessary to type the previous command. You can confirm that IGMP snooping is functional and verify the specific operating parameters with the show ip igmp snooping command, as demonstrated in Example 2-13.

Example 2-13 Confirming IGMP Snooping Functionality and Parameters


Switch#show ip igmp snooping
Global IGMP Snooping configuration:
-------------------------------------------
IGMP snooping                : Enabled
IGMPv3 snooping (minimal)    : Enabled
Report suppression           : Enabled
TCN solicit query            : Disabled
TCN flood query count        : 2
Robustness variable          : 2
Last member query count      : 2
Last member query interval   : 1000


The output was truncated for brevity, but the IGMP snooping information will be displayed per VLAN.

The Process of Packet Replication in a Switch

Nearly all the protocols required to accomplish multicast forwarding are open standards, ratified by working groups like the IETF. However, the actual process of forwarding packets through a network device is not an open standard; the same is true for unicast forwarding. The way each manufacturer, or in some cases each product line, implements forwarding is what differentiates each platform from the next.

At the heart of IP multicast forwarding is the packet replication process. Packet replication is the process of making physical copies of a particular packet and sending a copy out each destination interface in the derived forwarding path.

What makes the process of replication so different from platform to platform is where the replication occurs. Each Cisco networking platform handles this process a little differently. Many routers use centralized processing to perform replication. Other more advanced routers and switches with distributed processing require specialized application-specific integrated circuits (ASICs) to perform packet replication and forwarding from one line-card to another. At the beginning of the Internet, the key objective of packet handling was to simply forward packets.

Packet forwarding at wire speed required the specialization of ASIC processing. As the features and applications embedded on routers and switches grew (including critical components in modern networks like QoS, MPLS, multicast, SNMP, flow reporting, and so on), so did the need for packet replication and treatment at wire speed. Without ASICs, router processors would be overwhelmed by packet handling requirements. Cisco Systems has spent over 25 years developing custom ASICs, many specifically for packet replication needs. With that understanding, the router manufacturer must make a choice about where and on which ASIC(s) to replicate packets. This is especially true in distributed routing and switching platforms. A distributed platform uses ASICs throughout the device to push forwarding decisions as close to the interface as possible.

Some distributed platforms may forward incoming multicast packets to the central processing card. That card may take special action on the packet, replicate the packet, or forward it to other line cards for replication. If replication occurs on the central processor, the model is centralized replication, and the device behaves like a traditional bus. In a centralized replication model, resource pressure falls on the central processor and centralized resources like memory. Depending on the size of the multicast deployment and the number of packets requiring replication, this can result in serious performance problems for control-plane traffic.

Other distributed platforms may use the ASICs associated with the inbound interface or line card to perform replication. This is known as ingress replication. In this case, the incoming interface line card replicates the multicast packet and sends copies via a fabric toward the exit interfaces in the path. Ingress replication distributes resource pressure across multiple processors and may still require occasional forwarding by the central processor depending on enabled features.

The ASICs associated with the exit interfaces can also perform replication. This, of course, is egress replication. In some instances, replicating only at the egress interface could mean an obvious loss in efficiency; however, exit line cards in many models terminate encapsulated multicast packets destined for certain domains. This means that the egress line card can be an ideal point of replication because that particular card might have many interfaces with subscribed receivers downstream.


Note

Downstream is the direction flowing away from the sender, toward the receiver. Fabric-facing ASICs on these cards will take on the role of replication.


Platforms may implement a combination of these replication methods: centralized, ingress, and egress. This is known as distributed replication. An incoming interface ASIC may perform one level of replication and send one packet copy to each line card with exit interfaces in the forwarding tree. The egress line cards can then create additional copies, one for each interface on that card in the path. This method further distributes resource pressure across as many ASICs as possible. Figure 2-11 represents a basic distributed replication model using replication on both the incoming line card and the outgoing line card. (This is a common model used in Cisco devices, for example the Catalyst 6500 switch line.)

Image

Figure 2-11 Packet Replication
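To make the trade-off between these models concrete, the following sketch counts how many copies the ingress line card must create under ingress-only replication versus the distributed model, where the ingress card makes one copy per egress card and each egress card fans out to its own interfaces. The line cards and interface lists are purely hypothetical.

# Hypothetical outgoing interface list, grouped by egress line card.
oif_by_card = {"LC1": ["Gi1/1", "Gi1/2", "Gi1/3"],
               "LC2": ["Gi2/1"],
               "LC3": ["Gi3/1", "Gi3/2"]}

total_oifs = sum(len(ports) for ports in oif_by_card.values())

ingress_only_copies = total_oifs                 # ingress card replicates per interface: 6
distributed_ingress = len(oif_by_card)           # one copy per egress card: 3
distributed_egress  = {card: len(ports) for card, ports in oif_by_card.items()}

print("Ingress-only replication:", ingress_only_copies, "copies on the ingress card")
print("Distributed replication: ingress card makes", distributed_ingress, "copies;",
      "egress cards make", distributed_egress)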

The important thing to remember is that each manufacturer and each platform handles replication differently. To be truly competitive, each manufacturer must perform replication in a safe and efficient manner. That means the platform must forward as quickly as possible, while preventing loops and protecting precious control-plane resources. Any architect or engineer seeking to deploy IP multicast technologies should pay special attention to the replication process of each platform in the network path, as well as any specialized hardware and software feature enhancements.

Protecting Layer 2

IGMP snooping is a mechanism that we configure on a switch to minimize the impact of multicast traffic being directed to devices that are not interested in receiving it. This feature helps protect not only the infrastructure resources, but the devices that are attached to the network. Another feature that is well worth mentioning and will help to ensure the successful operation of your network is storm control.

Storm Control

Data storms in networks can be generated in several ways, including an intentional denial of service (DoS) attack, a defective network interface card (NIC), a poorly programmed NIC driver, and so on. To prevent an inordinate amount of broadcast, multicast, or even unicast traffic from overwhelming a switch, the storm control feature offers the capability to set thresholds for these types of traffic on a per-port basis.

Configuration is applied on a per-port basis and offers the capability to specify thresholds based on a percentage of bandwidth, bits per second (bps), or packets per second (pps). If the threshold is reached, you can either send a Simple Network Management Protocol (SNMP) trap message or shut down the port by placing it in an error-disable state. The configuration parameters are as follows:

storm-control broadcast level <0.00 - 100.00> / bps / pps
storm-control multicast level <0.00 - 100.00> / bps / pps
storm-control unicast level <0.00 - 100.00> / bps / pps
storm-control action trap
storm-control action shutdown

In the following example, the switch is configured to send an SNMP trap message when the broadcast level exceeds 50 percent:

Switch(config)#interface gigabitEthernet 0/2
Switch(config-if)#storm-control broadcast level 50
Switch(config-if)#storm-control action trap

The following is the SNMP message generated when the broadcast level has been exceeded:

%STORM_CONTROL-3-FILTERED: A Broadcast storm detected on Gi0/2. A packet filter
action has been applied on the interface.

You also have the ability to place the port in an error-disable state using the following command:

Switch(config-if)#storm-control action shutdown

The following output depicts the messages shown in the event of a port shutdown:

%PM-4-ERR_DISABLE: storm-control error detected on Gi0/2, putting Gi0/2 in
err-disable state
%STORM_CONTROL-3-SHUTDOWN: A packet storm was detected on Gi0/2. The interface has
been disabled.
%LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet0/2, changed state
to down
%LINK-3-UPDOWN: Interface GigabitEthernet0/2, changed state to down

We mentioned DoS attacks earlier in this section. When you configure the storm-control action shutdown command, you may have to manually re-enable a port that has been placed in the error-disabled state. Using the errdisable recovery commands helps to mitigate that problem:

Switch(config)#errdisable recovery cause storm-control
Switch(config)#errdisable recovery interval 30

The following output shows the logging message after recovery:

2d07h: %PM-4-ERR_RECOVER: Attempting to recover from storm-control err-disable
state on Gi0/2
2d07h: %LINK-3-UPDOWN: Interface GigabitEthernet0/2, changed state to up
2d07h: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet0/2, changed
state to up


Note

Use caution when setting storm levels because you may inadvertently create your own DoS attack and deny legitimate traffic.


Summary

The process of communication between devices on an IP network requires the handling or encapsulation of data at each layer of the OSI model. Packets are composed of MAC addresses, IP addresses, port numbers, and other necessary information. Multicast at Layer 2 has unique requirements regarding MAC addresses and the way IP addresses are mapped to them. In the mapping process, 5 bits of the IP address are overwritten by the reserved OUI prefix, which causes a 32-to-1 IP multicast address-to-multicast MAC address ambiguity. Client devices on the network use Internet Group Management Protocol (IGMP) to signal the intent to receive multicast streams and, in most cases, use IGMP to send leave messages. Modern switches have the capability to "snoop," or listen to, IGMP messages and build appropriate forwarding tables. Timely delivery of messages is the most important role of the network, and protecting network resources is critical to that function. Storm control can be used to aid in protecting network elements by limiting these types of traffic. Understanding the intricacies of how Layer 2 devices deliver multicast messages internally will help you build an infrastructure to support your business initiatives.

References

Request for Comments (RFC) 1054: Host Extensions for IP Multicasting

RFC 1112: Host Extensions for IP Multicasting (defines IGMP, Version 1)

RFC 2236: Internet Group Management Protocol, Version 2

RFC 3376: Internet Group Management Protocol, Version 3

RFC 4604: Internet Group Management Protocol, Version 3 and Multicast Listener Discovery Protocol Version 2 (MLDv2) for Source-Specific Multicast

RFC 2362: Protocol Independent Multicast-Sparse Mode (PIM-SM)
