Chapter 5. IP Multicast Design Considerations and Implementation

In this chapter, we build on the material introduced so far. We look at multicast group scoping and how multicast networks can be bounded to control the flow of information and provide security. We explain organizational and global group assignments, including address organization and schemas. Group scoping with hybrid designs and RP placement are examined, including MSDP mesh groups and scoped multicast domains. We delve into traffic engineering, how Layer 3 devices make forwarding decisions, and how to manipulate those decisions for path selection. IP multicast best practices and security are also covered, both for the network as a whole and for the components that make up that network. Finally, we combine the elements discussed in the chapter in a practical case study to solidify what you have learned.

Multicast Group Scoping

With the depletion of public IPv4 addresses and the inability to obtain additional numbers from the Internet Assigned Numbers Authority (IANA), public IPv4 addresses are at a premium. Many technologies exist to make address management easier, including RFC 1918 and RFC 6598 private IP addresses and network address translation (NAT). These technologies impact the way we manage IP addresses internally. In addition, many routing protocols simply work better when address spaces can be summarized at particular boundaries. Thus, many organizations rely on an IP address schema to manage internal address assignments.

If you have ever administered an IPv4 or IPv6 network, you know that IP schemas are a very important part of network design and operation. An IP schema is essentially a map of how IP addresses are assigned and managed within the network. For example, the schema may prescribe very specific IP subnets for the network infrastructure while also making available other subnets for DHCP address assignments for end points and hosts. This is especially relevant for enterprises that may have limited access to public address space.

Many schemas use particular IP address octets to imply a specific meaning within the organization. For example, a network architect may assign the private network 172.16.0.0/16 to cover all network infrastructure address assignments. Administrators may break this block down further to provide additional control and meaning; for example, the routers in a given location may be assigned addresses from the 172.16.10.0/24 subnet, derived from the 172.16.0.0/16 supernet.

IPv4 multicast addresses are also a limited commodity. Organizations that roll out multicast applications should create a detailed address schema. This schema helps control address assignment and assists in network operations. If the same IPv4 unicast schema principles are applied to the IPv4 multicast address schema, operations and design engineers can quickly identify applications and application properties from the assigned address.

Scoping is not just about which addresses to assign. Just like the underlying unicast network, multicast networks must be bounded in order to securely control the flow of information. In many cases, the boundary of the unicast autonomous system (AS) may coincide with the boundary of the multicast network, but this is not always the case. The scope of any IPv4 multicast domain should, at minimum, coincide with the scope of the unicast domain on which it is being overlain. Multiple multicast domains can overlay on a single unicast network, which can mean that multiple multicast scopes may be employed in the same unicast domain.

Some multicast boundaries occur naturally as part of the process of configuring the network. One obvious boundary is the one that exists between ASs. For unicast routing, the AS boundary lies between the interior gateway protocol (IGP) and the exterior gateway protocol (EGP, which is most likely BGP). Although route sharing may be configured between them, external networks do not exchange IGP routing information directly with internal routers. For this reason, BGP routing information is often excluded from the processes of many internal overlay protocols, like Multiprotocol Label Switching (MPLS).

Multicast domains can use BGP routes for multicast RPF checks if they are using multicast BGP (MBGP), reviewed later in this chapter. It is rare that all the necessary remote domain routes, such as those of an internal multicast rendezvous point, are shared through native unicast BGP. It is assumed that these routes are internal to the domain and therefore should be excluded by policy. It is also possible that the network may use different paths for external multicast and unicast flows. This can result in an incongruent network that causes RPF failures in the multicast path. Thus, for most multicast networks, unicast BGP still creates a natural boundary, in particular when it comes to RPF checking for loop-free paths. Properly scoping the multicast domain makes it significantly easier to summarize and secure the domain at the domain edge.
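For reference, the following minimal IOS sketch (the AS numbers, neighbor address, and prefix are hypothetical) shows a prefix carried in the BGP IPv4 multicast address family so that a peer can use it for multicast RPF checks independently of the unicast path:

router bgp 65001
 neighbor 192.0.2.1 remote-as 65002
 !
 address-family ipv4 multicast
  network 10.10.0.0 mask 255.255.0.0
  neighbor 192.0.2.1 activate
 exit-address-family

Routes learned in this address family populate the multicast RPF table without influencing unicast forwarding, which is one way to keep an otherwise incongruent topology RPF-clean.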

Organizational and Global Group Assignment Considerations

The public IPv4 multicast address blocks detailed in Chapter 1 are assigned by IANA and are not open for use by an organization's internal, independent applications. As with publicly assigned unicast addresses, nothing prevents deployment of any public address inside a network, but doing so could cause serious conflicts on external-facing routers that carry Internet routes. The same logic applies to IP multicast addresses. When an organization uses multicast privately, it should select addresses from the IPv4 administratively scoped address block.

Both the public and administratively scoped blocks provide a tremendous number of possible addresses. For example, the administratively scoped IPv4 block is a /8, giving the application architect 16,777,216 possible host group addresses for a given application. Very few, if any, networks will ever need this many addresses. Still, the selection of a group should be rigorously controlled by some entity within the organization. Otherwise, group assignment conflicts can occur, and the groups themselves will have little meaning. It is best to create a group address schema to permanently address this within an organization.

The considerations necessary to create a multicast address schema are similar to those needed for a unicast schema. For example, summarization is just as important for multicast as it is for unicast. Even though each group address is routed as a single address (a /32, with a mask of 255.255.255.255), it is best to further subdivide the administrative block on orderly bit boundaries that can take advantage of masking. Each contiguous sub-block of addresses can then represent a particular type of application, giving the address both meaning and additional scope. This makes security, routing, and other policies easier to implement.
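As a simple worked illustration (the application categories are hypothetical), carving the administratively scoped block on a /12 boundary yields 16 sub-blocks, each of which can later be matched with a single mask:

239.0.0.0/12    (239.0.0.0  - 239.15.255.255)   infrastructure
239.16.0.0/12   (239.16.0.0 - 239.31.255.255)   enterprise applications
239.32.0.0/12   (239.32.0.0 - 239.47.255.255)   high-bandwidth video
...and so on, through 239.240.0.0/12.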

There are several methods of determining the best addressing schema for an organization and several questions the architect must answer. These questions include:

• What is the structure of the organization and how will each line of business use multicast applications?

• What is the scale of the multicast deployment, including both planned and unplanned growth?

• What organizational security policy exists for multicast deployments?

• What is the geographical layout of the multicast-enabled network and what is the geographical scope of each application?

• Where are the hosts and where are the sources in the geography?

• What address ranges may overlap with Layer 2 MAC addresses?

• What plans does the organization have for the use of source-specific multicast?

• What is the ability of hosts, servers, and applications to support various address types?

• What are the resource utilization parameters for multicast applications?

The answers to each of these questions may affect how the architect subdivides the group address space. For example, a security policy may dictate that some multicast applications only exist in a local data center, whereas other applications may have an organization-wide boundary. Another example could include dividing groups by the amount of resource consumption or by the business criticality of each application. Important or high-bandwidth groups can receive different treatment than other groups.

If the blocks are properly subdivided, creating policy boundaries is straightforward. If not, each group needs individualized policy statements. The important thing to remember is that no one schema is best for all organizations.
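To make the contrast concrete, consider the following sketch (all group addresses are hypothetical). With assignments aligned on a bit boundary, one ACL entry covers an entire application class; with ad hoc assignments, every group must be enumerated individually:

! Aligned block: a single entry matches the whole class
access-list 10 permit 239.16.0.0 0.7.255.255
!
! Ad hoc assignments: one entry per group
access-list 20 permit host 239.3.77.10
access-list 20 permit host 239.201.5.88
access-list 20 permit host 239.118.6.2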

IPv4 Considerations

An IPv4 group address is 32 bits in length and, when written in dotted decimal, is split into four octets. The private administratively scoped range is 239.0.0.0/8, which fixes the first octet at 239. Table 5-1 shows a simple schema created to separate a small service provider network into meaningful groups. The division begins with geography, followed by application priority, fulfilling some of the design concepts previously mentioned.

Table 5-1 Group Scoping by Octet

Some organizations may have very large topologies that require additional complexity. One way to achieve this is to break down the schema further within each octet. Table 5-2 breaks down the geography octet into eight regions with up to eight Point of Presence (PoP) locations per region, and the priority octet into eight priorities, with eight resource consumption models (high bandwidth versus low bandwidth).

Table 5-2 Group Scoping by Geography and Priority

Using the schema from Table 5-2, if the provider needed a group assignment for a core application or protocol that spans all PoPs and has a priority/consumption of Infrastructure, then 239.17.0.X would suffice. The provider would also use 239.84.34.X (239.[0101][0100].[0010][0010].X) as an assignment for a high-bandwidth, high-priority application scoped to PoP 3 in the South East region. The advantage of such a schema is that routers and firewalls can employ wildcard masks to manage policy statements in the network architecture.


Note

Routers use wildcard masks with IP access control lists (ACLs) to specify what should be matched for further action, depending on how the ACL is applied. Interface subnet masks read from left to right; for example, IP address 172.18.100.129 with a 255.255.255.224 mask. This gives an IP device the delimiting bits between subnet and host. Wildcard masks for IP ACLs reverse this structure; for example, mask 0.0.0.31 is the reverse of 255.255.255.224 (replacing 1s with 0s and 0s with 1s). When the value of the mask is broken down into binary (0s and 1s), the results determine which address bits are to be considered in processing the traffic. A 0 in the mask means that the corresponding address bit must match exactly. A 1 in the mask means that the corresponding address bit is variable. Thus, an ACL statement with subnet 10.0.0.0 and mask 0.0.0.255 matches any address that begins with 10.0.0, because the last octet is variable.


If the provider wanted to place boundaries on all groups within the Central region, a simple ACL using a network/mask entry of 239.121.0.0 0.15.255.255 could accomplish the task. Similarly, a network/mask entry of 239.0.4.0 0.255.251.255 matches any application deemed highly resource consumptive. This schema also has the advantage of allowing for growth or additional scope constraints that may arise in the future.
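As a rough sketch of how such an entry might be applied (the interface and ACL number are hypothetical), the ip multicast boundary command can use the wildcard entry to fence the Central region's scoped groups at its WAN edge while permitting all other groups to cross:

access-list 10 deny   239.121.0.0 0.15.255.255
access-list 10 permit 224.0.0.0 15.255.255.255
!
interface GigabitEthernet0/1
 description WAN uplink leaving the Central region
 ip multicast boundary 10

The boundary blocks matching multicast traffic in both directions; the filter-autorp keyword can additionally be used to filter Auto-RP announcements for the denied range.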

This schema also has potentially serious drawbacks. Wildcard mask overlap might occur if certain sub-blocks of groups need to match a single ACL statement. Layer 2 MAC address overlap could become a serious issue as well. Additionally, the region, PoP, priority, and consumption model are not readily apparent in the address, and a breakdown of the bits might be necessary to identify an application's scope. A simpler schema may do more for human interaction but be more difficult to draw boundaries around. Also keep in mind that ACL-based boundaries apply to the multicast data plane; isolating the multicast control plane efficiently requires additional design consideration, covered in detail later in this chapter.

The point is, any group schema should address the needs of the organization; there is no one-size-fits-all approach. If the multicast design overlays an existing multicast network, it may not be possible to change the schema without disruption; however, the value of such a workable schema is immeasurable in a large multicast deployment. Keep in mind: If only a few multicast applications are on the network, there is no need to make a large and complex schema like the one shown in Table 5-2. Instead, create a table and schema that has meaning and value for your specific multicast implementation.

Another IPv4 sub-block consideration arises when using source-specific multicast (SSM). Remember that SSM can use both the 232/8 block, for global and enterprise use, and the 239.232/16 block, for private-only use. Administrators should never assign group space from the 232/8 block unless it is for SSM traffic. Many Layer 3 devices are preprogrammed to treat this block as SSM and will build SSM PIM trees accordingly.

It is also prudent when using SSM to subdivide the public and private SSM blocks further to give them scope and meaning (as with the preceding example schema). Using the 239.232/16 block for internal-only applications may provide fewer options for additional scope assignment, but it will still make bounding the groups easier. Table 5-3 shows a possible subdivision of the 239.232/16 private SSM subblock using the third octet to identify geographic scope.

Table 5-3 Group Scoping by Octet Applied
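On IOS devices, the SSM ranges in use can be declared explicitly. The following minimal sketch (the ACL number is hypothetical) enables SSM behavior for both the global 232/8 block and the private 239.232/16 sub-block used in Table 5-3:

access-list 30 permit 232.0.0.0 0.255.255.255
access-list 30 permit 239.232.0.0 0.0.255.255
ip pim ssm range 30

Groups matching the range are treated as SSM: no RP or shared tree is used, and (S,G) trees are built directly from IGMPv3 membership reports.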

In addition to creating an addressing schema that makes sense for your organization, all administrators should follow several basic rules. Some of these rules are flexible, in that they can easily, even thoughtlessly, be broken; care should be taken to design a schema that adheres to them. Doing so streamlines configurations, makes troubleshooting easier, and ensures that specific router features do not interfere with proper multicast operations:

• Follow IANA's addressing guidelines, especially the use of 239/8 addresses for internal applications. RFC 2365 describes the use of administratively scoped IP multicast addresses. This address range should be used for all internal applications. Again, this block is similar in concept to the use of RFC 1918 addresses for unicast.

• Avoid using any group address with the x.0.0.x or x.128.0.x prefixes.

This rule should be somewhat obvious because the 224.0.0.X range encompasses link-local applications. Using an address in this range could interfere with critical network control traffic that uses multicast, such as EIGRP or OSPF. Let these addresses remain reserved as per the intention of IANA. Because of the 32:1 overlap of IP multicast addresses to Ethernet MAC addresses, routers and switches, including IGMP snooping functions, are unable to distinguish these addresses at Layer 2; consequently, you should avoid any multicast address in the [224-239].0.0.x and [224-239].128.0.x ranges (see the mapping sketch after this list). As an example, notice that the schema in Table 5-3 eliminates this problem by requiring the first elements to begin with bits 0001 and not 0000.

• Always use the 232/8 block for SSM applications, including interdomain one-to-many applications. RFC 4608 describes the use of the 232/8 address range for PIM-SSM interdomain applications.

• Petition IANA for a publicly recognized address from the 224 address range for any public-facing application, but only if the application is truly public. Content providers that need to ensure against an address collision with any other provider or customer on a global scale should consider this block.
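To illustrate the 32:1 overlap behind the second rule, recall that only the low-order 23 bits of an IPv4 group address are mapped into the multicast MAC address; the remaining five variable bits (the low four bits of the first octet and the high bit of the second octet) are lost:

224.0.0.5    ->  0100.5e00.0005   (link-local; OSPF all-SPF-routers)
239.0.0.5    ->  0100.5e00.0005   (same MAC address)
239.128.0.5  ->  0100.5e00.0005   (same MAC address)

A switch that forwards based on these MAC addresses, including its IGMP snooping function, cannot tell the three groups apart, which is why the x.0.0.x and x.128.0.x ranges should be avoided.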

Using Group Scoping for Hybrid Designs and RP Placement

We reviewed the different RP design modes in Chapter 4. The key considerations for RP redundancy in an any-source multicast design are as follows:

1. High-Availability mode: Active/Active or Active/Standby options:

• Anycast RP fits the bill for Active/Active mode.

• Active/Standby mode is supported by Auto-RP and BSR.

2. Scoping requirement: RP domains and multicast address scheme to scope regions for multicast:

• Scoping requirements need to be reviewed with applications aligned to the scope region. A local scope requires sources assigned locally within the region, and appropriate control methods must be determined so that local application traffic is not transported across the WAN infrastructure. Application containment within a scope can be used to limit bandwidth or satisfy a local application dependency. Adding multiple local scopes also increases administrative overhead; the choice of local scope should be aligned to the desired outcome and the benefits to the network infrastructure.

• Care should be taken to keep the scopes within manageable limits permitted by the applications.

• Multicast group address selection, with an RP for each local scope, should be considered.

3. Downstream propagation: Dynamic or static propagation:

The propagation method for an RP should be aligned to the multicast scope addresses. The static propagation method adds a static RP address, with an associated scope, at every downstream router; this is a painstaking administrative task. A dynamic propagation method is preferred because the RP and ACL configuration needs to be done only at the RP responsible for the scope.
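For contrast, a minimal sketch of the static method (the addresses and ACL number are hypothetical), which would have to be repeated, and kept consistent, on every downstream router in the scope:

ip pim rp-address 10.1.1.1 10
access-list 10 permit 239.1.0.0 0.0.255.255

The ACL binds the static RP to its scope range; any change to the RP or the range then requires touching every router again, which is why dynamic propagation is preferred.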

Table 5-4 maps these design features to the RP propagation methods covered in Chapter 4:

Table 5-4 Comparison of RP Distribution Methods

As shown in Table 5-4, no single one of the known methods available today delivers the ideal enterprise RP design for ASM. The best choice for an architect would be an active/active implementation that also takes care of scoping and dynamic failover. (A score of 3/3 meets all three requirements: Active/Active high availability, support for scoping, and support for dynamic propagation.) This is possible by using a hybrid design: an RP design that leverages multiple protocols to achieve the desired effect for an enterprise-scoped multicast design. Table 5-5 outlines the mix of protocols used to achieve this design state.

Table 5-5 Hybrid Design Comparison

This hybrid RP design is achieved by using Anycast RP to establish RP state information and Auto-RP to propagate the RP information, aligned to scope ranges, to the downstream routers. Figure 5-1 illustrates the function of the hybrid design.

Figure 5-1 Hybrid RP Design

In the diagram, RP1 and RP2 act as the RPs for the entire enterprise domain. The RP state information is maintained by an Anycast MSDP relationship built between RP1 (10.2.1.1) and RP2 (10.2.1.2). The shared address 10.1.1.1 is used as the Auto-RP candidate RP; because RP1 and RP2 both advertise the same candidate RP address, Auto-RP elects 10.1.1.1. Auto-RP ties the multicast scoping access list to the candidate RP announcement, which ensures that downstream routers dynamically receive the RP information with the ACL-defined scope range attached. Example 5-1 provides a sample configuration.

Example 5-1 Hybrid Design Configuration: Anycast RP with Auto-RP


RP1
ip multicast-routing
ip cef
!
interface Loopback0
 ip address 10.1.1.1 255.255.255.255
 ip pim sparse-mode
!
interface Loopback1
 ip address 10.2.1.1 255.255.255.255
 ip pim sparse-mode
!
interface Ethernet0/0
 ip address 192.168.1.1 255.255.255.0
 ip pim sparse-mode
!
interface Ethernet1/0
 ip address 192.168.2.1 255.255.255.0
 ip pim sparse-mode
!
router eigrp 1
 network 0.0.0.0
 eigrp router-id 10.2.1.1
!
ip pim autorp listener
ip pim send-rp-announce Loopback0 scope 20 group-list 1 interval 10
ip pim send-rp-discovery Loopback0 scope 20 interval 10
ip msdp peer 10.2.1.2 connect-source Loopback1
ip msdp cache-sa-state
ip msdp default-peer 10.2.1.2
!
access-list 1 permit 239.1.0.0 0.0.255.255


RP2
ip multicast-routing
ip cef
!
interface Loopback0
 ip address 10.1.1.1 255.255.255.255
 ip pim sparse-mode
!
interface Loopback1
 ip address 10.2.1.2 255.255.255.255
 ip pim sparse-mode
!
interface Ethernet0/0
 ip address 192.168.1.2 255.255.255.0
 ip pim sparse-mode
!
interface Ethernet1/0
 ip address 192.168.3.1 255.255.255.0
 ip pim sparse-mode
!
router eigrp 1
 network 0.0.0.0
 eigrp router-id 10.2.1.2
!
ip pim autorp listener
ip pim send-rp-announce Loopback0 scope 20 group-list 1 interval 10
ip pim send-rp-discovery Loopback0 scope 20 interval 10
ip msdp peer 10.2.1.1 connect-source Loopback1
ip msdp cache-sa-state
ip msdp default-peer 10.2.1.1
!
access-list 1 permit 239.1.0.0 0.0.255.255


Example 5-2 shows the configuration for the downstream router.

Example 5-2 Hybrid Design: Downstream Router Configuration


ip multicast-routing
ip cef
!
!
interface Loopback0
 ip address 10.2.1.3 255.255.255.255
!
interface Ethernet1/0
 ip address 192.168.2.2 255.255.255.0
 ip pim sparse-mode
!
interface Ethernet2/0
 ip address 192.168.3.2 255.255.255.0
 ip pim sparse-mode
!
router eigrp 1
 network 0.0.0.0
!
ip pim autorp listener


Example 5-3 shows the RP mapping command output at the downstream router.

Example 5-3 Hybrid Design: Downstream RP Mapping


R3# show ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 239.1.0.0/16 ←  Scoped masked range configured using ACL 1 applied to
  the candidate RP configuration
  RP 10.1.1.1 (?), v2v1

    Info source: 10.1.1.1 (?), elected via Auto-RP <- Anycast RP 10.1.1.1 is
      propagated via Auto-RP
         Uptime: 03:21:32, expires: 00:00:24
R3#
Anycast MSDP relationship at the RP


R2# show ip msdp summary
MSDP Peer Status Summary
Peer Address     AS    State    Uptime/  Reset SA    Peer Name
                                Downtime Count Count
*10.2.1.1        ?     Connect  00:13:45 0     0     ?
R2#


Multicast RP Design with MSDP Mesh Group

We previously discussed the concept of Anycast MSDP, based on RFC 3618. That implementation used a default MSDP peer to create an MSDP Anycast relationship between two RPs, producing an active/active solution. When you are faced with three regions and must create an enterprise-wide scope across them, a default peer will not scale, because you can have only two RPs in active/active mode. For larger-scale implementations, Anycast MSDP mesh groups are used.

Anycast MSDP mesh groups operate in the following way: When the RP for a domain receives an SA message from an MSDP peer, the RP checks whether receivers have joined the group that the SA message describes. If a (*,G) entry exists, the RP triggers an (S,G) join toward the source. After the (S,G) join reaches the source DR, a branch of the source tree is built from the source to the RP in the remote domain. If an MSDP peer receives the same SA message from a non-RPF peer (relative to the originating RP), it drops the message. Within a mesh group, an SA message received from one member is not re-forwarded to the other members, because every member is assumed to peer directly with all the others; this reduces SA flooding and simplifies RPF processing among the mesh peers.

Figure 5-2 explains the functionality for Anycast mesh groups.

Figure 5-2 Anycast Mesh Group

Three regions are represented in Figure 5-2. Each region has local sources that have global receivers, as well as receivers that participate in enterprise-wide multicast streams. To create an active/active RP model localized to each region while still participating in the enterprise multicast domain, we leverage the same hybrid design concept, but with mesh groups. This provides active/active RP distribution for each region. The design also keeps state for local sources and receivers within the region, avoiding state maintenance across the WAN. Example 5-4 demonstrates the configuration.

Example 5-4 Anycast Mesh Group Configuration


RP1

ip multicast-routing
ip cef
!
interface Loopback0
 ip address 10.1.1.1 255.255.255.255
 ip pim sparse-mode
!
interface Loopback1
 ip address 10.2.1.1 255.255.255.255
 ip pim sparse-mode
!
interface Ethernet0/0
 ip address 192.168.1.1 255.255.255.0
 ip pim sparse-mode
!
interface Ethernet1/0
 ip address 192.168.2.1 255.255.255.0
 ip pim sparse-mode
!
router eigrp 1
 network 0.0.0.0
 eigrp router-id 10.2.1.1
!
ip pim autorp listener
ip pim send-rp-announce Loopback0 scope 20 group-list 1 interval 10
ip pim send-rp-discovery Loopback0 scope 20 interval 10
ip msdp peer 10.2.1.2 connect-source Loopback1
ip msdp peer 10.2.1.3 connect-source Loopback1
ip msdp cache-sa-state
ip msdp originator-id Loopback1
ip msdp mesh-group ENT 10.2.1.2
ip msdp mesh-group ENT 10.2.1.3
!
access-list 1 permit 239.1.0.0 0.0.255.255


RP2

ip multicast-routing
ip cef
!
interface Loopback0
 ip address 10.1.1.1 255.255.255.255
 ip pim sparse-mode
!
interface Loopback1
 ip address 10.2.1.2 255.255.255.255
 ip pim sparse-mode
!
interface Ethernet0/0
 ip address 192.168.1.2 255.255.255.0
 ip pim sparse-mode
!
interface Ethernet1/0
 ip address 192.168.3.1 255.255.255.0
 ip pim sparse-mode
!
router eigrp 1
 network 0.0.0.0
 eigrp router-id 10.2.1.2
!
ip pim autorp listener
ip pim send-rp-announce Loopback0 scope 20 group-list 1 interval 10
ip pim send-rp-discovery Loopback0 scope 20 interval 10
ip msdp peer 10.2.1.1 connect-source Loopback1
ip msdp peer 10.2.1.3 connect-source Loopback1
ip msdp cache-sa-state
ip msdp originator-id Loopback1
ip msdp mesh-group ENT 10.2.1.1
ip msdp mesh-group ENT 10.2.1.3
!
access-list 1 permit 239.1.0.0 0.0.255.255


RP3

ip multicast-routing
ip cef
!
!
interface Loopback0
 ip address 10.1.1.1 255.255.255.255
 ip pim sparse-mode
!
interface Loopback1
 ip address 10.2.1.3 255.255.255.255
 ip pim sparse-mode
!
interface Ethernet1/0
 ip address 192.168.2.2 255.255.255.0
 ip pim sparse-mode
!
interface Ethernet2/0
 ip address 192.168.3.2 255.255.255.0
 ip pim sparse-mode
!
router eigrp 1
 network 0.0.0.0
!
ip pim autorp listener
ip pim send-rp-announce Loopback0 scope 20 group-list 1 interval 10
ip pim send-rp-discovery Loopback0 scope 20 interval 10
ip msdp peer 10.2.1.1 connect-source Loopback1
ip msdp peer 10.2.1.2 connect-source Loopback1
ip msdp cache-sa-state
ip msdp originator-id Loopback1
ip msdp mesh-group ENT 10.2.1.1
ip msdp mesh-group ENT 10.2.1.2
!
access-list 1 permit 239.1.0.0 0.0.255.255


Example 5-5 demonstrates a functioning solution.

Example 5-5 Anycast Mesh Group: RP Mapping and MSDP Summary


r3#show ip pim rp mapping
PIM Group-to-RP Mappings
This system is an RP (Auto-RP)
This system is an RP-mapping agent (Loopback0)

Group(s) 239.1.0.0/16
  RP 10.1.1.1 (?), v2v1
    Info source: 10.1.1.1 (?), elected via Auto-RP
         Uptime: 00:17:44, expires: 00:00:25

r3#show ip msdp summary
MSDP Peer Status Summary
Peer Address     AS    State    Uptime/  Reset SA    Peer Name
                                Downtime Count Count
10.2.1.1         ?     Up       00:27:17 0     0     ?
10.2.1.2         ?     Up       00:27:26 0     0     ?


Multicast RP Hybrid Design with Scoped Multicast Domains

You learned about the importance of multicast scoping in Chapter 3 and earlier in this chapter. It is very simple to overlay the hybrid RP design on top of a scoped multicast design. First, review the local multicast groups that need to participate in the campus or branch domain. Then consider the requirements for enterprise-wide applications, and align these applications with the multicast IPv4 addressing scheme. When this is complete, use the hybrid RP design to provide the active/active control plane. In Figure 5-3, we review the enterprise-wide design and overlay the RP control-plane design.

Figure 5-3 Enterprise Multicast Scoped Domains

In this example, the multicast application requirements are enterprise-wide webcasting, local desktop imaging, and campus security camera multicast video. It is simple to categorize the multicast into two groups: enterprise-wide and campus. To optimize the data transport and control plane for multicast, the campus sources are scoped into a separate multicast domain, and the multicast addressing scheme for the campus is planned accordingly. The RP selection has to be aligned to the multicast domain, as shown in Figure 5-3. A global RP is selected with an enterprise-wide scope, and a local RP is selected with a scope for the local campus. In addition, the enterprise-wide RP scope covers the campus. The downstream routers at the campus location learn about two RPs: one for the enterprise-wide scope with the multicast address range 239.1.0.0/16 and one for the local campus scope with the range 239.192.0.0/16. Multicast best practices are covered in later sections of this chapter.

Using the same hybrid design methodology for the RP, Example 5-6 shows the configuration for the global RP.

Example 5-6 Enterprise Scoped Domain: Global RP Configuration


G_RP1
ip multicast-routing
ip cef
!
interface Loopback0
 ip address 10.1.1.1 255.255.255.255
 ip pim sparse-mode
!
interface Loopback1
 ip address 10.2.1.1 255.255.255.255
 ip pim sparse-mode
!
router eigrp 1
 network 0.0.0.0
 eigrp router-id 10.2.1.1
!
ip pim autorp listener
ip pim send-rp-announce Loopback0 scope 20 group-list 1 interval 10
ip pim send-rp-discovery Loopback0 scope 20 interval 10
ip msdp peer 10.2.1.2 connect-source Loopback1
ip msdp cache-sa-state
ip msdp default-peer 10.2.1.2
!
access-list 1 permit 239.1.0.0 0.0.255.255


G_RP2
ip multicast-routing
ip cef
!
interface Loopback0
 ip address 10.1.1.1 255.255.255.255
 ip pim sparse-mode
!
interface Loopback1
 ip address 10.2.1.2 255.255.255.255
 ip pim sparse-mode
!
router eigrp 1
 network 0.0.0.0
 eigrp router-id 10.2.1.2
!
ip pim autorp listener
ip pim send-rp-announce Loopback0 scope 20 group-list 1 interval 10
ip pim send-rp-discovery Loopback0 scope 20 interval 10
ip msdp peer 10.2.1.1 connect-source Loopback1
ip msdp cache-sa-state
ip msdp default-peer 10.2.1.1
!
access-list 1 permit 239.1.0.0 0.0.255.255


Example 5-7 shows the configuration for the local RP.

Example 5-7 Enterprise Scoped Domain: Local RP Configuration


L_RP1
ip multicast-routing
ip cef
!
interface Loopback0
! This loopback should be unique to each campus for the multicast local domain
 ip address 10.1.1.10 255.255.255.255
 ip pim sparse-mode
!
interface Loopback1
 ip address 10.2.1.10 255.255.255.255
 ip pim sparse-mode
!
router eigrp 1
 network 0.0.0.0
 eigrp router-id 10.2.1.10
!
ip pim autorp listener
ip pim send-rp-announce Loopback0 scope 20 group-list 1 interval 10
ip pim send-rp-discovery Loopback0 scope 20 interval 10
ip msdp peer 10.2.1.20 connect-source Loopback1
ip msdp cache-sa-state
ip msdp default-peer 10.2.1.20
!
access-list 1 permit 239.192.0.0 0.0.255.255


L_RP2
ip multicast-routing
ip cef
!
interface Loopback0
! This loopback should be unique to each campus for the multicast local domain
 ip address 10.1.1.10 255.255.255.255
 ip pim sparse-mode
!
interface Loopback1
 ip address 10.2.1.20 255.255.255.255
 ip pim sparse-mode
!
router eigrp 1
 network 0.0.0.0
 eigrp router-id 10.2.1.20
!
ip pim autorp listener
ip pim send-rp-announce Loopback0 scope 20 group-list 1 interval 10
ip pim send-rp-discovery Loopback0 scope 20 interval 10
ip msdp peer 10.2.1.10 connect-source Loopback1
ip msdp cache-sa-state
ip msdp default-peer 10.2.1.10
!
access-list 1 permit 239.192.0.0 0.0.255.255


Using this configuration, we have achieved an active/active RP implementation in a scoped multicast environment that addresses the global and local scopes.

The downstream router in the campus will be part of two multicast RP domains, and the RP cache will appear as shown in Example 5-8.

Example 5-8 Enterprise Scoped Domain: Campus RP Mapping


R2#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 239.1.0.0/16
  RP 10.1.1.1 (?), v2v1
    Info source: 10.1.1.1 (?), elected via Auto-RP
         Uptime: 00:07:43, expires: 00:00:07
Group(s) 239.192.0.0/16
  RP 10.1.1.10 (?), v2v1
    Info source: 10.1.1.10 (?), elected via Auto-RP
         Uptime: 00:07:43, expires: 00:00:07


The branch in this example only participates in the enterprise scope, as shown in Example 5-9.

Example 5-9 Enterprise Scope Domain: Branch RP Mapping


R3#show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 239.1.0.0/16
  RP 10.1.1.1 (?), v2v1
    Info source: 10.1.1.1 (?), elected via Auto-RP
         Uptime: 00:07:43, expires: 00:00:07


RP Placement

RP placement is another key aspect in the multicast design. The details that need to be considered are as follows:

• The RP should be aligned to the multicast scope.

• Prefer placing the RP close to the source if possible. This is applicable only to a few sources that are of key importance to the business. Enterprise-wide multicast deployments normally use RPs in the data center for the enterprise scope.

• Localizing the RP for local domains reduces control-plane state across the WAN. This is applicable when an MPLS-based service provider circuit is in use and the number of multicast states carried across the WAN is governed by a contractual agreement.

• If the number of states in the control plane is between 20 and 50, another functional device, such as a core switch or a WAN router, can also serve as the RP. A dedicated RP is normally not mandatory; however, if the number of states exceeds 100, a dedicated RP should be considered, at least for the enterprise global scope.

Multicast Traffic Engineering and Forwarding

An ideal IP multicast network overlay complements the IP unicast underlay. Where PIM is concerned, it is simply easier to make forwarding decisions when the multicast forwarding paths are congruent with the unicast forwarding paths. This type of implementation offers the benefits of low management and operational overhead. Consider Figure 5-4.

Figure 5-4 Simple PIM Domain

Figure 5-4 shows a multicast domain in which the underlying IP paths are all unique and simple. If PIM is enabled on all links, as shown, multicast traffic can simply follow the network paths between source and receivers as dictated by the unicast routing table. It is clean and simple, and it is definitely a desirable design goal.

However, anyone who understands networking knows that this type of uniformity or simplicity is not always reality. Sometimes we have to make very specific changes to the multicast overlay to achieve certain desirable forwarding results. Typically, when we want IP traffic forwarding to conform to a method designed to improve some specific operational aspect of the network, we call this traffic engineering. Examples of traffic engineering include sending traffic over multiple load-balancing paths or specific paths with specific characteristics. Let’s explore multicast traffic engineering a little further by first looking closer at the multicast state maintenance and forwarding mechanics.

More on mRIB, mFIB, and RPF Checks

A router (Layer 3 device) interface is assigned an IP address from a subnet; this represents the final physical location of all hosts on a given segment. Reaching a host on a physical segment requires forwarding packets toward its destination router. IP routing protocols (such as OSPF, EIGRP, RIP, or BGP) learn the physical paths toward all networks dynamically, or the paths are configured manually with static routing. The Layer 3 device combines the learned address and path information to create a forwarding table and decision tree: first a table of all learned network addresses and associated physical paths, with route-ranking information, and then a subsequent table that indicates which Layer 3 interfaces the router has chosen for forwarding toward each destination.

Hierarchically speaking, we refer to these two separate tables as the routing information base (RIB) and the forwarding information base (FIB). The router populates the RIB by pulling routing information from the tables built by the routing protocols: the RIP database, the IS-IS and OSPF link-state databases, the EIGRP topology table, or the BGP table. The router then derives the forwarding tree, or FIB, from the RIB.

There is a common-sense reason for the separation of these two table types. A router may run many protocols, and each protocol may record several paths toward an IP destination. The router first selects the best path(s) from each protocol's tables and then ranks the protocols. The RIB consists of only the best route(s) from the most trusted protocol. This happens at the control plane of the router. To forward packets, the router must make another, recursive decision: It must relate the appropriate interface(s) to the route. This is where the FIB comes into play. The FIB is used to make forwarding or packet manipulation decisions. Application-specific integrated circuits (ASICs) may allow this function to be performed in hardware, which improves the throughput of the device. The FIB is a function of the forwarding or data plane of the router. Figure 5-5 illustrates the process of building forwarding decision tables. Separating the control plane from the data plane makes the topology more robust and resilient, allowing the control plane to make on-the-fly changes or corrections without affecting packet forwarding until the changes are confirmed.

Figure 5-5 RIB and FIB Population

In the traditional Cisco routing environment, the show ip route command reveals the RIB. The output in Example 5-10 shows the RIB for a small three-router network. You can see routes learned from OSPF, RIP, static configuration, and connected networks.

Example 5-10 Basic IOS Unicast RIB: show ip route


ASR1K-2#show ip route
Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
      o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
      + - replicated route, % - next hop override

Gateway of last resort is not set

R     10.0.0.0/8 [120/2] via 192.168.2.1, 00:00:15, GigabitEthernet0/0/0
      172.16.0.0/24 is subnetted, 1 subnets
S        172.16.1.0 is directly connected, GigabitEthernet0/0/0
      192.168.0.0/24 is variably subnetted, 3 subnets, 2 masks
R        192.168.0.0/24
           [120/2] via 192.168.2.1, 00:00:15, GigabitEthernet0/0/0
O        192.168.0.2/32
           [110/3] via 192.168.2.1, 00:15:51, GigabitEthernet0/0/0
C        192.168.0.3/32 is directly connected, Loopback0
O   192.168.1.0/24 [110/2] via 192.168.2.1, 00:16:01, GigabitEthernet0/0/0
      192.168.2.0/24 is variably subnetted, 2 subnets, 2 masks
C       192.168.2.0/24 is directly connected, GigabitEthernet0/0/0
L        192.168.2.2/32 is directly connected, GigabitEthernet0/0/0
O   192.168.3.0/24 [110/3] via 192.168.2.1, 00:15:51, GigabitEthernet0/0/0
      192.168.4.0/24 is variably subnetted, 2 subnets, 2 masks
C       192.168.4.0/24 is directly connected, GigabitEthernet0/0/3
L        192.168.4.1/32 is directly connected, GigabitEthernet0/0/3


Notice that the RIB contains information from multiple sources (RIP, static, and OSPF). The data plane does not need to know or understand the source of the routing information; it only needs the best forwarding path for each known IP destination network or subnet. For those familiar with Cisco Express Forwarding (CEF), the show ip cef command displays the FIB at the data plane of most IOS routers, as demonstrated in Example 5-11. This CEF table was derived from the preceding RIB.

Example 5-11 Basic IOS Unicast FIB: show ip cef


ASR1K-2#show ip cef
Prefix               Next Hop             Interface
10.0.0.0/8           192.168.2.1          GigabitEthernet0/0/0
127.0.0.0/8          drop
172.16.1.0/24        attached             GigabitEthernet0/0/0
192.168.0.0/24       192.168.2.1          GigabitEthernet0/0/0
192.168.0.2/32       192.168.2.1          GigabitEthernet0/0/0
192.168.0.3/32       receive              Loopback0
192.168.1.0/24       192.168.2.1          GigabitEthernet0/0/0
192.168.2.0/24       attached             GigabitEthernet0/0/0
192.168.2.0/32       receive              GigabitEthernet0/0/0
192.168.2.1/32       attached             GigabitEthernet0/0/0
192.168.2.2/32       receive              GigabitEthernet0/0/0
192.168.2.255/32     receive              GigabitEthernet0/0/0
192.168.3.0/24       192.168.2.1          GigabitEthernet0/0/0
192.168.4.0/24       attached             GigabitEthernet0/0/3
192.168.4.0/32       receive              GigabitEthernet0/0/3
192.168.4.1/32       receive              GigabitEthernet0/0/3


IP multicasting does not change basic unicast RIB information in any way. The process of unicast RIB derivation and population is the same for unicast, broadcast, and multicast routes, and routers forward multicast packets from the source toward receivers based on information learned from the RIB. However, an inherent danger exists in multicast forwarding that does not exist in unicast and broadcast packet forwarding.

When a network device receives a unicast or broadcast packet, only a single copy of that packet exists as it transits Layer 3 interfaces. Broadcasts may have many intended recipients, but a Layer 3 interface does not make additional copies to send to host interfaces; that is the job of the Layer 2 switch. Layer 2 switches copy each broadcast frame and flood the copies out every interface in the associated Layer 2 domain. This is known as packet replication.

IP multicast packets come from a single source but are forwarded toward many Layer 3 destinations. It is very common to have both physical and logical redundancy in Layer 3 networks. Routers also do not have any inherent way of telling whether a packet is an original or a copy. Consequently, a Layer 3 router must make an important decision: It must choose which interfaces must forward a copy of the packet without creating a forwarding loop. The router must therefore have a way to determine which network paths multicast sources are sending from, where subscribed receivers are located, and which interfaces are in the path toward those receivers. This is further complicated by the fact that multicast receivers subscribe only to specific groups. Routers must also have a way to learn and share information about the groups that have current subscribers, the specific subscribers, and the sources generating multicast packets.

PIM is the most widely deployed multicast routing protocol; however, the term multicast routing protocol confuses many engineers. PIM does not learn and share routing information. PIM does not change, manipulate, or insert information into the unicast RIB of a router. The primary concern of the multicast routing protocol is to ensure loop-free forwarding over the existing IP network, acting as a control-plane overlay. This is why a router must maintain a separate multicast RIB and multicast FIB (mRIB and mFIB) specific to multicast packets. Routers must also populate multicast RIB and FIB tables using a combination of information from the unicast RIB and the learned source, group, and receiver information, using RPF checks to determine loop-free forwarding. Refer to the diagram in Figure 5-6 for a visual illustration of this process.

Figure 5-6 mRIB and mFIB Population

The show ip mroute command in Cisco IOS reveals the multicast RIB. The output in Example 5-12 shows the multicast RIB of the same router whose unicast RIB and FIB were examined previously.

Example 5-12 Basic IOS MRIB: show ip mroute


ASR1K-2#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.1.1.1), 00:01:12/stopped, RP 192.168.0.1, flags: SJCL
  Incoming interface: GigabitEthernet0/0/0, RPF nbr 192.168.2.1
  Outgoing interface list:
    Loopback0, Forward/Sparse-Dense, 00:01:12/00:02:12

(192.168.1.2, 239.1.1.1), 00:00:07/00:02:52, flags: LJT
  Incoming interface: GigabitEthernet0/0/0, RPF nbr 192.168.2.1
  Outgoing interface list:
    Loopback0, Forward/Sparse-Dense, 00:00:07/00:02:52

(192.168.0.1, 239.1.1.1), 00:00:19/00:02:40, flags: LJT
  Incoming interface: GigabitEthernet0/0/0, RPF nbr 192.168.2.1
  Outgoing interface list:
    Loopback0, Forward/Sparse-Dense, 00:00:19/00:02:40

(*, 224.0.1.40), 00:01:12/00:02:59, RP 192.168.0.1, flags: SJCL
  Incoming interface: GigabitEthernet0/0/0, RPF nbr 192.168.2.1
  Outgoing interface list:
    Loopback0, Forward/Sparse-Dense, 00:01:12/00:02:10
    GigabitEthernet0/0/3, Forward/Sparse-Dense, 00:01:12/00:02:59


If the device is using CEF, the multicast FIB is integrated into the CEF table and appears as follows:

ASR1K-2#show ip cef 239.1.1.1
224.0.0.0/4
  multicast

This particular CEF output may not be very helpful. More advanced and modular operating systems, like Cisco IOS-XR, process the multicast RIB and FIB more independently. The output from the show mrib route and show mfib route commands executed on a router running IOS-XR, as demonstrated in Example 5-13, shows the distinction between the mRIB and mFIB more acutely.

Example 5-13 IOS-XR MRIB and MFIB: show mrib/mfib route


RP/0/RSP0/CPU0:A9K#show mrib route
IP Multicast Routing Information Base
Entry flags: L - Domain-Local Source, E - External Source to the Domain,
    C - Directly-Connected Check, S - Signal, IA - Inherit Accept,
    IF - Inherit From, D - Drop, ME - MDT Encap, EID - Encap ID,
    MD - MDT Decap, MT - MDT Threshold Crossed, MH - MDT interface handle
    CD - Conditional Decap, MPLS - MPLS Decap, MF - MPLS Encap, EX - Extranet
    MoFE - MoFRR Enabled, MoFS - MoFRR State, MoFP - MoFRR Primary
    MoFB - MoFRR Backup, RPFID - RPF ID Set
Interface flags: F - Forward, A - Accept, IC - Internal Copy,
    NS - Negate Signal, DP - Don't Preserve, SP - Signal Present,
    II - Internal Interest, ID - Internal Disinterest, LI - Local Interest,
    LD - Local Disinterest, DI - Decapsulation Interface
    EI - Encapsulation Interface, MI - MDT Interface, LVIF - MPLS Encap,
    EX - Extranet, A2 - Secondary Accept, MT - MDT Threshold Crossed,
    MA - Data MDT Assigned, LMI - mLDP MDT Interface, TMI - P2MP-TE MDT Interface
    IRMI - IR MDT Interface

(*,224.0.0.0/4) RPF nbr: 192.168.0.1 Flags: L C P
  Up: 00:06:38
  Outgoing Interface List
    Decapstunnel0 Flags: NS DI, Up: 00:06:38

(*,239.1.1.1) RPF nbr: 192.168.0.1 Flags: C
  Up: 00:03:19
  Incoming Interface List
    Decapstunnel0 Flags: A, Up: 00:03:19
  Outgoing Interface List
    GigabitEthernet0/1/0/1 Flags: F NS LI, Up: 00:03:19

(192.168.0.1,239.1.1.1) RPF nbr: 192.168.0.1 Flags: L
  Up: 00:01:05
  Incoming Interface List
    Loopback0 Flags: A, Up: 00:01:05
  Outgoing Interface List
    GigabitEthernet0/1/0/1 Flags: F NS, Up: 00:01:05

(192.168.1.2,239.1.1.1) RPF nbr: 192.168.1.2 Flags:
  Up: 00:00:57
  Incoming Interface List
    GigabitEthernet0/1/0/0 Flags: A, Up: 00:00:57
  Outgoing Interface List
    GigabitEthernet0/1/0/1 Flags: F NS, Up: 00:00:57
(192.168.2.2,239.1.1.1) RPF nbr: 192.168.2.2 Flags:
  Up: 00:02:29
  Incoming Interface List
    GigabitEthernet0/1/0/1 Flags: F A, Up: 00:01:58
  Outgoing Interface List
    GigabitEthernet0/1/0/1 Flags: F A, Up: 00:01:58

RP/0/RSP0/CPU0:A9K#show mfib route
IP Multicast Forwarding Information Base
Entry flags: C - Directly-Connected Check, S - Signal, D - Drop,
  IA - Inherit Accept, IF - Inherit From, EID - Encap ID,
  ME - MDT Encap, MD - MDT Decap, MT - MDT Threshold Crossed,
  MH - MDT interface handle, CD - Conditional Decap,
  DT - MDT Decap True, EX - Extranet, RPFID - RPF ID Set,
  MoFE - MoFRR Enabled, MoFS - MoFRR State
Interface flags: F - Forward, A - Accept, IC - Internal Copy,
  NS - Negate Signal, DP - Don't Preserve, SP - Signal Present,
  EG - Egress, EI - Encapsulation Interface, MI - MDT Interface,
  EX - Extranet, A2 - Secondary Accept
Forwarding/Replication Counts: Packets in/Packets out/Bytes out
Failure Counts: RPF / TTL / Empty Olist / Encap RL / Other

(*,224.0.0.0/4),   Flags:  C

  Up: 00:07:02
  Last Used: never
  SW Forwarding Counts: 0/0/0
  SW Replication Counts: 0/0/0
  SW Failure Counts: 0/0/0/0/0
  Decapstunnel0 Flags:  NS, Up:00:07:02

(*,239.1.1.1),   Flags:  C

  Up: 00:03:43
  Last Used: 00:01:29
  SW Forwarding Counts: 1/0/0
  SW Replication Counts: 1/0/0
  SW Failure Counts: 0/0/0/0/0
  Decapstunnel0 Flags:  A, Up:00:03:43
  GigabitEthernet0/1/0/1 Flags:  NS, Up:00:02:23

(192.168.0.1,239.1.1.1),   Flags:

  Up: 00:01:29
  Last Used: 00:01:29
  SW Forwarding Counts: 1/1/100
  SW Replication Counts: 1/0/0
  SW Failure Counts: 0/0/0/0/0
  Loopback0 Flags:  A, Up:00:01:29
  GigabitEthernet0/1/0/1 Flags:  NS, Up:00:01:29

(192.168.1.2,239.1.1.1),   Flags:

  Up: 00:01:21
  Last Used: never
  SW Forwarding Counts: 0/0/0
  SW Replication Counts: 0/0/0
  SW Failure Counts: 0/0/0/0/0
  GigabitEthernet0/1/0/0 Flags:  A, Up:00:01:21
  GigabitEthernet0/1/0/1 Flags:  NS, Up:00:01:21

(192.168.2.2,239.1.1.1),   Flags:

  Up: 00:02:53
  Last Used: never
  SW Forwarding Counts: 0/0/0
  SW Replication Counts: 0/0/0
  SW Failure Counts: 0/0/0/0/0
  GigabitEthernet0/1/0/1 Flags:  A, Up:00:02:23


As you can see from the output, the multicast RIB is a table of the sources and groups for which the router is currently receiving updates from multicast routing protocols, like PIM, and from the host subscription protocol, the Internet Group Management Protocol (IGMP). As per the process, the list of sources and receivers is compared against the unicast routing table, checking for exit interfaces and ensuring loop-free packet delivery using the unicast reverse path.
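The commands below form a quick verification toolbox for these tables on IOS and IOS-XE platforms; availability of show ip mfib varies by platform and release, and IOS-XR uses the show mrib route and show mfib route equivalents shown in Example 5-13:

show ip mroute              (the mRIB: current (*,G) and (S,G) state)
show ip mfib                (the mFIB, where supported)
show ip rpf <source-ip>     (the RPF interface and neighbor toward a source)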

Even though IP multicast is inherent in the IP stack, multicast overlays are analogous to any other overlay network, like MPLS, VPNs, GRE, and so on. Each of these protocols also creates and maintains additional forwarding information and requires an underlying IP network that is complete and converged. More specifically, the RIB and FIB of the underlying network are the foundation of any forwarding decision made by the Layer 3 device.

Finally, as previously mentioned in this and earlier chapters, unicast reverse path forwarding (RPF) checking is the way routers ensure loop-free multicast forwarding. Let us quickly review the RPF check process: When a multicast packet is received, the router looks up the unicast route toward the source listed in the packet header. If the multicast packet entered the router through an interface that is on the preferred path toward that source (derived from the unicast RIB), the packet is deemed trustworthy, and the router can add the (S,G) to the mRIB table. This is a reverse check, and the router records the incoming interface(s) for the source's packets. That does not by itself mean the router forwards the multicast packet or adds the entry to the mRIB; additional information is still required. For the router to add the (S,G) entry to the table in Any-Source Multicast (ASM), an existing (*,G) entry must be in the mRIB. In other words, a downstream receiver must have previously expressed interest in the group; otherwise, the router simply prunes the packets, preventing the multicast stream from flooding the network unnecessarily.

When a router receives an IGMP subscription or a multicast route update (PIM join) for a (*,G), the same RPF check process is followed. Remember, however, that the (*,G) represents a shared tree. The source is not included in the update, so the root of the tree is the RP specified for the group in the router's group-to-RP mapping. Thus, in the case of a (*,G) update from either PIM or IGMP, the router RPF checks the forwarding tree against the unicast path toward the RP.

The router builds a list of the interfaces downstream from it that have interested hosts: the outgoing interface list (OIL). The OIL in the (*,G) entry represents all the interfaces requiring packet replication for the specified group. Once the router has both an (S,G) and a (*,G) entry for a particular group, it is ready to replicate and forward packets from the sources listed in the incoming interface list toward the receivers listed in the OIL. In most cases, routers forward multicast packets obeying split-horizon rules: a packet must arrive on the incoming interface and is replicated and forwarded down only those interfaces in the OIL. As you can see, RPF checking governs entries in both the control plane and the forwarding plane of every multicast router. If any packet or any updated (S,G) or (*,G) fails the RPF check, the packet is not forwarded and the entry is removed from the mRIB and mFIB.

Traffic Engineering Using IP Multipath Feature

We have discussed at length the relationship between the unicast RIB, the mRIB, and RPF checks. But what happens when the unicast RIB has multiple equal-cost path entries for a source, the RP, or receivers for a given group? Consider the network in Figure 5-7.

Figure 5-7 Multiple IP Paths

In this very simple network diagram, four equal-cost EIGRP paths lie between the two multicast routers. The network is purposefully designed to utilize all four paths to maximize efficiency. With unicast routing, this is referred to as equal-cost multi-path (ECMP). The default behavior of PIM states that we can only have one RPF neighbor interface in the multicast state table for (*,G) and (S,G) entries. By default, PIM uses the following rule to declare which interface is the appropriate RPF interface:

The RPF interface of an (S,G) entry is the interface with the lowest-cost path (by administrative distance or, if learned from the same protocol, by routing metric) to the IP address of the source. The RPF interface of a (*,G) entry is the interface with the lowest-cost path to the IP address of the RP. If multiple equal-cost paths exist, the path through the PIM neighbor with the highest IP address is chosen as the tiebreaker. For example, if a router has two equal-cost paths to a source through neighbors 10.1.3.1 and 10.1.4.1, it selects the path through 10.1.4.1.

It is possible to change this default behavior to allow load splitting between two or more paths, and there may be many good reasons to do so. Configuring load splitting with the ip multicast multipath command causes the system to load-split multicast traffic across multiple equal-cost paths based on source address, using the S-hash algorithm. This feature load-splits the traffic; it does not load-balance it. With the S-hash algorithm, the multicast stream from a given source uses only one path. PIM joins are distributed over the different ECMP links based on a hash of the source address, which enables streams to be divided across different network paths.

The S-hash method can also be used to obtain diverse paths for a data flow that is split between two multicast groups, providing redundant transport of real-time packets. The redundant flow for the same data stream requires an intelligent application that can encapsulate the same data in two separate multicast streams; such applications are often seen in financial networks, and the multipath feature is leveraged from the network side to complete the redundancy. Using this feature increases overall resiliency, because a single failure in the network potentially affects only half of the traffic streams, and when the application does send the same stream in two multicast groups, delivery of the data across the network is protected by the divergent paths. Things to consider when using this feature in a design are as follows:

• Multicast traffic from different sources is load-split across the available equal-cost paths.

• Load splitting does not occur across equal-cost paths for multicast traffic from the same source sent to different multicast groups; splitting such traffic requires one of the S-G-hash algorithms shown in Table 5-6.


Note

The multipath hashing algorithms are similar to other load-splitting algorithms in that true 50-50 load splitting is unlikely to occur. Two unique flows (source to receivers) may hash onto the same link, but a single stream from a single source will never be hashed over more than one link. Table 5-6 delineates the different hash options.


Table 5-6 provides the basic syntax for enabling the feature in IOS and IOS-XE systems.

Image

Table 5-6 IOS Multipath Command Comparison

The topology reviewed in Figure 5-7 provides a use case for multicast multipath. Consider Figure 5-8, which adds a multicast PIM domain configuration.

Image

Figure 5-8 Multiple IP Multicast Paths

This configuration is shown in Example 5-14. R1, R2, R3, and R4 represent a multicast domain with multiple redundant links. The goal is to split multicast traffic across the four links between R2 and R3 to distribute the load. This is accomplished by configuring the multicast multipath command on both R2 and R3. The source generates traffic for two multicast groups, 239.1.1.1 and 239.2.2.2. The multicast domain configuration in this example is simple PIM ASM with a static RP.

Example 5-14 IOS Multipath Configuration


R3 Configuration
<..>
ip multicast-routing
ip multicast multipath s-g-hash next-hop-based
ip cef
!
!
interface Loopback0
 ip address 10.3.3.3 255.255.255.255
 ip pim sparse-mode
!
interface Ethernet0/0
 ip address 10.1.6.1 255.255.255.0
 ip pim sparse-mode
!
interface Ethernet1/0
 ip address 10.1.2.2 255.255.255.0
 ip pim sparse-mode
!
interface Ethernet2/0
 ip address 10.1.3.2 255.255.255.0
 ip pim sparse-mode
!
interface Ethernet3/0
 ip address 10.1.4.2 255.255.255.0
 ip pim sparse-mode
!
router eigrp 1
 network 10.0.0.0
!
ip pim rp-address 10.3.3.3


Examine what happens to the RPF checks in the state entries for these multicast groups. The source sends traffic to both 239.1.1.1 and 239.2.2.2, flowing toward the respective receivers in the multicast topology.

Example 5-15 shows the path taken by the two streams at R3.

Example 5-15 IOS Multipath RPF


R3#show ip rpf 10.1.50.2 239.1.1.1
RPF information for ? (10.1.50.2)
  RPF interface: Ethernet2/0
  RPF neighbor: ? (10.1.3.1)
  RPF route/mask: 10.1.50.0/24
  RPF type: unicast (eigrp 1)
  Doing distance-preferred lookups across tables
  Multicast Multipath enabled. algorithm: next-hop-based
  Group: 239.1.1.1
  RPF topology: ipv4 multicast base, originated from ipv4 unicast base
R3#show ip rpf 10.1.50.2 239.2.2.2
RPF information for ? (10.1.50.2)
  RPF interface: Ethernet3/0
  RPF neighbor: ? (10.1.4.1)
  RPF route/mask: 10.1.50.0/24
  RPF type: unicast (eigrp 1)
  Doing distance-preferred lookups across tables
  Multicast Multipath enabled. algorithm: next-hop-based
  Group: 239.2.2.2
  RPF topology: ipv4 multicast base, originated from ipv4 unicast base


Multicast Traffic Engineering: Deterministic Path Selection

In the previous multipath scenario, the network architects chose equal-cost paths for network forwarding, likely to maximize forwarding efficiency. If multicast application traffic is dense enough to consume significant bandwidth, enabling multicast multipath is a wise course of action. However, in some networks, such as financial networks, a need may arise to separate multicast data transmission from unicast transmissions across WAN links.

This design choice can also better optimize bandwidth for multicast and unicast applications. Consider a network that is transporting an IP-TV multicast stream. The stream may be small enough to need only one pipe but large enough to cause resource constraints on unicast traffic, thereby justifying its own WAN link.

Figure 5-9 illustrates just such a scenario. The administrators of the network have decided that a corporate IP-TV application consumes a great deal of bandwidth, consuming enough resources to put critical unicast traffic (Path 2) at risk. The network architect has asked that a redundant, non-unicast link (Path 1) be maintained for this purpose.

Image

Figure 5-9 Deterministic Multicast Paths

Remember the rule of one RPF path that we just discussed? By default, in this topology, Path 1 and Path 2 are equal-cost links in the EIGRP topology table for the 10.1.50.x and 10.1.51.x subnets (the multicast source and receiver subnets). Based on the RPF tiebreaker rules, the path through the highest neighbor IP address is selected as the RPF interface, which in this case is Path 2, as shown in Example 5-16.

Example 5-16 Unicast RIB with Multiple Paths


R3# show ip route
Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
       a - application route
       + - replicated route, % - next hop override

Gateway of last resort is not set

      10.0.0.0/8 is variably subnetted, 12 subnets, 2 masks
D        10.1.1.0/24 [90/307200] via 10.1.3.1, 00:00:11, Ethernet2/0
                     [90/307200] via 10.1.2.1, 00:00:11, Ethernet1/0
C        10.1.2.0/24 is directly connected, Ethernet1/0
L        10.1.2.2/32 is directly connected, Ethernet1/0
C        10.1.3.0/24 is directly connected, Ethernet2/0
L        10.1.3.2/32 is directly connected, Ethernet2/0
C        10.1.4.0/24 is directly connected, Ethernet3/0
L        10.1.4.2/32 is directly connected, Ethernet3/0
C        10.1.6.0/24 is directly connected, Ethernet0/0
L        10.1.6.1/32 is directly connected, Ethernet0/0
D        10.1.50.0/24 [90/332800] via 10.1.3.1, 00:00:11, Ethernet2/0
                      [90/332800] via 10.1.2.1, 00:00:11, Ethernet1/0
D        10.1.51.0/24 [90/307200] via 10.1.6.2, 00:00:11, Ethernet0/0
C        10.3.3.3/32 is directly connected, Loopback0
R3# show ip rpf 10.1.50.2
RPF information for ? (10.1.50.2)
  RPF interface: Ethernet2/0
  RPF neighbor: ? (10.1.3.1)
  RPF route/mask: 10.1.50.0/24
  RPF type: unicast (eigrp 1)
  Doing distance-preferred lookups across tables
  RPF topology: ipv4 multicast base, originated from ipv4 unicast base


Because of the RPF rules, interface e2/0 will always be selected for the multicast flow. Further complicating the design, e2/0 may also be taken by unicast traffic, meaning ECMP load-sharing will occur across both Paths 1 and 2. To prevent critical unicast traffic from taking Path 1 (the multicast-only path, the 10.1.2.x link), EIGRP is configured to prefer Path 2 (the 10.1.3.x link). To prevent multicast forwarding over the unicast-only path, PIM is not configured on the Path 2 interfaces. With the network configured in this manner, the PIM state for the IP-TV application traffic fails its RPF checks and remains incomplete.

How can we resolve this issue? We need a way to manually adjust the state table (in this case, the mroute table) so that the PIM-enabled interface for Path 1 is considered a potential RPF interface. This is the exact purpose of static table entries (in this case, "mroutes"), and they are easy to understand and configure. Figure 5-10 illustrates this configuration.

Image

Figure 5-10 Static Multicast State Entries

To add static state entries, use the commands outlined in Table 5-7.

Image

Table 5-7 Static mroute CLI Commands


Note

Notice the language used in the command syntax for IOS/IOS-XE. Do not let the word mroute confuse you. Remember, the mroute table is not a table of routes but rather a table of multicast state entries. Adding a static mroute does not, in and of itself, add a static state entry. What it adds is a static RPF "OK" for the PIM process to use when checking RPF during state creation. You cannot add state to a non-PIM interface, nor can you add state when no source or subscribed receivers are present and the potential for forwarding does not exist. Much like a unicast static route entry, the underlying physical and logical infrastructure must match the configured entry for the state to appear in the table. Configuring a static unicast route where no underlying interface or address exists results in a failed route; the same is true for multicast state entries.


Example 5-17 shows using a static mroute entry to adjust the behavior of PIM state creation to include Path 1 in the RPF calculation.

Example 5-17 Static mroute Entry Output and Change in Forwarding Path


R3# sh running-config | include ip mroute
ip mroute 10.1.50.0 255.255.255.0 10.1.2.1
R3# sh ip rpf 10.1.50.0
RPF information for ? (10.1.50.0)
  RPF interface: Ethernet1/0
  RPF neighbor: ? (10.1.2.1)
  RPF route/mask: 10.1.50.0/24
  RPF type: multicast (static)
  Doing distance-preferred lookups across tables
  RPF topology: ipv4 multicast base


Static state entries can be a useful way to perform multicast traffic engineering in a simple network, separating multicast and unicast flows. This is especially valuable in scenarios like the preceding one, or when asymmetric routing is desired in the network design.

When deploying multicast, it is wise to consider the underlying IP unicast network. The preferred design for an IP multicast network is one in which the multicast and unicast forwarding paths fully align; this profoundly simplifies troubleshooting and network management. Network architects should deviate from this practice only when the requirements provide no alternative. Static state entries are, of course, only one way to control deterministic forwarding in these scenarios.

In large networks with redundant links, a dynamic way to separate multicast traffic from unicast is more desirable. This can be achieved using the BGP multicast address family. With BGP address families, the multicast network is advertised, and the next-hop prefix is resolved via a recursive lookup in the IGP to find the upstream RPF interface. In our example, the 10.1.50.x (source) and 10.1.51.x (receiver) subnets are advertised in the multicast BGP address family. Figure 5-11 depicts using eBGP routing to achieve results similar to those of static state entries for traffic engineering.

Image

Figure 5-11 Deterministic Multicast Pathing Using eBGP

The network administrator would use the eBGP IOS configurations in Example 5-18 on R2 and R3 to achieve path determinism.

Example 5-18 Deterministic Multicast BGP Configuration


R2
router bgp 65002
 bgp log-neighbor-changes
 neighbor 10.1.2.2 remote-as 65003
 !
 address-family ipv4
  neighbor 10.1.2.2 activate
 exit-address-family
 !
 address-family ipv4 multicast
  network 10.1.50.0 mask 255.255.255.0
  neighbor 10.1.2.2 activate
 exit-address-family


R3
router bgp 65003
 bgp log-neighbor-changes
 neighbor 10.1.2.1 remote-as 65002
 !
 address-family ipv4
  neighbor 10.1.2.1 activate
 exit-address-family
 !
 address-family ipv4 multicast
  network 10.1.51.0 mask 255.255.255.0
  neighbor 10.1.2.1 activate
 exit-address-family


Examine the changes in multicast path selection behavior using this configuration. The show ip route command on the same topology shows the IGP RIB with the respective multicast routes. The command captured at R3 displays the output in Example 5-19.

Example 5-19 IGP RIB


      10.0.0.0/8 is variably subnetted, 12 subnets, 2 masks
D        10.1.1.0/24 [90/307200] via 10.1.3.1, 03:59:35, Ethernet2/0
                     [90/307200] via 10.1.2.1, 03:59:35, Ethernet1/0
C        10.1.2.0/24 is directly connected, Ethernet1/0
L        10.1.2.2/32 is directly connected, Ethernet1/0
C        10.1.3.0/24 is directly connected, Ethernet2/0
L        10.1.3.2/32 is directly connected, Ethernet2/0
C        10.1.4.0/24 is directly connected, Ethernet3/0
L        10.1.4.2/32 is directly connected, Ethernet3/0
C        10.1.6.0/24 is directly connected, Ethernet0/0
L        10.1.6.1/32 is directly connected, Ethernet0/0
D        10.1.50.0/24 [90/332800] via 10.1.3.1, 03:59:35, Ethernet2/0
                      [90/332800] via 10.1.2.1, 03:59:35, Ethernet1/0
D        10.1.51.0/24 [90/307200] via 10.1.6.2, 03:59:35, Ethernet0/0
C        10.3.3.3/32 is directly connected, Loopback0
R3#


However, the BGP multicast address family takes precedence in the RPF selection for the MRIB, as demonstrated in Example 5-20.

Example 5-20 BGP RIB


R3# show ip bgp ipv4 multicast
BGP table version is 3, local router ID is 10.3.3.3
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
              r RIB-failure, S Stale, m multipath, b backup-path, f RT-Filter,
              x best-external, a additional-path, c RIB-compressed,
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found

     Network          Next Hop            Metric LocPrf Weight Path
 *>  10.1.50.0/24     10.1.2.1            307200             0 65002 i
 *>  10.1.51.0/24     10.1.6.2            307200         32768 i
R3#


The RPF lookup for 10.1.50.0 from R3 shows Path 1 as preferred, based on the BGP multicast topology table, as demonstrated in the output in Example 5-21.

Example 5-21 Deterministic Multicast BGP RPF Result


R3# show ip rpf 10.1.50.0
RPF information for ? (10.1.50.0)
  RPF interface: Ethernet1/0
  RPF neighbor: ? (10.1.2.1)
  RPF route/mask: 10.1.50.0/24
  RPF type: multicast (bgp 65003)
  Doing distance-preferred lookups across tables
  RPF topology: ipv4 multicast base


This method of creating RPF entries for the multicast state machine is more dynamic than using static mroute entries, as shown in the previous examples.

In some enterprise networks, customers must transport multicast across provider or other non-enterprise controlled segments. To route multicast across these links, the service provider network needs to support the enterprise implementation of PIM and become a natural part of the PIM domain. Some providers may not support direct multicast interaction like this on certain types of links (for example, MPLS WAN services), or they may not support the PIM mode deployed by the enterprise.

In the next example, we use the same deterministic routing scenario as before but add a non-enterprise controlled network segment that does not support multicast. As shown in Figure 5-12, Segment A (encompassing both Path 1 and Path 2) is the non-enterprise controlled segment that needs multicast support. In this example, the provider does not support multicast transport, leaving Segment A configured with PIM disabled. This causes RPF failures, spawning incomplete state entries in the mroute tables of all routers. Figure 5-12 also shows that an easy solution exists for this type of problem: a GRE tunnel that carries the multicast packets.

Image

Figure 5-12 Tunneling Multicast over PIM Disabled Paths

Under such a scenario, the GRE tunnel must establish full IP connectivity between routers R2 and R3. The GRE tunnel interfaces should be configured for PIM, and a PIM neighborship should exist across the tunnel. Running unicast routing over the newly formed tunnel would not be prudent, but the MRIB transport and the multicast RPF check still need to be moved to the overlay segment; without unicast routing pointing at the tunnel, the tunnel interface will fail RPF checking.

In this situation, you can choose between static state entries and dynamic BGP multicast address families to enable multicast transport across Segment A. The principles of MRIB build-up are the same and follow the same rules; either way, the GRE tunnel interface must become the RPF interface.
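
For reference, a minimal IOS sketch of such an overlay on R3 follows; the tunnel addressing and the remote loopback address are hypothetical, and the static mroute is one way to make the tunnel peer the RPF neighbor for the source subnet:

interface Tunnel0
 ip address 10.99.99.2 255.255.255.252
 ip pim sparse-mode
 tunnel source Loopback0
 tunnel destination 10.2.2.2
!
ip mroute 10.1.50.0 255.255.255.0 10.99.99.1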

IP Multicast Best Practices and Security

Every implementation of multicast is unique, in part because every IP unicast underlay is unique. Multicast networks are often unique in and of themselves, which may place special constraints on network design. In spite of this uniqueness, certain elements should exist in every multicast network design.

Following current best practices for network architectures is paramount. Some of these items include hierarchical design, redundancy, resiliency, high-availability, limiting Layer 2 scope, security, and so on. Building a strong foundation from which to add services only enhances your ability to manage, operate, and troubleshoot your network infrastructure. When adding multicast to the network, the following sections are elements to strongly consider.

Before Enabling PIM

Many network engineers make the mistake of simply turning on multicast in complex network topologies and expecting it to function instantly in an ideal manner. Remember, if it were that easy, we would be out of work.

There are several items that must be considered before configuring a network for IP multicast:

Image CEF/dCEF/MLS CEF considerations for those platforms that require it. Without CEF on these platforms, multicast packets are process-switched, which can overwhelm the central processing unit (CPU) of the networking device (very bad).

Image Unicast routing must be enabled and operational on the network. Remember that PIM is an overlay on a successful L3 unicast design. Carefully consider any redundant paths in the network, looking for possible asymmetric routing that could cause RPF check failures.

Image Consider the multicast applications being placed on the network. Network architects should select the most ideal multicast features and configurations for these applications.

Image Remember that groups in the 224.0.0.* range are reserved for routing control packets. A proper schema should be a design requirement. When creating your schema, do not forget to account for (and, if necessary, eliminate) MAC overlapping group ranges, as described in Chapter 2!

Image The administrator should be familiar with IPv4 and IPv6 multicast routing configuration tasks and concepts.

Image Administrators should be aware of the various multicast configurations and features of a given platform. Not all platforms support all features or modes. Make sure you do not select a PIM mode (for example, dense-mode or PIM-BiDir) that is not supported universally across the intended PIM domain. This chapter establishes the following protocol selection guidelines:

Image Dense-mode is not recommended except in legacy environments where it may already exist. It is likely that DM is not supported by your current platform.

Image In general, if the application is one-to-many or many-to-many in nature, then PIM-SM can be used successfully.

Image For optimal one-to-many application performance, SSM is appropriate, but it requires IGMP version 3 client support.

Image For optimal many-to-many application performance, bidirectional PIM is appropriate, but hardware support is limited to certain Cisco devices.

Table 5-8 provides an example of multicast applications and the relationships between sources and receivers.

Image

Table 5-8 Application Examples

Image You should have a proper PIM design for each desired protocol version, with an understanding of which protocol you will run and why, before moving to implementation.

In addition, each platform and each operating system has specific tasks and configuration parameters required to enable IP multicast functionality. For more detailed information, please refer to the individual configuration guides for each operating system found at Cisco.com. This book uses examples from the latest versions of each operating system at the time of writing. Remember to review current configuration guides for changes.

General Best Practices

Multicast can be your best friend or your worst enemy. As the manual for the dangerous equipment hidden away in your garage suggests, “be sure to read, understand, and follow all the safety procedures.”

Tuning the Network for Multicast

Most of the control-plane stress in a multicast network will be at the access edge, as well as at any RP routers. This occurs because receivers and sources are located on the edge of the network. A routed network may have only a few branches, but the edge devices must efficiently replicate packets for many potential interfaces, in addition to managing the IGMP subscription process. It is best to maximize efficiency at the edge for this reason, especially if the expected multicast usage is high with multiple types of many-to-many or one-to-many applications.

Architects should start by ensuring that the Layer 2 forwarding domain is fast, efficient, and loop-free. Multicast can substantially increase Layer 2 flooding. Ultimately, you should strive for a design that limits flooding domain size by controlling VLAN sprawl and using Spanning Tree Protocol (STP) wisely. Limiting VLAN sprawl and the excessive use of VLANs on access switches eliminates massive packet flooding across a number of switches. Unnecessary VLANs should be effectively pruned from switch trunks; manual configuration of trunk interfaces and the use of VLAN Trunking Protocol (VTP) transparent mode should be considered to enforce this policy. In addition, storm control can help alleviate potential configuration issues with multicast sources, protecting switches from inappropriate multicast and broadcast flooding.

IGMP snooping is also an excellent way to limit flooding behavior at the Layer 2 edge. Remember, without IGMP snooping, flooding of multicast packets will occur VLAN- or switch-wide, depending on configuration. If a VLAN spans many switches, the results of excessive flooding can take a toll on switch-forwarding resources. Remember that IGMP snooping limits replication and flooding to only those ports with subscribed hosts.


Note

If you have a network in which multicast is only local to a given Layer 2 domain (there is no multicast-enabled L3 gateway and no PIM), IGMP snooping is still your friend. However, remember that IGMP requires an IGMP querier to be elected for each VLAN with IGMP subscriptions. If there is no Layer 3 device and no PIM configuration, switches by default assume there is no gateway and will not elect a querier as part of the natural process. To resolve this issue, a network administrator should either configure one device in the switch domain with PIM or manually configure one switch as the IGMP querier.
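
On many Catalyst IOS switches, for example, the snooping querier function can be enabled globally with commands along these lines (the source address shown is hypothetical and should be an address valid for the VLAN):

ip igmp snooping querier
ip igmp snooping querier address 10.1.10.254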


Network architects should consider designs that make use of new technologies like the Cisco Virtual Switching System (VSS) to eliminate STP-blocked ports in the forwarding path. This helps optimize failure convergence times as well as packet flow through the edge, maximizing available bandwidth and predictability. If such a design is not possible, STP should be tuned to improve STP convergence. Rapid STP should be a minimum requirement for the multicast network edge.

Because multicast is an overlay on the unicast topology, multicast traffic will not work if there is an IP unicast network outage. If multicast communications are mission critical, or at the very least are important to the business, the same care and effort put into the unicast network design should be put into the multicast overlay design. It is also wise to tune and adjust the IP unicast network to maximize IP multicast traffic throughput and to minimize network disruption.

IP unicast interior gateway protocols (IGPs) used for routing (RIP, EIGRP, OSPF, and so on) should be both secured and optimized in the network. Exterior gateway protocols (BGP) should also be secured and optimized, especially if they are part of the multicast domain control plane as described earlier. When possible, routing protocols should be locked down, with adjacencies verified using ACLs and MD5 authentication. This prevents intruders from injecting attack-based routing information into the network and possibly disrupting the flow of multicast traffic.

Protocol timers should be tuned in favor of fast convergence. For example, EIGRP can use lowered hello and hold timers to increase failure detection and recovery speeds. OSPF timer adjustments may also be warranted, including optimizing the SPF, hello, dead-interval, and LSA timers. BGP perhaps allows the most flexibility for adjusted and optimized timers. Be sure to understand fully the implications of any protocol tuning before you proceed with configuration, and ensure that any timer changes are implemented universally; mismatched timers can cause major protocol adjacency flaps for some protocols. The bottom line: the faster the unicast network converges on changes and disruptions, the fewer interruptions there will be to IP multicast traffic.
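
As a rough illustration only (the interface, process number, and timer values are hypothetical, must be validated against your design, and must match on all neighbors), IGP tuning on an IOS router might look like this:

interface GigabitEthernet0/1
 ip hello-interval eigrp 1 1
 ip hold-time eigrp 1 3
!
router ospf 1
 timers throttle spf 50 200 5000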

Remember also that multicast convergence does not align with the convergence results you get with unicast. If you tune unicast convergence to one second, multicast convergence will be roughly five to six times that. Tuning the multicast control plane with sub-second PIM timers and the RPF backoff feature can reduce this gap to roughly two to three times unicast convergence. If you choose to make multicast timing adjustments a requirement, assess those tweaks with control-plane stability as the main goal, keeping in mind the total number of state entries. When these timers are changed on any router in the network, all PIM routers in the network should be configured with matching timers.
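
A minimal sketch of such multicast tuning on IOS follows, where supported by the platform; the values shown are hypothetical, should be tested before production use, and must match on all PIM routers:

interface GigabitEthernet0/1
 ip pim query-interval 500 msec
!
ip multicast rpf backoff 100 500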

Finally, network architects should ensure that the IP unicast network is robust, reliable, and simple. Architects should look for any situations that may affect multicast forwarding, such as tunnel interfaces or asymmetric routing that may negatively affect RPF checks. Be aware of other network protocols such as MPLS or VPNs (for example, GetVPN or DMVPN); special considerations may be required for certain technologies or topologies when it comes to multicast. Always check the latest design guide for those technologies implemented on the network.

Manually Selecting Designated Routers

On any segment with multiple PIM speakers, the PIM software selects a PIM designated router (DR). Remember, the role of the DR is to forward multicast data for any groups attached to the segment. It serves as the segment multicast forwarder, as well as the control point for communication with any RPs for each group. The DR is essentially the PIM manager for that segment.

For an ASM network, this means the DR sends PIM join/prune messages to any RPs for any group subscriptions on the segment. The ASM DR will look up the corresponding RP mapping for each group and begin the control process. This also includes sending unicast-encapsulated multicast messages to the RP from any source on the segment, registering the source and completing the shared tree.

When the DR receives an IGMP membership report directly from a connected receiver, it is easy to build a complete shortest-path tree because the DR is obviously in the forwarding path. However, there is no rule that the PIM DR must be in the shortest-path tree. Examine the PIM network in Figure 5-13.

Image

Figure 5-13 Out-of-Path Designated Router

In this network, routers R3 and R4 provide redundant paths for the unicast network. The source for group 239.10.10.10, however, is reachable only via the primary unicast path running between R4 and R2. If the PIM process has elected R3 as the designated router for the LAN segment connecting R3 and R4, the DR for the segment is not in the forwarding path. Although this design would still work, it would be inefficient. Why not make R4, the in-path next-hop router, the PIM DR? It would certainly improve efficiency, especially if there are a large number of hosts on the LAN segment and many groups to manage.

You should also consider the impact of making all redundant paths PIM-enabled in this example network. Look at the adjustments made to the network in Figure 5-14. In this case, routers R3 and R4 are redundant gateways using a mechanism like Hot Standby Router Protocol (HSRP) to load-balance unicast traffic, and all upstream paths are PIM-enabled. As with HSRP, the administrator would also like to load-balance the PIM management between the two routers. If there are many multicast-enabled VLAN segments terminating on these routers, you can achieve similar results by alternating the DR between R3 and R4 for each VLAN (even and odd), as shown. The router acting as DR should align with the active HSRP peer for that VLAN. This aligns unicast and multicast flows through the same gateway router while also providing failover for multicast flows. This is configured using the DR priority interface command option, as explained in Chapter 3.

Image

Figure 5-14 Load Balancing with Designated Routers


Note

In many modern Cisco networks, the concept of discrete paths and gateways is fading in part because of technologies like Cisco’s Virtual Switching System (VSS) and virtual PortChannel (vPC) that allow a pair of L2/L3 switches to appear as a single switch and gateway. This eliminates the need to specify the DR function in designs like the one above. Consider that in a VSS design, for example, the two switches/routers in Figure 5-14, R3 and R4, could actually be a single pair of Layer 3 switches acting as a single gateway. This is the preferred way to design LAN access for multicast if the technology is available.


For PIM-SSM networks, inefficient DR placement in the path can be more problematic. The SSM DR generates (S,G) PIM join messages that propagate through the path back toward the source. The path from the receiver to the source is determined hop by hop, and the source must be known and reachable by the receiver or the DR. If the DR is not in the direct path toward either the source or the receiver, unintended consequences can occur.

In either case, for large networks, manually forcing the outcome of DR elections to optimize network behavior is sometimes best. This is especially true at the network edge where sources and receivers are connected. Properly selecting the DR can improve both control plane and data-plane efficiency.

As discussed previously, PIM routers use information contained in the PIM hello message headers to determine the DR. Any PIM-speaking router on the segment can become the DR, assuming it meets the selection criteria. The rules of PIM-DR selection force the router with the highest priority to become the DR. If the DR priority is the same, the router with the highest IP address on the segment is elected DR. PIM-DR priority values range from 0 to 4,294,967,294, with the default being 1. If no priority is configured, the IP address of the PIM router is used, with the highest address becoming the DR. Remember, the PIM address is derived from the interface that sent the hello message.

To change the PIM-DR priority on IOS or NX-OS interfaces, use the ip pim dr-priority <0-4294967294> command. The same function in IOS-XR is done under the router pim submode using the command dr-priority <0-4294967294>; this can be applied as an interface default or per interface under the submode.
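
For example, to prefer a particular router as the DR on a shared segment, a higher priority might be applied to its segment-facing interface; the interface name and value in this sketch are hypothetical:

interface GigabitEthernet0/1
 ip pim sparse-mode
 ip pim dr-priority 200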

You can display the configured DR priority and the elected DR address for each interface by issuing the show ip pim interface command in IOS/XE or show pim interface in IOS-XR. For NX-OS, use the show ip pim neighbor command instead. The output in Example 5-22 is from an IOS router; notice the DR Prior field and the DR address field in the output:

Example 5-22 show ip pim interface Command Output


CR1#show ip pim interface

Address          Interface     Ver/   Nbr    Query  DR     DR
                               Mode   Count  Intvl  Prior
192.168.63.3     Ethernet0/0   v2/D   1      30     1      192.168.63.6
192.168.43.3     Ethernet0/1   v2/S   1      30     1      192.168.43.4
192.168.31.3     Ethernet0/2   v2/S   1      30     1      192.168.31.3
192.168.8.1      Ethernet0/3   v2/S   0      30     1      192.168.8.1


Basic Multicast Security

Security is an important part of any network or application design. In years past, many engineers considered security a set of point features bolted on as an afterthought to the design of the network, much like an overlay. The technology industry in general has learned that this is both an ineffective and a dangerous approach to securing networks and applications. Today's networks must be designed from the ground up with intrinsic security as an essential objective.

Attacks on multicast networks can come in many forms. Because multicast is an overlay on an existing, functional unicast network, the same attack vectors and weaknesses that affect a unicast network impact multicast forwarding. If the unicast network is not secure, the multicast network is equally vulnerable. In addition, IP multicast inherently increases the surface area of possible attack vectors.

Abstractly speaking, the key factors to security design are integrity, confidentiality, and availability. When it comes to IP multicast end points, any security mechanism that enables these factors to protect unicast applications should also be applied to multicast applications. For example, an enterprise security policy may require that encryption with authentication be enabled on any mission critical, secure applications. Because multicast packets are essentially just IP packets, this type of security policy should be applied equally to multicast applications. Another example is protecting multicast routers, switches, and other infrastructure from unauthorized access.

Generally, the objective of most network-focused security policies is to protect network availability for applications and users. Attack vectors and weaknesses used to induce availability incidents focus either on network resources (for example, bandwidth or queue space) or on the network devices themselves. Examples of these types of attacks and vulnerabilities include denial-of-service (DoS) attacks, unauthorized access to network devices, protocol spoofing, and packet interception. This section does not address the myriad ways in which a network or multicast domain can be compromised; instead, it deals mainly with protecting the availability of the multicast network.

Protecting Multicast Control-plane and Data-plane Resources

A network infrastructure or device that is overwhelmed with a specific task cannot adequately address the requests of other processes. As we require the network to perform additional functions or run more processes, we spread its resources (CPU, memory, TCAM, and so on) across those functions, so care must be taken to limit how the resources are utilized. Control-plane policing (CoPP) is beyond the scope of this book; however, it is prudent to understand how this technology can be utilized to protect network devices and, ultimately, the integrity of the entire network infrastructure.

One common way to protect multicast control-plane resources is to limit the number of state entries that a multicast router allows in the MRIB. This protects the router from misconfigured network consumption as well as from potential denial-of-service attacks. It also protects the underlying IP unicast control-plane resources by preventing the CPU and memory from being overwhelmed by multicast route churn. This is known as a state maximum or a route limit. See Table 5-9 for command details.

Image

Table 5-9 Limiting Multicast State Entries

These commands ultimately limit the total number of (*,G) and (S,G) entries that can be installed in a router. Valid limits are dependent on platform, but for a typical IOS router they are between 1 and 2,147,483,646. The default value is to allow the maximum. When the router reaches the configured route limit and a new PIM join is received with a new potential group, the router issues a warning message and fails to install state for that entry. No existing entries are removed.
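
In IOS, for instance, a cap with a warning threshold might look like the following; both values are hypothetical and should reflect a measured baseline for your network:

ip multicast route-limit 1000 900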

You can also apply a similar limit to gateway routers running IGMP by limiting the number of IGMP-managed group entries in the state table. The ip igmp limit (IOS) or maximum groups (IOS-XR) command prevents the installation of additional group entries into the IGMP cache after the limit is reached. This also prevents the router from issuing PIM messages for any groups exceeding the limit. Table 5-10 shows the command usage.

Image

Table 5-10 Limiting IGMP Subscription Entries

These commands can be applied either globally or at the interface level; IOS-XR uses a different command for each locality. In addition, the IOS version of this command allows administrators to create a list of exceptions using an ACL: an explicitly excepted group is not limited, regardless of cache size. Nexus platforms can also make exceptions, but they do so with policy maps rather than ACLs.
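
A short IOS sketch follows; the limit, the ACL name, and the excepted group are hypothetical:

ip access-list standard ALWAYS-ALLOW
 permit 239.1.1.1
!
interface GigabitEthernet0/1
 ip igmp limit 100 except ALWAYS-ALLOW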

Another way IGMP can prevent unwanted state creation and PIM messaging is through an explicit list of allowed and denied groups. This is a common security requirement in many networks. Table 5-11 provides the commands necessary to limit group management by ACL; a short sketch follows the table.

Image

Table 5-11 Use ACLs to Permit Multicast Subscriptions
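
As a simple IOS illustration (the ACL name, group range, and interface are hypothetical), receivers on a segment can be restricted to a specific group range:

ip access-list standard IPTV-GROUPS
 permit 239.1.1.0 0.0.0.255
!
interface Vlan100
 ip igmp access-group IPTV-GROUPS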

Finally, protecting data-plane resources is also an important part of basic multicast security. One way to achieve this is by simply rate-limiting the number of multicast packets allowed on a given interface. In IOS, this is done using the ip multicast rate-limit command, as follows.

ip multicast rate-limit {in | out} [video | whiteboard] [group-list access-list] [source-list access-list] kbps

To achieve rate-limiting for multicast in XR and NX-OS, a simple rate limit can be applied under the normal modular quality-of-service (QoS) configuration CLI. IOS devices can also apply multicast traffic limits in the same manner. For more information on how to use modular QoS to rate-limit multicast traffic, see the individual QoS configuration guides for each platform. Multicast rate-limiting should be handled by the overall QoS design and its parameters; it is not a must unless a specific application usage requirement exists.
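
For illustration only, a modular QoS policer along these lines (the ACL, class and policy names, and the policing rate are all hypothetical) could cap inbound multicast on an IOS interface:

ip access-list extended MCAST-TRAFFIC
 permit ip any 224.0.0.0 15.255.255.255
!
class-map match-all MCAST
 match access-group name MCAST-TRAFFIC
!
policy-map LIMIT-MCAST
 class MCAST
  police 5000000 conform-action transmit exceed-action drop
!
interface GigabitEthernet0/1
 service-policy input LIMIT-MCAST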

Securing Multicast Domains with Boundaries and Borders

Just as with unicast networks, multicast boundaries should exist where the security policy and the needs of the application dictate. Remember the earlier discussion around scoping. Scoping addresses and bounding a domain within the construct of the IGP are important for securing the domain. You do not want your multicast packets leaking outside the logical domain scope. A well-planned addressing schema, wise RP placement, and a clear understanding of natural network boundaries make scoping a domain significantly easier.

In many cases, a boundary can occur between two domains overlaid on the same IGP. These would not necessarily be natural boundaries and should be enforced with policy. In other cases, like in large network-wide domains, it means that a natural boundary will occur at the IGP edge of the network. For example, the scope of an internal multicast application that is corporate-wide will end at any Internet-facing interfaces or at the edge of the autonomous system (AS).

The network administrator can create an AS boundary simply by not configuring PIM on any external interfaces. If PIM is not required outside the AS, there is no need to include those interfaces in the PIM domain. This creates a natural PIM control-plane boundary between the routers inside and outside the PIM domain. Figure 5-15 depicts this natural control-plane edge with the Internet, as well as a configured boundary for the global multicast domain.

Image

Figure 5-15 Multicast Boundaries

The definition of a domain that we have used thus far is fairly loose and, in the case of Figure 5-15, follows the natural control-plane edge. Some applications and environments may need a more restrictive way to define boundaries. In addition, for locally scoped applications (those defined by administratively scoped group addresses), it may be necessary to create boundaries and scope beyond the services offered by routers. Firewalls and router access lists (ACLs) can, and in most cases should, use rules to prevent the spread of multicast information beyond a specific boundary. In the same way, multicast hosts can be configured with similar security measures where warranted.

Application developers can also be part of the bounding process. One way an application can be bounded is by effectively using the time-to-live (TTL) field in multicast source packets to limit the scope of transmission. Scope, in this sense, is the number of hops (usually routers) across which a packet can be forwarded. IP multicast packets have IP headers identical to those used in unicast, and the normal rules of transmission apply: each router in the path inspects the TTL of the packet and decrements it by one when forwarding. If the application sets the multicast IP TTL to 4, the region is scoped to four hops. This is a good way to ensure that local data stays local.
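
Routers can enforce a similar boundary from the network side with the legacy ip multicast ttl-threshold interface command, which forwards a multicast packet out the interface only if its remaining TTL is greater than the configured value; the interface and threshold in this sketch are hypothetical:

interface GigabitEthernet0/0
 ip multicast ttl-threshold 16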

Perhaps the best way to segment a domain or region is to configure hard network boundaries, and there are different ways of achieving this. One easy way is to prevent the spread of dynamic multicast group mappings. All Cisco router operating systems provide a way to limit the scope of both Auto-RP and BSR updates. If routers outside the configured scope do not have the RP mappings, they cannot participate in tree-building within that domain. There are two ways to limit the scope, depending on the protocol announcement method in use.

The first should be fairly obvious, using the scope option in the Auto-RP commands, as shown here:

ip pim send-rp-announce Loopback0 scope 3
ip pim send-rp-discovery Loopback0 scope 3

The scope option sets the TTL on the Auto-RP RP-Announce and RP-Discovery messages. If the domain has only four routers, setting the scope to 3 should be sufficient to prevent the proliferation of the group mappings for that domain; the preceding IOS/XE commands accomplish this boundary requirement. The ability to control the multicast RP control plane via TTL is one of the advantages of using Auto-RP in a scoped domain.

There are, of course, more granular methods of controlling Auto-RP announcements as well, and these methods also apply to creating a secure border for any multicast group address or schema. One easy way to provide a boundary around a domain and its Auto-RP announcements is to use the multicast boundary interface-level configuration command, as shown in Table 5-12.

Image

Table 5-12 Configuring Multicast Boundaries

To see how this works in practice, review the setup in Figure 5-16. This simple multicast implementation uses an Auto-RP configuration and two multicast domains: a global scope, 239.1.1.0/24, and a local scope, 239.192.0.0/16. The diagram shows only two RPs, for the global and local scopes; the RP high-availability portion of the setup is not shown.

Image

Figure 5-16 Multicast Boundary Example

We can now examine the configuration required for this setup. Router R4 is the spoke WAN router, and we will add a multicast boundary configuration to it. Example 5-23 shows the commands needed to enable the boundary configuration.

Example 5-23 Configuring a Proper Multicast Boundary List


ip access-list standard LSCOPE
 deny   224.0.1.39
 deny   224.0.1.40
 deny   239.192.0.0 0.0.255.255
 permit any


The LSCOPE ACL denies Auto-RP advertisement leakage outside the domain (which ends at interface Ethernet0/0) by denying groups 224.0.1.39 and 224.0.1.40. The data plane for the local groups in the 239.192.0.0/16 supernet is also contained within the local scope by its deny statement. All other groups are allowed by the permit any statement at the end of the access list.

The next step is to apply the access-list to the boundary interface (Ethernet0/0), using the ip multicast boundary command with the ACL name, as shown in the output in Example 5-24.

Example 5-24 Applying a Multicast Boundary List


r4-spoke#show running-config interface ethernet 0/0
Building configuration...

Current configuration: 118 bytes
!
interface Ethernet0/0
 ip address 10.1.3.2 255.255.255.0
 ip pim sparse-mode
 ip multicast boundary LSCOPE out
end



Note

Make special note of the out keyword used in Example 5-24. If the out keyword is not applied, the boundary acts as an implicit deny in both directions and will not allow important group mappings to occur. In this example, it would prevent the global group mappings from being learned in the local scope.


Issuing the show ip pim rp mapping command on R3-WAN shows the global multicast group block 239.1.1.0/24 and global RP mappings, as demonstrated in Example 5-25.

Example 5-25 Boundary Group Mapping


r3-WAN# show ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 239.1.1.0/24
  RP 192.168.2.2 (?), v2v1
    Info source: 192.168.2.2 (?), elected via Auto-RP
         Uptime: 07:39:13, expires: 00:00:21
r3-WAN#

The show ip pim rp mapping output at the local domain node (R5) is:

r5-LRP# show ip pim rp mapping
PIM Group-to-RP Mappings
This system is an RP (Auto-RP)
This system is an RP-mapping agent (Loopback0)

Group(s) 239.1.1.0/24
  RP 192.168.2.2 (?), v2v1
    Info source: 192.168.2.2 (?), elected via Auto-RP
         Uptime: 07:22:34, expires: 00:00:26
Group(s) 239.192.1.0/24
  RP 192.168.5.5 (?), v2v1
    Info source: 192.168.5.5 (?), elected via Auto-RP
         Uptime: 07:22:34, expires: 00:00:25
r5-LRP#


The local RP thus sees the overlap of the two multicast domains and knows both RPs (global and local).

BSR uses a less centralized but more sophisticated method of limiting the scope of BSR advertisements, as shown in Table 5-13.

Image

Table 5-13 PIM Border Configuration Commands

The bsr border command is issued globally in NX-OS, on the interface in IOS/XE, and under the router pim interface subcommand mode in IOS-XR. The command creates a BSR border (boundary), preventing BSR advertisements from being forwarded on any participating interface. Consequently, if the scope of a region is still four routers, each router with an external-facing interface would need to be configured with this command. Although this requires additional configuration work, the border becomes more flexible and granular, addressing a major weakness of the Auto-RP scope option. (The scope should be equal to the network diameter, measured in hops, but the placement of the RP may not be ideal in relation to the required hop count.)
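
In IOS/XE, for example, the border is applied per interface, as in this sketch (the interface name is hypothetical):

interface GigabitEthernet0/2
 ip pim sparse-mode
 ip pim bsr-border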


Note

NX-OS also provides the option of applying the bsr border command globally and then allowing specific interfaces to forward using the ip pim bsr forward interface configuration command.


The primary intent of this section is to illustrate that multicast domains are, by definition, a little different and more fluid than their unicast counterparts. Where and how you divide domains is a critical design decision with myriad consequences. It affects the scope and topology of an application, the size of the MRIB and MFIB tables, the security of the applications, the need for specialized configuration, and the types of devices that will access the application.

There are other ways to control dynamic RP-to-group mapping. The ip pim accept-rp command is used with the RP address and an optional ACL to limit dynamic RP learning to specific group addresses. Note that if the RP is also the first-hop designated router (DR) for directly connected sources, PIM register packets will not be filtered by the ip pim accept-register command. Table 5-14 shows the command syntax for the accept-rp configuration option.

Image

Table 5-14 Accept-RP Commands

The configuration in Example 5-26 accepts join and prune messages for the RP at 10.3.3.3 only when they are destined for the multicast group 239.1.1.1.

Example 5-26 Accept-RP Configuration Example


ip pim accept-rp 10.3.3.3 Group_Address
!
ip access-list standard Group_Address
 permit 239.1.1.1


It is important to have a clear delineation between the edge of the multicast network control plane and other network connections, especially external connections such as the Internet. The first and most important step is to disable PIM on any interfaces not in the PIM forwarding path and on any externally facing interfaces. This prevents a neighbor relationship from forming on those interfaces. It is common practice to install a protective ACL on external interfaces as well (such as anti-spoofing or other basic filters). These ACLs should end with a deny ip any any and should not include a permit for any unwanted multicast traffic.

However, in some broadcast networks, such as Metro Ethernet, PIM is needed on broadcast media where both internal and external paths may lie. In other cases, PIM neighbor relationships may be undesirable for certain peers but desirable for others. Either way, a method is needed to filter out unwanted neighbors, permitting PIM hello messages only from explicit addresses. The neighbor-filter command, shown in Table 5-15, assists with this configuration need; a brief sketch follows the table.

Image

Table 5-15 Neighbor Filter Commands
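
A brief IOS sketch follows; the ACL name, the permitted neighbor address, and the interface are hypothetical:

ip access-list standard PIM-PEERS
 permit 10.1.2.1
!
interface GigabitEthernet0/3
 ip pim sparse-mode
 ip pim neighbor-filter PIM-PEERS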

Finally, any other measure that a network administrator would use to create a boundary between networks is appropriate to apply at the multicast edge. Firewalls are excellent devices for edge-network separation, and many firewalls, including the Cisco ASA appliance, can inspect multicast packets. A single-context firewall in routed mode can participate in the multicast control and data planes. For multi-context firewall deployments, transparent mode is recommended for multicast support, security, and state inspection. Also, network administrators should ensure that corporate security policies include IP multicast security measures at any appropriate level of the infrastructure or application architecture.

Protecting Multicast RPs

The key to protecting multicast RPs is to protect the control-plane resources of the RP itself. Typically, RPs are placed near the heart of the network, away from the network edge. With the exception of PIM-BiDir, the RP does very little multicast forwarding, unless it is directly in the flow of traffic. Consequently, protecting the data plane should be relatively easy to do by removing the RP from the main multicast data path.

If the network is large, or if there are many multicast streams, the RP can be taxed. Remember that when a new source starts transmitting in a PIM sparse-mode network, the packets are encapsulated and sent to the RP in unicast register messages. By default, an RP is therefore vulnerable to accepting registrations from inappropriate DRs for inappropriate groups, as well as to accepting more registrations than it can handle in memory.

It is a good idea to lock the RP down so that it accepts registrations only for groups that are part of an explicit list. Using an ACL, an accept-register list does just that. This security feature provides control over which sources and groups can register with the RP from the FHR. Table 5-16 lists the commands needed to lock out unwanted group registrations at the RP; a short sketch follows the table.

Image

Table 5-16 accept-register Commands for RPs
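
On an IOS RP, for example, the filter might look like the following sketch; the ACL name, source subnet, and group range are hypothetical:

ip access-list extended REGISTER-FILTER
 permit ip 10.1.50.0 0.0.0.255 239.1.1.0 0.0.0.255
!
ip pim accept-register list REGISTER-FILTER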

Aside from explicitly limiting the exact entries allowed to create state on the RP, you can also limit the rate at which state entries are created. The multicast registration process can tax the CPU of the designated router (DR) and the RP if the source is running at a high data rate or if many new sources start at the same time. This scenario can occur immediately after a network failover.

Limiting the registration rate protects the RP control plane from being overwhelmed during significant multicast state churn. Remember that state churn can be a consequence of events in either the multicast overlay or the underlying unicast network. Table 5-17 displays the commands used to limit the rate of RP registrations.

Image

Table 5-17 RP Group Register Rate-Limiting Commands

The number to which register packets should be limited depends largely on the number of potential sources registering at the same time and on their data rates. Determining a baseline for your multicast applications gives the administrator a general idea of their communication characteristics. A typical setting in a PIM sparse-mode (PIM-SM) network is between 4 and 10 messages per second.
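
In IOS, this is a single global command; the value shown here is hypothetical and should be derived from your application baseline:

ip pim register-rate-limit 10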

In an Auto-RP environment, you can also use filtering in your design to constrain which RPs are accepted for which multicast groups. This is needed if you have unique requirements to split the multicast groups aligned to each RP. It is achieved using the ip pim rp-announce-filter rp-list access-list group-list access-list command, normally configured on the Auto-RP mapping agent. Table 5-18 displays the commands used to configure RP announce filters for mapping agents.

Image

Table 5-18 RP Announce Filter Commands

Finally, the commands listed for IOS/XE and NX-OS in Table 5-18 allow a mapping agent to filter RP group-mapping announcements, giving an administrator control over domain learning from the Auto-RP mapping agent. IOS-XR does not support this command; it assumes that domains have sufficient controls in place before mapping agent announcements come into play.
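
A short IOS sketch for a mapping agent follows; the ACL numbers, RP address, and group range are hypothetical:

access-list 10 permit 192.168.5.5
access-list 20 permit 239.192.0.0 0.0.255.255
!
ip pim rp-announce-filter rp-list 10 group-list 20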

Best Practice and Security Summary

The last several sections provide many solutions for protecting the multicast control plane and, ultimately, the integrity of the entire network. Determining the appropriate solution depends on the landscape of the network infrastructure, the capabilities of the organization managing the solution, and the risk you are willing to accept. Table 5-19 provides an overview of the different solutions.

Image
Image

Table 5-19 Summary of Basic Multicast Deployment Recommendations

Putting It All Together

Let’s go back and take a look at our example company, Multicaster’s Bank Corp. Figure 5-17 illustrates the high-level topology for the Multicaster’s Bank Corp. network. This scenario is a good example for bringing together all the concepts of configuring PIM.

Image

Figure 5-17 Multicaster’s Bank Corp. Network

Scenario: Multicaster’s Bank Corp. Media Services

Multicaster’s Bank Corp.’s marketing department has decided to make some upgrades to the way they manage their brand locally at the Los Angeles HQ, New York City HQ, and Boca Raton regional offices. The HQ campuses share a very similar design, and the company wishes to deploy localized digital signage to each campus, giving media control to the local marketing teams. They purchased a digital signage media engine for each HQ office. They have also purchased a new webcasting TV system for internal management communications to employees (IPTV) that will be used corporate-wide. As new products are announced, the IPTV service can update employees with important messages about product changes.

In this scenario, we work for the director of infrastructure. The marketing department has given her some high-level requirements about the two systems. In addition, she is concerned that adding a lot of unicast media traffic to the campus LAN could impact daily operations at all three buildings. Both the digital signage and IPTV servers support multicast, and she has asked us to look over the requirements and make the engineering and configuration recommendation to meet the requirements using multicast.

Requirements:

Image There will be 20 unique feeds of IPTV media so individual departments can select between different services or advertisements from the service.

Image Each of the HQ campuses should have the ability to support up to eight unique streams.

Image The initial media systems will not be built with redundancy until Phase Two. However, this is a business-critical application for marketing; thus, the infrastructure supporting both systems should be as dynamic, reliable, and as redundant as possible.

Image Because these systems are an integral part of the company’s branding, they should be made as secure as possible network-wide.

Image Each of the HQ digital signage systems should be separate from each other and no streams should appear at the Boca Raton office. There should be no chance that signage media can leak from one campus to another.

Image All workstations across the campus should be able to tap into the IPTV feed. The IPTV server will be located at the Boca Raton office, where the principal IT is located. The location of the IPTV server is shown at a high-level in Figure 5-18. Figure 5-19 shows the NYC HQ office network with the digital signage media server connected. The LA campus has an identical configuration to that of the NYC campus.


Note

Not all service providers offer the same services when it comes to multicast. One SP may not allow native multicast over the MPLS links, whereas another may. Some SPs may limit multicast traffic by default. It is important for enterprise customers to work with their service providers to determine which services are available, the fees that may apply (if any), and how services may affect a given design.



Figure 5-18 Multicaster’s Boca Raton Office


Figure 5-19 NYC Office

In addition to these requirements, our director has reminded us of a few technical details relevant to enabling multicast:

• A single MPLS cloud, obtained through a national service provider, connects the three corporate campuses. Each of the three campuses has two routers, each with one link to the provider, as shown in Figure 5-20.


Figure 5-20 Multicaster’s MPLS WAN Cloud

• The MPLS service provider supports full PIM sparse-mode multicast over the MPLS backbone, and the provider uses Cisco edge routers. Multicaster’s six routers each have an EIGRP neighbor relationship with the MPLS cloud.

• The service provider enforces a multicast state limit of 24 entries per customer. (This depends on the service-level agreement that the enterprise has with the service provider.) Any entries beyond this state limit will not be installed on the provider routers.

• Because firewalls are not yet in the path of the media servers, network security has recommended that multicast filtering be used to ensure that group state can be formed only where appropriate and that IGMP be locked down on gateway routers.

• The IPTV server and the two digital signage media servers support only IGMPv2.

• Network configurations should always be identical (with the exception of IP addresses) between the two HQ campuses; this creates network operations consistency. To ease stream identification, however, each of the eight streams within a campus should use a different multicast group address.

Using this information, we should be able to derive an appropriate multicast design and configuration that suits Multicaster’s needs. It is clear from the requirements that multiple overlapping domains will provide the separation needed between the local media engines and the IPTV service. There should be one corporate-wide domain that encompasses IPTV and reaches all workstations in the three campus networks. In addition, the digital signage streams are best localized if a separate overlapping domain is created for each HQ location (NYC and LA, respectively). We will propose dividing the network into the three domains shown in Figure 5-21.


Figure 5-21 Overlapping PIM Domains

Domains should be segmented and secured using appropriate filters at the edge of each domain. Because the servers do not support IGMPv3, we will not be able to use SSM; consequently, sparse-mode PIM is our best option. To achieve dynamic ASM state consistency and reliability, we should use a dynamic RP feature in combination with Anycast RP redundancy within each domain. Auto-RP will be a good choice to provide our dynamic RP-mapping mechanism.

All routers and switches fall within the larger corporate domain. The WAN routers R1 and R3 connecting to the MPLS cloud will provide the Anycast RP and Auto-RP candidate functions for the corporate domain. Figure 5-22 shows the high-level design for this configuration, with R1 and R3 acting as both Anycast peers and RP candidates. The primary loopback (Loopback0) interfaces on R2 and R4 will act as redundant mapping agents.


Figure 5-22 Multicaster’s Global Domain

To support the local HQ domains, we will also propose using Anycast RP and Auto-RP, ensuring network operations consistency. Because these domains are much smaller, we will consolidate the RP candidate and mapping agent functions onto the same devices. Campus Layer 3 core switches C3 and C4 will act as the RPs for these domains, and the core switches should also form the boundary of each local domain. Because the LA and NYC campuses have identical configurations, we can use the same design for both. Figure 5-23 shows the proposed high-level design for the NYC campus domain.


Figure 5-23 NYC Office RP Configuration

Let us examine the configuration steps on the network routers and switches that enable PIM across the network; a brief configuration sketch follows each list of steps.

All routers and Layer 3 core switches:

Step 1. Enable multicast routing globally using the ip multicast-routing command.

Step 2. Enable PIM on all interfaces in the multicast distribution path, including interfaces facing either sources or receivers.

Step 3. Enable PIM on all relevant loopback interfaces (this step is critical for future configuration steps).

Step 4. Enable ip pim auto-rp listener on all multicast-enabled routers.

Step 5. Ensure that PIM is disabled on all other interfaces not in the forwarding path, in particular on any interfaces that face external entities, including VPN or other tunnel interfaces, and on Internet-facing interfaces.

Step 6. If required by security policy, manually lock down PIM neighbor relationships on PIM interfaces using the ip pim neighbor-filter command, especially on those interfaces connecting to the MPLS service provider.

Step 7. Use an ACL denying all PIM and all multicast for any of the interfaces identified in Step 5.

Step 8. Tune all unicast routing for fast convergence, if applicable to the infrastructure and application (not a requirement for multicast), and turn on multipath multicast when and where it is warranted.
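As a concrete illustration, a minimal sketch of this base configuration on one router might look like the following. The interface roles, the provider PE address (192.0.2.1), and the ACL number are assumptions, and platform syntax varies (some platforms require the distributed keyword on ip multicast-routing):

ip multicast-routing
!
interface Loopback0
 ip pim sparse-mode
!
interface GigabitEthernet0/1
 description Campus-facing interface in the multicast distribution path
 ip pim sparse-mode
!
interface GigabitEthernet0/0
 description MPLS provider-facing interface
 ip pim sparse-mode
 ! Step 6: permit only the provider PE as a PIM neighbor
 ip pim neighbor-filter 1
!
access-list 1 permit 192.0.2.1
!
! Step 4: allow the router to learn RP mappings via Auto-RP
ip pim auto-rp listener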

All Layer 2 switches:

Step 1. Enable PIM on any Layer 3 interfaces facing sources or receivers, including switch virtual interfaces (SVIs).

Step 2. Ensure that IGMP snooping is enabled.

Step 3. Tune STP to ensure fast convergence and forwarding efficiency throughout the Layer 2 domain.
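A comparable sketch for a Layer 3-capable campus switch follows; the VLAN number and addressing are assumptions:

! IGMP snooping is enabled by default on most Catalyst platforms; verify it
ip igmp snooping
!
! Step 1: PIM on an SVI acting as the gateway for a media VLAN
interface Vlan100
 ip address 10.10.100.1 255.255.255.0
 ip pim sparse-mode
!
! Step 3: a fast-converging spanning-tree mode, where supported
spanning-tree mode rapid-pvst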

When the network is up and PIM is communicating across the network, it is time to create the address schema that will be used for each application and domain. Remember that the requirements call for the same group addresses to be used in both NYC and LA. Because both the IPTV and digital signage applications are private ASM applications, we will assign groups from the administratively scoped 239.0.0.0/8 block. Our addressing requirements are very small. We can use the second octet of the block to indicate the scope of the domain (global or local), the third octet to indicate the application type, and the fourth octet to indicate the stream number. If we use .10 and .20 to represent the global and local scopes respectively, while using .1 to represent the IPTV application and .2 the digital signage streams, the group schema looks like the one shown in Figure 5-24.
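Working through this schema, the group assignments would be as follows (the stream number fills the fourth octet):

• Corporate IPTV (global scope, application 1): 239.10.1.1 through 239.10.1.20

• Digital signage (local scope, application 2): 239.20.2.1 through 239.20.2.8, reused identically in both NYC and LA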


Figure 5-24 Multicast Address Schema

Next, we need to create the RPs, segment the domains, and secure the borders. We also need to set Auto-RP boundaries for the network using a scope. Let us start with the global IPTV domain. The following configuration steps should be taken on routers R1–R4 to create the Anycast RPs and Auto-RP mappings required for all L3 devices; a configuration sketch follows each list of steps.

Routers R1 and R3 (global RPs):

Step 1. Create an Anycast RP loopback interface with an identical address on each router. (In this case, we can use Loopback 100 with IP address 192.168.254.250, making sure the interface is PIM sparse-mode–enabled.)

Step 2. Establish Anycast MSDP peering between routers R1 and R3 using the main loopback (Loopback0) interface as the peering source. Loopback0 should also be configured as the MSDP originator ID.

Step 3. Configure the new interface as the Auto-RP candidate using a scope that covers the maximum breadth (in hop count) of the service provider network and Multicaster’s routers.

Step 4. Create and apply an accept-register filter that limits multicast group registrations to only those in the global group address schema (deny the NYC and LA local groups).

Step 5. Ensure that the new loopback (Loopback 100) is advertised into the EIGRP unicast routing domain.
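For illustration, a sketch of the R1 side of this configuration follows (R3 would mirror it). The MSDP peer address, the Auto-RP scope of 16 hops, the ACL numbers and names, and the EIGRP AS number are all assumptions:

interface Loopback100
 description Anycast RP address shared between R1 and R3
 ip address 192.168.254.250 255.255.255.255
 ip pim sparse-mode
!
! Step 2: MSDP peering sourced from Loopback0 (peer address assumed)
ip msdp peer 192.168.0.3 connect-source Loopback0
ip msdp originator-id Loopback0
!
! Step 3: advertise Loopback100 as a candidate RP for the global groups
ip pim send-rp-announce Loopback100 scope 16 group-list 10
access-list 10 permit 239.10.0.0 0.0.255.255
!
! Step 4: accept registrations for global-scope groups only
ip pim accept-register list GLOBAL-GROUPS
ip access-list extended GLOBAL-GROUPS
 permit ip any 239.10.0.0 0.0.255.255
!
! Step 5: advertise the Anycast address into EIGRP
router eigrp 100
 network 192.168.254.250 0.0.0.0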

All downstream routers:

Step 1. Configure Loopback0 on R2 and R4 as Auto-RP mapping agents using a scope that covers the maximum breadth of the service provider network and Multicaster’s routers.

Step 2. Configure all edge gateway routers with an IGMP accept-list limited to only those groups included in the group schema.

Step 3. Create and apply a multicast boundary list on every Layer 3 device/interface that makes up the edge of the multicast network, such as Internet-facing interfaces (not shown in the diagrams).

Step 4. Use ACLs on the service-provider-facing interfaces to limit IP multicast packets to only those groups specified in the schema.
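A corresponding downstream sketch follows; the interface roles, the scope, and the ACL contents are assumptions, and Step 2 is implemented with the ip igmp access-group interface command:

! Step 1 (R2 and R4 only): redundant Auto-RP mapping agents
ip pim send-rp-discovery Loopback0 scope 16
!
! Step 2: limit IGMP joins on receiver-facing gateway interfaces
interface GigabitEthernet0/2
 description Receiver-facing gateway interface
 ip igmp access-group SCHEMA-GROUPS
!
ip access-list standard SCHEMA-GROUPS
 permit 239.10.1.0 0.0.0.255
 permit 239.20.2.0 0.0.0.255
!
! Steps 3 and 4: block all multicast at interfaces forming the network edge
interface GigabitEthernet0/3
 description Internet-facing edge interface
 ip multicast boundary EDGE-DENY
!
ip access-list standard EDGE-DENY
 deny any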

In the preceding configuration, interface Loopback0 is the primary EIGRP and router management loopback on each router. We used interface Loopback 100 on R1 and R3 with address 192.168.254.250/32 as the Anycast RP address. A network statement is added to EIGRP to propagate this address to the rest of the network. Remember that the network routers build the shared tree toward the closest unicast RP, meaning each router performs a Layer 3 recursive lookup on the RP address, and whichever path has the lowest metric is used as the RPF path. Auto-RP also uses this loopback address as the candidate RP, giving us a hybrid RP design.
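We can verify which Anycast path a given router selects by running show ip rpf against the RP address. The abridged output below is illustrative only; the interface and neighbor address are assumptions:

R2# show ip rpf 192.168.254.250
RPF information for ? (192.168.254.250)
  RPF interface: GigabitEthernet0/1
  RPF neighbor: ? (10.1.12.1)
  RPF route/mask: 192.168.254.250/32
  RPF type: unicast (eigrp 100)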

We need to make similar configuration changes for the local HQ domains (local RPs). For brevity, here we show only the NYC domain configuration steps, followed by a configuration sketch.

Step 1. On the NYC core L3 switches C3 and C4, create a new loopback interface (in this case, we will use Loopback 200 on each), enabling PIM sparse mode and assigning IP address 172.30.254.245.

Step 2. Establish Anycast MSDP peering between the primary loopback (Loopback0) interfaces on both core switches.

Step 3. Configure the new loopback, Loopback 200, as both the Auto-RP candidate and the Auto-RP mapping agent with scope 2 on both switches.

Step 4. On the boundary routers, filter Auto-RP to prevent local scope leakage. The TTL threshold at the boundary should be one more than the Auto-RP scope configured on the candidate RP and mapping agent. This configuration prevents announce and discovery messages from leaking out of the local site.
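A sketch for one of the NYC core switches follows; the MSDP peer address, the group-list ACL number, and the WAN-facing boundary interface are assumptions. Note that the filter-autorp keyword strips the denied local group ranges from Auto-RP messages crossing the boundary while still allowing the global mappings through:

interface Loopback200
 description NYC local Anycast RP address
 ip address 172.30.254.245 255.255.255.255
 ip pim sparse-mode
!
! Step 2: MSDP peering between C3 and C4 (peer address assumed)
ip msdp peer 172.30.0.4 connect-source Loopback0
!
! Step 3: candidate RP and mapping agent, both scoped to 2 hops
ip pim send-rp-announce Loopback200 scope 2 group-list 20
ip pim send-rp-discovery Loopback200 scope 2
access-list 20 permit 239.20.0.0 0.0.255.255
!
! Step 4: on boundary interfaces, block the local group range and strip
! it from any Auto-RP messages that cross the boundary
interface TenGigabitEthernet1/1
 description Interface facing the WAN routers
 ip multicast boundary LOCAL-SCOPE filter-autorp
!
ip access-list standard LOCAL-SCOPE
 deny 239.20.0.0 0.0.255.255
 permit any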

We would repeat these steps exactly in the LA local domain. However, in this scenario we want the LA and NYC local domains to be identical, including the group addresses. Keep in mind that the larger Layer 3 unicast network will end up having multiple routes for all the Anycast peers. What would happen if we used the same loopback IP addresses for the Anycast RPs in both domains?


Note

All Layer 3 devices use the unicast RIB entry for the loopback to build the RPF neighbor relationship. EIGRP calculates a path to each Anycast loopback, as well as a feasible successor for each, and selects the path with the lowest metric to place in the RIB. Because the unicast domain spans all three campuses, EIGRP would have a path to all four Anycast RPs carrying the same IP address (C3 and C4 in both the NYC and LA domains). To ensure domain isolation, it is recommended to use different IP addresses for the Anycast loopbacks in the NYC and LA local admin-scoped domains: one address for LA and one address for NYC. An alternative would be to use distribute lists to control the Layer 3 unicast updates, but many would consider this the more difficult option. To have complete control over the domain, use the boundary best practices previously discussed.


When all of these configuration tasks are complete, the domains should be operational and isolated. Further security and domain configuration is likely warranted. We would use the preceding sections and other published configuration guides to derive the appropriate additional configurations for each PIM router in the network. Ultimately, we need to ensure that the RPs are protected and that the mapping process on all routers is limited to those groups explicitly configured. Other security measures should be based on an individual organization’s policy and are therefore not included in this scenario.

Summary

This chapter discussed the importance of creating a functional schema for IP multicast addressing. Such a schema provides better control over security implementations and builds a foundation for easily managing applications by encoding location and application identity in the multicast group address.

We considered design elements for proper RP placement and implementation strategies, including active/active and active/standby models, as well as the different solutions using Auto-RP, BSR, static RP, Anycast RP, and MSDP mesh groups.

We generally desire that traffic flows in the unicast network and the multicast overlay be congruent, but there are additional considerations for multicast when equal-cost multipath unicast routes are involved. These include load sharing using multipath selection, static entries, and BGP.

Security has been a growing concern over the years, and increasing the footprint of the infrastructure by adding multicast capability adds to the challenge. Fortunately, several mechanisms are available to protect the control plane and data plane for multicast. Some of these include control-plane protection, multicast filtering, and scoping.
