Multiple-Router Branch Sites

For sites with multiple routers, the configuration must accommodate sites that have only downstream routers (Site 4) as well as sites with multiple IWAN routers (Site 5). The multirouter site routing configuration uses mutual redistribution between the IGP and BGP so that it can accommodate both scenarios. The multirouter site routing design includes the following logic:

■ Establish an IGP routing protocol for the LAN networks, such as OSPF in this book's sample scenario. The IGP should not be enabled on, or advertise, the DMVPN tunnel networks at the branch sites.

■ The BGP-learned default route is advertised into the IGP routing protocol. This provides a default route for Internet traffic for any routers at the branch site.

■ Redistribute BGP routes into the IGP. In essence, only the BGP summary routes are redistributed into the IGP. During this process the routes are tagged as the first step of the loop prevention process. The command bgp redistribute-internal is required to redistribute IBGP-learned network prefixes into the IGP.

■ Selectively redistribute IGP routes into BGP. Any route in the IGP that was not tagged in the earlier step (indicating that the route originated in the IGP) is redistributed into BGP.


Note

The DMVPN tunnel networks should not be redistributed into BGP at the branch routers.


There are two things to consider when OSPF is used as the IGP and interacts with BGP:

■ By default, the redistribution of OSPF into BGP includes only OSPF internal routes; OSPF external routes require explicit identification. External routes can be matched in a route map or included by adding the match keyword to the redistribute command under the BGP process.

■ The router must have a default route in the routing table to inject the 0.0.0.0/0 link-state advertisement (LSA) into the OSPF database. The default route is advertised as an external Type-2 LSA by default. (External routes are classified as Type-1 or Type-2, with a Type-1 route preferred over a Type-2 route.)

The command default-information originate [always] [metric metric-value] [metric-type type-value] advertises the default route into OSPF. In essence, this command redistributes the default route from another protocol into OSPF. The optional always keyword removes the requirement for the default route to be present on the advertising router. By default, BGP does not redistribute internal routes (routes learned from an IBGP peer) into an IGP such as OSPF as a safety mechanism. The command bgp redistribute-internal allows IBGP routes to be redistributed into the IGP and is required for advertising the default route into OSPF in this scenario.
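
For comparison only, and not part of the book's sample configuration, the OSPF external routes could instead be included directly on the redistribute command with the match keyword rather than being identified in a route map. A minimal sketch of that alternative:

router bgp 10
 address-family ipv4
  bgp redistribute-internal
  ! Include OSPF internal and both external route types in the redistribution
  redistribute ospf 1 match internal external 1 external 2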

Example 4-35 displays the multirouter site configuration for R41, R51, and R52. OSPF uses a passive interface default to prevent an OSPF neighborship from forming across the DMVPN tunnel in case OSPF is accidentally enabled.

Example 4-35 Configuration for Downstream OSPF Routers


R41
router bgp 10
 address-family ipv4
  neighbor MPLS-HUB next-hop-self all
  neighbor INET-HUB next-hop-self all


R51
router bgp 10
 address-family ipv4
  neighbor MPLS-HUB next-hop-self all


R52
router bgp 10
 address-family ipv4
  neighbor INET-HUB next-hop-self all


R41, R51, and R52
router ospf 1
 passive-interface default
 no passive-interface GigabitEthernet0/3
 no passive-interface GigabitEthernet1/0
 redistribute bgp 10 subnets route-map REDIST-BGP-TO-OSPF
 network 10.0.0.0 0.255.255.255 area 0
 default-information originate
!
router bgp 10
 address-family ipv4
  bgp redistribute-internal
  redistribute ospf 1 route-map REDIST-OSPF-TO-BGP
!
route-map REDIST-BGP-TO-OSPF permit 10
 description Set a route tag to identify routes redistributed from BGP
 set tag 1
!
route-map REDIST-OSPF-TO-BGP deny 10
 description Block all routes redistributed from BGP
 match tag 1
!
route-map REDIST-OSPF-TO-BGP deny 15
 match ip address prefix-list TUNNEL-DMVPN
!
route-map REDIST-OSPF-TO-BGP permit 30
 description Redistribute all other traffic
 match route-type internal
 match route-type external type-1
 match route-type external type-2
!
ip prefix-list TUNNEL-DMVPN seq 10 permit 192.168.100.0/24
ip prefix-list TUNNEL-DMVPN seq 20 permit 192.168.200.0/24


Example 4-36 demonstrates that OSPF is enabled only on the LAN and loopback interfaces, and that the OSPF networks were redistributed into BGP. Notice that the DMVPN networks were not redistributed into BGP.

Example 4-36 Verification of OSPF Interfaces and Route Advertisements into BGP


R51-Spoke# show ip ospf interface brief
Interface    PID   Area            IP Address/Mask    Cost  State Nbrs F/C
Lo0          1     0               10.5.0.51/32       1     LOOP  0/0
Gi1/0        1     0               10.5.5.51/24       1     DR    1/1
Gi0/3        1     0               10.5.12.51/24      1     DR    1/1


R51-Spoke# show bgp ipv4 unicast
BGP table version is 408, local router ID is 10.5.0.51
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
              r RIB-failure, S Stale, m multipath, b backup-path, f RT-Filter,
              x best-external, a additional-path, c RIB-compressed,
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found

     Network          Next Hop            Metric LocPrf Weight Path
 * i 0.0.0.0          192.168.100.21           1    100  50000 i
 *>i                  192.168.100.11           1    100  50000 i
 * i 10.3.0.31/32     192.168.100.21           0    100  50000 ?
 *>i                  192.168.100.11           0    100  50000 ?
 * i 10.3.3.0/24      192.168.100.21           0    100  50000 ?
 *>i                  192.168.100.11           0    100  50000 ?
 r>i 10.4.0.41/32     192.168.100.11           0    100  50000 ?
 r i                  192.168.100.21           0    100  50000 ?
 r>i 10.4.4.0/24      192.168.100.11           0    100  50000 ?
 r i                  192.168.100.21           0    100  50000 ?
 *>  10.5.0.51/32     0.0.0.0                  0         32768 ?
 *>  10.5.0.52/32     10.5.12.52               2         32768 ?
 *>  10.5.5.0/24      0.0.0.0                  0         32768 ?
 *>  10.5.12.0/24     0.0.0.0                  0         32768 ?


Example 4-37 verifies that the 10.3.3.0/24 network was redistributed from BGP into OSPF and the route tag of 1 was set to prevent the network prefix from being redistributed back into BGP.

Example 4-37 Verification of IGP Route Tagging to Prevent Routing Loops


R51-Spoke# show ip ospf database external 10.3.3.0
            OSPF Router with ID (10.5.0.51) (Process ID 1)
                Type-5 AS External Link States
  LS age: 528
  Options: (No TOS-capability, DC, Upward)
  LS Type: AS External Link
  Link State ID: 10.3.3.0 (External Network Number)
  Advertising Router: 10.5.0.51
  LS Seq Number: 800000D5
  Checksum: 0x1477
  Length: 36
  Network Mask: /24
        Metric Type: 2 (Larger than any link state path)
        MTID: 0
        Metric: 1
        Forward Address: 0.0.0.0
        External Route Tag: 1


Changing BGP Administrative Distance

A review of the BGP table on R51 in Example 4-36 indicates that there are BGP RIB failures (indicated by the 'r' status code). The command show bgp afi safi rib-failure provides more detailed information on the RIB failure. This behavior happens because OSPF has a lower AD (110) than routes learned from an IBGP peer (200), which is confirmed in Example 4-38.

Example 4-38 Identifying the Reason for BGP RIB Failure


R51-Spoke# show bgp ipv4 unicast rib-failure
  Network            Next Hop                      RIB-failure   RIB-NH Matches
10.4.0.41/32       192.168.100.11      Higher admin distance              n/a
10.4.4.0/24        192.168.100.11      Higher admin distance              n/a


BGP differentiates between routes learned from IBGP peers, routes learned from EBGP peers, and locally learned routes. The AD needs to be modified on all the DMVPN routers to ensure that they always prefer IBGP routes over OSPF routes. The AD for EBGP-learned routes is raised so that, if routes are leaked from other SP services to provide connectivity, the IWAN-learned routes always take precedence. The default AD values can be modified with the address-family command distance bgp external-ad internal-ad local-routes, which sets the AD for each type of BGP route.


Note

Locally learned routes are from aggregate (summary) or backdoor networks. Routes advertised via the network statement use the AD setting for IBGP routes.


Example 4-39 displays the configuration to modify the AD so that IBGP routes from the WAN are preferred over OSPF paths. This configuration is deployed to all the routers (branch and hub) to ensure a consistent routing policy.

Example 4-39 Modification of BGP Administrative Distance


R11, R12, R21, R22, R31, R41, R51, and R52
router bgp 10
 address-family ipv4 unicast
  distance bgp 201 19 19



Note

The changes to the BGP AD are not seen until the route is reintroduced to the RIB. The command clear ip route * forces the RIB to reload from all routing protocol databases.


Example 4-40 verifies that the AD has changed for BGP and is now 19 for IBGP-learned routes. R12 now uses the BGP route learned via tunnel 200 to reach the 10.3.3.0/24 network and redistributes that route into OSPF as well, so R13 now has two equal-cost paths to reach the 10.3.3.0/24 network.

Example 4-40 Verification of AD Change


R51-Spoke# show bgp ipv4 unicast
! Output omitted for brevity
     Network          Next Hop            Metric LocPrf Weight Path
 *>i 10.4.0.41/32     192.168.100.11           0    100  50000 ?
 * i                  192.168.100.21           0    100  50000 ?
 *>i 10.4.4.0/24      192.168.100.11           0    100  50000 ?
 * i                  192.168.100.21           0    100  50000 ?


R51-Spoke# show bgp ipv4 unicast rib-failure
  Network            Next Hop                      RIB-failure   RIB-NH Matches


R51-Spoke# show ip route bgp
! Output omitted for brevity
B*    0.0.0.0/0 [19/1] via 192.168.100.11, 00:13:57
      10.0.0.0/8 is variably subnetted, 10 subnets, 2 masks
B        10.3.0.31/32 [19/0] via 192.168.100.11, 00:13:57
B        10.3.3.0/24 [19/0] via 192.168.100.11, 00:13:57
B        10.4.0.41/32 [19/0] via 192.168.100.11, 00:13:57
B        10.4.4.0/24 [19/0] via 192.168.100.11, 00:13:57


Route Advertisement on DMVPN Hub Routers

The DMVPN hub routers play a critical role in route advertisement and are responsible for

■ Redistributing BGP network prefixes into the IGP (OSPF).

■ Reflecting branch site network prefixes to other branch sites.

■ Advertising network prefixes that reside in the local DC.

■ Advertising network prefixes that reside elsewhere in the organization.

■ Summarizing network prefixes where feasible to reduce the size of the routing table on the branches. NHRP can inject more specific routes where spoke-to-spoke DMVPN tunnels have been established.

■ Advertising a default route for a centralized Internet access model (this task was accomplished earlier in the BGP section).

The DMVPN hub router advertises the enterprise (LAN and WAN) network prefixes to the branch routers and the WAN prefixes to the headquarters LAN via redistribution. The router must be healthy when advertising network prefixes to the branches so that it can act as a transit to the headquarters LAN. The first check is that the WAN is healthy, which is easily verified by the branch router's ability to establish a tunnel with the hub. It is equally important that the DMVPN hub router maintain LAN connectivity to avoid blackholing network traffic. PfR identifies the optimal path across the WAN transports but does not verify connectivity deeper into the network infrastructure, such as the DC, for end-to-end verification.

Network prefixes can be injected into BGP dynamically and then summarized before they are advertised to the branch sites. However, in some failure scenarios, a branch site's prefixes can still cause the enterprise prefix summary route to be generated. PfR would see the WAN transport to the impaired hub router as viable and preferred, but traffic would blackhole at that DMVPN hub router.

The solution is to place floating static routes (pointed at Null0) for the enterprise prefixes that use object tracking. The appropriate BGP network statements match the floating static routes, install the prefixes into BGP, and then advertise them to the branch sites. If the same route is present in an IGP routing table, the AD of the IGP is lower than that of the floating static route, so the IGP path is preferred.

The redistribution of BGP into OSPF is dynamic in nature and does not require any health checks. As routes are removed from the BGP table, they are withdrawn from the OSPF link-state database (LSDB).

DMVPN Hub LAN Connectivity Health Check

The DMVPN hub assesses its own health by evaluating its connectivity to the LAN. It does this by confirming reachability to the following:

■ The loopback address of the local site's PfR MC, which is either a Hub MC or a Transit MC

■ The loopback addresses of the upstream WAN distribution switches or, if there are no WAN distribution switches, of the next-hop routers that lead toward the core network

All three loopback addresses are combined into a consolidated tracked object. Only when the hub router loses connectivity to the local MC and both upstream devices is the router deemed unhealthy. At that time, the router withdraws the floating static routes, which in turn withdraws the routes from BGP. Multiple devices are used for the health check to prevent false positives when maintenance is performed on a single network device.

The logic uses the track object feature, which allows multiple types of objects to be tracked. In this instance, the route to each loopback (a /32 network prefix) is tracked. If the route is present in the RIB, the object returns an up value; if the route is not present, it returns a down value. The logic is based on the hub router having these routes in its RIB as long as it maintains an OSPF neighbor adjacency toward the LAN. When all the loopback addresses have been removed from the RIB (presumably because there is no OSPF neighborship), the tracked monitor reports a value of down.

Before the floating static routes are created, the tracking must be configured using the following process:

Step 1. Create child tracked objects.

Each of the individual routes (loopback addresses) needs to be configured as a child tracked object with the command track track-number ip route network subnet-mask reachability. Each loopback address is assigned a unique track number.

Step 2. Create a track list monitor for all the child objects.

The command track track-number list boolean or defines the master tracking entity.

Step 3. Link the child tracked objects.

The child tracked objects for each of the loopback interfaces are added with the command object track-number. The track list reports a value of up when any of the OSPF-learned loopback addresses is present in the RIB.

Example 4-41 demonstrates the configuration for creating the tracked objects so that a router can verify their health. The first tracked object is the local PfR MC. The second tracked object is the upstream router, and the third tracked object is the other DMVPN hub router at that site.

Example 4-41 Configuration to Check DMVPN Health with a LAN Network


R11
track 1 ip route 10.1.0.10 255.255.255.255 reachability
track 2 ip route 10.1.0.13 255.255.255.255 reachability
track 3 ip route 10.1.0.12 255.255.255.255 reachability


R12
track 1 ip route 10.1.0.10 255.255.255.255 reachability
track 2 ip route 10.1.0.13 255.255.255.255 reachability
track 3 ip route 10.1.0.11 255.255.255.255 reachability


R21
track 1 ip route 10.2.0.20 255.255.255.255 reachability
track 2 ip route 10.2.0.23 255.255.255.255 reachability
track 3 ip route 10.2.0.22 255.255.255.255 reachability


R22
track 1 ip route 10.2.0.20 255.255.255.255 reachability
track 2 ip route 10.2.0.23 255.255.255.255 reachability
track 3 ip route 10.2.0.21 255.255.255.255 reachability


R11, R12, R21, and R22
track 100 list boolean or
 object 1
 object 2
 object 3


The status of the health check can be viewed with the command show track, as shown in Example 4-42. The output provides the object, what is being tracked, the number of changes, and the time of the last state change. If the object is a child object, the output indicates the parent object that uses it.

Example 4-42 Verification of Object Tracking


R11-DC1-Hub1# show track
Track 1
  IP route 10.1.0.10 255.255.255.255 reachability
  Reachability is Up (OSPF)
    2 changes, last change 04:32:32
  First-hop interface is GigabitEthernet1/0
  Tracked by:
    Track List 100
Track 2
  IP route 10.1.0.13 255.255.255.255 reachability
  Reachability is Up (OSPF)
    3 changes, last change 04:31:47
  First-hop interface is GigabitEthernet1/0
  Tracked by:
    Track List 100
Track 3
  IP route 10.1.0.12 255.255.255.255 reachability
  Reachability is Up (OSPF)
    1 change, last change 04:35:17
  First-hop interface is GigabitEthernet0/3
  Tracked by:
    Track List 100
Track 100
  List boolean or
  Boolean OR is Up
    2 changes, last change 04:34:12
    object 1 Up
    object 2 Up
    object 3 Up


BGP Route Advertisement on Hub Routers

Now that the LAN health check has been configured on the hub routers, it is time to create the static routes. The static routes create an entry in the global RIB, which is required for the route to be installed into BGP when using network statements.

The static route uses the syntax ip route network subnet-mask outbound-interface [administrative-distance] [track track-number]. The static route uses the Null0 interface to drop network packets. It is set with a high administrative distance (254) so that if the route is advertised by an IGP, the IGP path is used instead. Without the high AD, the static route would always be preferred and would cause the router to drop traffic even when a valid route exists in an IGP. Because the static route is used only when the route is not present in an IGP, it is called a floating static route. The floating static route is linked to the hub health check by using the track option with the parent tracked object. As long as the tracked object is in an up state, the static route is a candidate for installation in the RIB.

A static route needs to be created for the following:

■ The enterprise prefix summary networks: These should include all the networks in use in the LAN and WAN. Generally, these are the networks in the RFC 1918 space (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16). If the organization has public IP addresses and uses them for internal connectivity, they should be added as well. For the sake of brevity, this book uses only the 10.0.0.0/8 range.

■ The local DC's networks: This ensures that traffic is directed toward these hub routers instead of other hub routers.

After those static routes have been defined, the network prefixes need to be configured in BGP. The BGP configuration command network network mask subnet-mask instructs BGP to search the global RIB for that exact prefix. When a match is found, the route is installed into the BGP table for advertisement. The command needs to be run for the enterprise prefix summary networks, the local DC networks, the local PfR MC's loopback interface, and the default route (done earlier in this chapter).


Note

PfR does not require the MC loopback address in BGP, but having it there can be very helpful when troubleshooting. A static route for the local MC is not required because it is a /32 address learned via the IGP.


Example 4-43 displays the configuration of the floating static routes and the BGP network statements. Notice that the configuration is grouped by DC site location.

Example 4-43 Configuration of Floating Static Routes and BGP Network Statements


R11 and R12
ip route 10.0.0.0 255.0.0.0 Null0 254 track 100
ip route 10.1.0.0 255.255.0.0 Null0 254 track 100
!
router bgp 10
 address-family ipv4
  network 10.0.0.0 mask 255.0.0.0
  network 10.1.0.0 mask 255.255.0.0
  network 10.1.0.10 mask 255.255.255.255


R21 and R22
ip route 10.0.0.0 255.0.0.0 Null0 254 track 100
ip route 10.2.0.0 255.255.0.0 Null0 254 track 100
!
router bgp 10
 address-family ipv4
  network 10.0.0.0 mask 255.0.0.0
  network 10.2.0.0 mask 255.255.0.0
  network 10.2.0.20 mask 255.255.255.255


Example 4-44 verifies that all the routes that were advertised on the hub routers were received at the branch routers. Notice that the DC-specific prefixes are advertised out of the appropriate hub router.

Example 4-44 Verification of Routes Advertised into BGP


R31-Spoke# show bgp ipv4 unicast
! Output omitted for brevity
    Network          Next Hop            Metric LocPrf Weight Path
 * i 0.0.0.0          192.168.200.12           1    100  50000 i
 * i                  192.168.200.22           1    100  50000 i
 * i                  192.168.100.21           1    100  50000 i
 *>i                  192.168.100.11           1    100  50000 i
 * i 10.0.0.0         192.168.200.12           0    100  50000 i
 * i                  192.168.200.22           0    100  50000 i
 * i                  192.168.100.21           0    100  50000 i
 *>i                  192.168.100.11           0    100  50000 i
 * i 10.1.0.0/16      192.168.200.12           0    100  50000 i
 *>i                  192.168.100.11           0    100  50000 i
 * i 10.1.0.10/32     192.168.200.12           3    100  50000 i
 *>i                  192.168.100.11           3    100  50000 i
 * i 10.2.0.0/16      192.168.200.22           0    100  50000 i
 *>i                  192.168.100.21           0    100  50000 i
 * i 10.2.0.20/32     192.168.200.22           3    100  50000 i
 *>i                  192.168.100.21           3    100  50000 i
 *>  10.3.0.31/32     0.0.0.0                  0         32768 ?
 *>  10.3.3.0/24      0.0.0.0                  0         32768 ?
 * i 10.4.0.41/32     192.168.200.22           0    100  50000 ?
 * i                  192.168.200.12           0    100  50000 ?
 * i                  192.168.100.21           0    100  50000 ?
 *>i                  192.168.100.11           0    100  50000 ?
 * i 10.4.4.0/24      192.168.200.22           0    100  50000 ?
 * i                  192.168.200.12           0    100  50000 ?
 * i                  192.168.100.21           0    100  50000 ?
 *>i                  192.168.100.11           0    100  50000 ?


BGP Route Filtering

The hub routers currently advertise all the routes that they have learned to other branches. Now that the BGP table has been injected with network summaries, the advertisement of network prefixes can be restricted to only the summary prefixes.

As a safety precaution, the same prefixes that the hub routers advertise to the branches should be prohibited from being received from a branch router. In addition to those prefixes, the hub routers should not accept the DMVPN tunnel networks.

The best approach is to create multiple prefix lists so that each prefix list correlates with a specific function: default route, enterprise prefix, local DC segment, local MC, or DMVPN tunnel network.

Example 4-45 demonstrates the process for creating a prefix list for each specific function. An outbound route map toward the branches permits only the approved prefixes (default route, enterprise prefix list, DC-specific networks, and local MC). A BGP community can be added to each prefix to assist with additional routing logic at a later time (if desired). The inbound route map denies all the summary routes and the DMVPN tunnel networks.

Example 4-45 Configuration for Outbound and Inbound BGP Filtering


R11 and R12
ip prefix-list BGP-ENTERPRISE-PREFIX seq 10 permit 10.0.0.0/8
ip prefix-list BGP-LOCALDC-PREFIX seq 10 permit 10.1.0.0/16
ip prefix-list BGP-LOCALMC seq 10 permit 10.1.0.10/32
ip prefix-list DEFAULT-ROUTE seq 10 permit 0.0.0.0/0
ip prefix-list TUNNEL-DMVPN seq 10 permit 192.168.100.0/24
ip prefix-list TUNNEL-DMVPN seq 20 permit 192.168.200.0/24


R21 and R22
ip prefix-list BGP-ENTERPRISE-PREFIX seq 10 permit 10.0.0.0/8
ip prefix-list BGP-LOCALDC-PREFIX seq 10 permit 10.2.0.0/16
ip prefix-list BGP-LOCALMC seq 10 permit 10.2.0.20/32
ip prefix-list DEFAULT-ROUTE seq 10 permit 0.0.0.0/0
ip prefix-list TUNNEL-DMVPN seq 10 permit 192.168.100.0/24
ip prefix-list TUNNEL-DMVPN seq 20 permit 192.168.200.0/24


R11 and R21
router bgp 10
 address-family ipv4
  neighbor MPLS-SPOKES route-map BGP-MPLS-SPOKES-IN in
  neighbor MPLS-SPOKES route-map BGP-MPLS-SPOKES-OUT out
!
route-map BGP-MPLS-SPOKES-OUT permit 10
 match ip address prefix-list DEFAULT-ROUTE
route-map BGP-MPLS-SPOKES-OUT permit 20
 match ip address prefix-list BGP-ENTERPRISE-PREFIX
route-map BGP-MPLS-SPOKES-OUT permit 30
 match ip address prefix-list BGP-LOCALDC-PREFIX
route-map BGP-MPLS-SPOKES-OUT permit 40
 match ip address prefix-list BGP-LOCALMC
!
! The first five sequences of the route-map deny network prefixes
! that can cause suboptimal routing or routing loops
route-map BGP-MPLS-SPOKES-IN deny 10
 match ip address prefix-list DEFAULT-ROUTE
route-map BGP-MPLS-SPOKES-IN deny 20
 match ip address prefix-list BGP-ENTERPRISE-PREFIX
route-map BGP-MPLS-SPOKES-IN deny 30
 match ip address prefix-list BGP-LOCALDC-PREFIX
route-map BGP-MPLS-SPOKES-IN deny 40
 match ip address prefix-list BGP-LOCALMC
route-map BGP-MPLS-SPOKES-IN deny 50
 match ip address prefix-list TUNNEL-DMVPN
route-map BGP-MPLS-SPOKES-IN permit 60
 description Allow Everything Else


R12 and R22
router bgp 10
 address-family ipv4
  neighbor INET-SPOKES route-map BGP-INET-SPOKES-IN in
  neighbor INET-SPOKES route-map BGP-INET-SPOKES-OUT out
!
route-map BGP-INET-SPOKES-OUT permit 10
 match ip address prefix-list DEFAULT-ROUTE
route-map BGP-INET-SPOKES-OUT permit 20
 match ip address prefix-list BGP-ENTERPRISE-PREFIX
route-map BGP-INET-SPOKES-OUT permit 30
 match ip address prefix-list BGP-LOCALDC-PREFIX
route-map BGP-INET-SPOKES-OUT permit 40
 match ip address prefix-list BGP-LOCALMC
!
! The first five sequences of the route-map deny network prefixes
! that can cause suboptimal routing or routing loops
route-map BGP-INET-SPOKES-IN deny 10
 match ip address prefix-list DEFAULT-ROUTE
route-map BGP-INET-SPOKES-IN deny 20
 match ip address prefix-list BGP-ENTERPRISE-PREFIX
route-map BGP-INET-SPOKES-IN deny 30
 match ip address prefix-list BGP-LOCALDC-PREFIX
route-map BGP-INET-SPOKES-IN deny 40
 match ip address prefix-list BGP-LOCALMC
route-map BGP-INET-SPOKES-IN deny 50
 match ip address prefix-list TUNNEL-DMVPN
route-map BGP-INET-SPOKES-IN permit 60
 description Allow Everything Else



Note

The number of sequences in the route maps can be reduced by adding multiple conditional matches of the same type (prefix list) to a single sequence; however, this also creates a single point of failure if a sequence is accidentally deleted when making changes.
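
For illustration only (the book's configuration keeps one prefix list per route-map sequence), the five inbound deny sequences could be collapsed into a single sequence by listing multiple prefix lists on one match statement; the lists are evaluated as a logical OR:

route-map BGP-MPLS-SPOKES-IN deny 10
 ! Any prefix matching any of these prefix lists is denied
 match ip address prefix-list DEFAULT-ROUTE BGP-ENTERPRISE-PREFIX BGP-LOCALDC-PREFIX BGP-LOCALMC TUNNEL-DMVPN
route-map BGP-MPLS-SPOKES-IN permit 20
 description Allow Everything Else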


Example 4-46 confirms that only the appropriate routes are being advertised from the DMVPN hub routers. Filtering the other routes reduces the amount of memory needed to maintain the BGP table. NHRP injects more specific routes for spoke-to-spoke tunnels when they are established.

Example 4-46 Verification of Route Filtering on DMVPN Hub Routers


R31-Spoke# show bgp ipv4 unicast
! Output omitted for brevity
     Network          Next Hop            Metric LocPrf Weight Path
 * i 0.0.0.0          192.168.200.12           1    100  50000 i
 * i                  192.168.200.22           1    100  50000 i
 * i                  192.168.100.21           1    100  50000 i
 *>i                  192.168.100.11           1    100  50000 i
 * i 10.0.0.0         192.168.200.12           0    100  50000 i
 * i                  192.168.200.22           0    100  50000 i
 * i                  192.168.100.21           0    100  50000 i
 *>i                  192.168.100.11           0    100  50000 i
 * i 10.1.0.0/16      192.168.200.12           0    100  50000 i
 *>i                  192.168.100.11           0    100  50000 i
 * i 10.1.0.10/32     192.168.200.12           3    100  50000 i
 *>i                  192.168.100.11           3    100  50000 i
 * i 10.2.0.0/16      192.168.200.22           0    100  50000 i
 *>i                  192.168.100.21           0    100  50000 i
 * i 10.2.0.20/32     192.168.200.22           3    100  50000 i
 *>i                  192.168.100.21           3    100  50000 i
 *>  10.3.0.31/32     0.0.0.0                  0         32768 ?
 *>  10.3.3.0/24      0.0.0.0                  0         32768 ?


Redistribution of BGP into OSPF

The last component of route advertisement on the hub routers is the process of redistributing routes from BGP into OSPF. The BGP configuration command bgp redistribute-internal is required because all the branch site routes were learned via an IBGP session.

A route map is required during redistribution to prevent the static network statements from being redistributed into OSPF. The route map can reuse the prefix lists that were used to filter routes.

Example 4-47 displays the configuration for R11, R12, R21, and R22 that redistributes the BGP network prefixes into OSPF. Notice that the first three sequences of the REDIST-BGP-TO-OSPF route map block the enterprise summary, local DC, and default route prefixes from being redistributed.

Example 4-47 BGP Route Advertisement into OSPF


R11, R12, R21, and R22
router bgp 10
 address-family ipv4 unicast
  bgp redistribute-internal
!
router ospf 1
 redistribute bgp 10 subnets route-map REDIST-BGP-TO-OSPF
!
route-map REDIST-BGP-TO-OSPF deny 10
 match ip address prefix-list BGP-ENTERPRISE-PREFIX
route-map REDIST-BGP-TO-OSPF deny 20
 match ip address prefix-list BGP-LOCALDC-PREFIX
route-map REDIST-BGP-TO-OSPF deny 30
 match ip address prefix-list DEFAULT-ROUTE
route-map REDIST-BGP-TO-OSPF permit 40
 description Modify Metric to Prefer MPLS over Internet
 set metric-type type-1


Example 4-48 verifies that the redistribution was successful and that the routes are populating appropriately in the headquarters LAN. Notice that R13 can take either path to reach the branch network sites.

Example 4-48 Verification of Branch Network Prefixes at the Headquarters LAN


R13# show ip route ospf
! Output omitted for brevity
O E1     10.3.0.31/32 [110/2] via 10.1.112.12, 00:01:12, GigabitEthernet1/1
                      [110/2] via 10.1.111.11, 00:01:21, GigabitEthernet1/0
O E1     10.3.3.0/24 [110/2] via 10.1.112.12, 00:01:12, GigabitEthernet1/1
                     [110/2] via 10.1.111.11, 00:01:21, GigabitEthernet1/0
O E1     10.4.0.41/32 [110/2] via 10.1.112.12, 00:01:12, GigabitEthernet1/1
                      [110/2] via 10.1.111.11, 00:01:21, GigabitEthernet1/0
O E1     10.4.4.0/24 [110/2] via 10.1.112.12, 00:01:12, GigabitEthernet1/1
                     [110/2] via 10.1.111.11, 00:01:21, GigabitEthernet1/0
O E1     10.5.0.51/32 [110/2] via 10.1.112.12, 00:01:12, GigabitEthernet1/1
                      [110/2] via 10.1.111.11, 00:01:21, GigabitEthernet1/0
O E1     10.5.0.52/32 [110/2] via 10.1.112.12, 00:01:12, GigabitEthernet1/1
                      [110/2] via 10.1.111.11, 00:01:21, GigabitEthernet1/0
O E1     10.5.5.0/24 [110/2] via 10.1.112.12, 00:01:12, GigabitEthernet1/1
                     [110/2] via 10.1.111.11, 00:01:21, GigabitEthernet1/0
O E1     10.5.12.0/24 [110/2] via 10.1.112.12, 00:01:12, GigabitEthernet1/1
                      [110/2] via 10.1.111.11, 00:01:21, GigabitEthernet1/0


Traffic Steering

An examination of Example 4-46 reveals that the BGP path attributes look the same for all the network prefixes, leaving the BGP best path nondeterministic. As explained earlier, the routing protocol design should accommodate the situation when PfR is in an uncontrolled state and direct traffic across the preferred transport.

Local preference is the second step in identifying the best path in BGP and can be set locally or remotely. Setting the BGP local preference on the hubs allows the routing policy to be set on all the branch devices.

R11 and R21 are the DMVPN hubs for the MPLS transport, which is the preferred transport. R12 and R22 are the DMVPN hubs for the Internet transport, which is the secondary transport. When PfR is in an uncontrolled state, branch routers should prefer Site 1 over Site 2 for establishing connectivity to other branches.

R11 advertises routes with a local preference of 100,000, R21 with a value of 20,000, R12 with a value of 3000, and R22 with a value of 400. All these values are above the default setting of 100 and easily show the first, second, third, and fourth order of preference.

Example 4-49 provides the necessary configuration to obtain the results described above. The local preference must be set for every sequence number.

Example 4-49 Hub Configuration to Set the BGP Path Preference on Branch Routers


R11
! This router should be selected first
route-map BGP-MPLS-SPOKES-OUT permit 10
 set local-preference 100000
route-map BGP-MPLS-SPOKES-OUT permit 20
 set local-preference 100000
route-map BGP-MPLS-SPOKES-OUT permit 30
 set local-preference 100000
route-map BGP-MPLS-SPOKES-OUT permit 40
 set local-preference 100000


R12
! This router should be selected third
route-map BGP-INET-SPOKES-OUT permit 10
 set local-preference 3000
route-map BGP-INET-SPOKES-OUT permit 20
 set local-preference 3000
route-map BGP-INET-SPOKES-OUT permit 30
 set local-preference 3000
route-map BGP-INET-SPOKES-OUT permit 40
 set local-preference 3000


R21
! This router should be selected second
route-map BGP-MPLS-SPOKES-OUT permit 10
 set local-preference 20000
route-map BGP-MPLS-SPOKES-OUT permit 20
 set local-preference 20000
route-map BGP-MPLS-SPOKES-OUT permit 30
 set local-preference 20000
route-map BGP-MPLS-SPOKES-OUT permit 40
 set local-preference 20000


R22
! This router should be selected last
route-map BGP-INET-SPOKES-OUT permit 10
 set local-preference 400
route-map BGP-INET-SPOKES-OUT permit 20
 set local-preference 400
route-map BGP-INET-SPOKES-OUT permit 30
 set local-preference 400
route-map BGP-INET-SPOKES-OUT permit 40
 set local-preference 400


Example 4-50 displays R31’s BGP table after making the changes on the hub routers. The path priorities are easy to identify with the technique shown in the preceding example. Notice that four paths are still shown for the default route and the 10.0.0.0/8 network. There are only two paths for the 10.1.0.0/16 and the 10.2.0.0/16 networks.

Example 4-50 BGP Table Demonstrating Path Preference


R31-Spoke# show bgp ipv4 unicast
! Output omitted for brevity
     Network          Next Hop            Metric LocPrf Weight Path
 * i 0.0.0.0          192.168.200.22           1    400  50000 i
 * i                  192.168.100.21           1  20000  50000 i
 * i                  192.168.200.12           1   3000  50000 i
 *>i                  192.168.100.11           1 100000  50000 i
 * i 10.0.0.0         192.168.200.22           0    400  50000 i
 * i                  192.168.100.21           0  20000  50000 i
 * i                  192.168.200.12           0   3000  50000 i
 *>i                  192.168.100.11           0 100000  50000 i
 * i 10.1.0.0/16      192.168.200.12           0   3000  50000 i
 *>i                  192.168.100.11           0 100000  50000 i
 * i 10.2.0.0/16      192.168.200.22           0    400  50000 i
 *>i                  192.168.100.21           0  20000  50000 i


Ensuring that the network traffic takes a symmetric path (both directions) simplifies troubleshooting. Setting the local preference on the hub routers ensures the path taken from the branch routers but does not influence the return traffic. Example 4-51 provides R13’s routing table, which shows that traffic can go through R11 (MPLS) or through R12 (Internet) on the return path.

Example 4-51 R13 Path Preference


R13# show ip route ospf
! Output omitted for brevity
O E1     10.3.0.31/32 [110/2] via 10.1.112.12, 00:01:12, GigabitEthernet1/1
                      [110/2] via 10.1.111.11, 00:01:21, GigabitEthernet1/0


Setting a higher metric on the OSPF routes as they are redistributed on the Internet routers ensures that the paths are symmetric. Example 4-52 demonstrates the additional configuration to the existing route maps to influence path selection. OSPF prefers a lower-cost path to a higher-cost path.

Example 4-52 Modification to the Route Map to Influence Return Path Traffic


R11 and R21
route-map REDIST-BGP-TO-OSPF permit 40
 description Modify Metric to Prefer MPLS over Internet
 set metric 1000


R12 and R22
route-map REDIST-BGP-TO-OSPF permit 40
 description Modify Metric to Prefer MPLS over Internet
 set metric 2000


Example 4-53 verifies that the changes made to the route map have removed the asymmetric routing. Traffic leaving Site 1 will always take the path through R11 (MPLS).

Example 4-53 Verification of the Path Preference on Internal Routers


R13# show ip route ospf
! Output omitted for brevity
O E1     10.3.3.0/24 [110/1001] via 10.1.111.11, 00:00:09, GigabitEthernet1/0



Note

Ensuring that the traffic is symmetric (uses the same transport in both directions) helps with application classification and WAAS. Multirouter sites like Site 5 should use an FHRP such as HSRP, with the router attached to the primary transport acting as the primary (active) gateway.
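
As a minimal sketch of this idea (the interface and addressing below are assumptions, not taken from the book's configuration), HSRP at Site 5 could place the active gateway on R51, the router attached to the primary (MPLS) transport:

R51 (primary transport)
interface GigabitEthernet1/0
 ! Hypothetical virtual IP on an assumed 10.5.5.0/24 user LAN
 standby 1 ip 10.5.5.1
 standby 1 priority 110
 standby 1 preempt

R52 (secondary transport)
interface GigabitEthernet1/0
 standby 1 ip 10.5.5.1
 standby 1 preempt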


Complete BGP Configuration

The preceding sections explained the logic for deploying BGP for the WAN (DMVPN overlay) with redistribution into OSPF at the hub routers (centralized sites). The components were explained in a step-by-step fashion to provide a thorough understanding of the configuration. Example 4-54 shows the complete routing configuration for the DMVPN hub routers.

Example 4-54 IBGP Hub Router Configuration


R11-Hub
router ospf 1
 redistribute bgp 10 subnets route-map REDIST-BGP-TO-OSPF
 passive-interface default
 no passive-interface GigabitEthernet0/3
 no passive-interface GigabitEthernet1/0
 network 0.0.0.0 255.255.255.255 area 0
!
track 1 ip route 10.1.0.10 255.255.255.255 reachability
track 2 ip route 10.1.0.13 255.255.255.255 reachability
track 3 ip route 10.1.0.12 255.255.255.255 reachability
track 100 list boolean or
 object 1
 object 2
 object 3
!
ip route 10.0.0.0 255.0.0.0 Null0 254 track 100
ip route 10.1.0.0 255.255.0.0 Null0 254 track 100
!
router bgp 10
 bgp router-id 10.1.0.11
 bgp listen range 192.168.100.0/24 peer-group MPLS-SPOKES
 bgp listen limit 254
 neighbor MPLS-SPOKES peer-group
 neighbor MPLS-SPOKES remote-as 10
 neighbor MPLS-SPOKES timers 20 60
 !
 address-family ipv4
  bgp redistribute-internal
  network 0.0.0.0
  network 10.0.0.0
  network 10.1.0.0 mask 255.255.0.0
  network 10.1.0.10 mask 255.255.255.255
  neighbor MPLS-SPOKES activate
  neighbor MPLS-SPOKES send-community
  neighbor MPLS-SPOKES route-reflector-client
  neighbor MPLS-SPOKES next-hop-self all
  neighbor MPLS-SPOKES weight 50000
  neighbor MPLS-SPOKES soft-reconfiguration inbound
  neighbor MPLS-SPOKES route-map BGP-MPLS-SPOKES-IN in
  neighbor MPLS-SPOKES route-map BGP-MPLS-SPOKES-OUT out
  distance bgp 201 19 19
 exit-address-family
!
ip prefix-list BGP-ENTERPRISE-PREFIX seq 10 permit 10.0.0.0/8
ip prefix-list BGP-LOCALDC-PREFIX seq 10 permit 10.1.0.0/16
ip prefix-list BGP-LOCALMC seq 10 permit 10.1.0.10/32
ip prefix-list DEFAULT-ROUTE seq 10 permit 0.0.0.0/0
ip prefix-list TUNNEL-DMVPN seq 10 permit 192.168.100.0/24
ip prefix-list TUNNEL-DMVPN seq 20 permit 192.168.200.0/24
!
route-map BGP-MPLS-SPOKES-OUT permit 10
 match ip address prefix-list DEFAULT-ROUTE
 set local-preference 100000
route-map BGP-MPLS-SPOKES-OUT permit 20
 match ip address prefix-list BGP-ENTERPRISE-PREFIX
 set local-preference 100000
route-map BGP-MPLS-SPOKES-OUT permit 30
 match ip address prefix-list BGP-LOCALDC-PREFIX
 set local-preference 100000
route-map BGP-MPLS-SPOKES-OUT permit 40
 match ip address prefix-list BGP-LOCALMC
 set local-preference 100000
!
route-map BGP-MPLS-SPOKES-IN deny 10
 match ip address prefix-list DEFAULT-ROUTE
route-map BGP-MPLS-SPOKES-IN deny 20
 match ip address prefix-list BGP-ENTERPRISE-PREFIX
route-map BGP-MPLS-SPOKES-IN deny 30
 match ip address prefix-list BGP-LOCALDC-PREFIX
route-map BGP-MPLS-SPOKES-IN deny 40
 match ip address prefix-list BGP-LOCALMC
route-map BGP-MPLS-SPOKES-IN deny 50
 match ip address prefix-list TUNNEL-DMVPN
route-map BGP-MPLS-SPOKES-IN permit 60
 description Allow Everything Else
!
route-map REDIST-BGP-TO-OSPF deny 10
 match ip address prefix-list BGP-ENTERPRISE-PREFIX
route-map REDIST-BGP-TO-OSPF deny 20
 match ip address prefix-list BGP-LOCALDC-PREFIX
route-map REDIST-BGP-TO-OSPF deny 30
 match ip address prefix-list DEFAULT-ROUTE
route-map REDIST-BGP-TO-OSPF permit 40
 description Modify Metric to Prefer MPLS over Internet
 set metric 1000
 set metric-type type-1


R12-Hub
router ospf 1
 redistribute bgp 10 subnets route-map REDIST-BGP-TO-OSPF
 passive-interface default
 no passive-interface GigabitEthernet0/3
 no passive-interface GigabitEthernet1/0
 network 0.0.0.0 255.255.255.255 area 0
!
track 1 ip route 10.1.0.10 255.255.255.255 reachability
track 2 ip route 10.1.0.13 255.255.255.255 reachability
track 3 ip route 10.1.0.11 255.255.255.255 reachability
track 100 list boolean or
 object 1
 object 2
 object 3
!
ip route 10.0.0.0 255.0.0.0 Null0 254 track 100
ip route 10.1.0.0 255.255.0.0 Null0 254 track 100
!
router bgp 10
 bgp router-id 10.1.0.12
 bgp listen range 192.168.200.0/24 peer-group INET-SPOKES
 bgp listen limit 254
 neighbor INET-SPOKES peer-group
 neighbor INET-SPOKES remote-as 10
 neighbor INET-SPOKES timers 20 60
 !
 address-family ipv4
  bgp redistribute-internal
  network 0.0.0.0
  network 10.0.0.0
  network 10.1.0.0 mask 255.255.0.0
  network 10.1.0.10 mask 255.255.255.255
  neighbor INET-SPOKES activate
  neighbor INET-SPOKES send-community
  neighbor INET-SPOKES route-reflector-client
  neighbor INET-SPOKES next-hop-self all
  neighbor INET-SPOKES weight 50000
  neighbor INET-SPOKES soft-reconfiguration inbound
  neighbor INET-SPOKES route-map BGP-INET-SPOKES-IN in
  neighbor INET-SPOKES route-map BGP-INET-SPOKES-OUT out
  distance bgp 201 19 19
 exit-address-family
!
ip prefix-list BGP-ENTERPRISE-PREFIX seq 10 permit 10.0.0.0/8
ip prefix-list BGP-LOCALDC-PREFIX seq 10 permit 10.1.0.0/16
ip prefix-list BGP-LOCALMC seq 10 permit 10.1.0.10/32
ip prefix-list DEFAULT-ROUTE seq 10 permit 0.0.0.0/0
ip prefix-list TUNNEL-DMVPN seq 10 permit 192.168.100.0/24
ip prefix-list TUNNEL-DMVPN seq 20 permit 192.168.200.0/24
!
route-map BGP-INET-SPOKES-OUT permit 10
 match ip address prefix-list DEFAULT-ROUTE
 set local-preference 3000
route-map BGP-INET-SPOKES-OUT permit 20
 match ip address prefix-list BGP-ENTERPRISE-PREFIX
 set local-preference 3000
route-map BGP-INET-SPOKES-OUT permit 30
 match ip address prefix-list BGP-LOCALDC-PREFIX
 set local-preference 3000
route-map BGP-INET-SPOKES-OUT permit 40
 match ip address prefix-list BGP-LOCALMC
 set local-preference 3000
!
route-map REDIST-BGP-TO-OSPF deny 10
 match ip address prefix-list BGP-ENTERPRISE-PREFIX
route-map REDIST-BGP-TO-OSPF deny 20
 match ip address prefix-list BGP-LOCALDC-PREFIX
route-map REDIST-BGP-TO-OSPF deny 30
 match ip address prefix-list DEFAULT-ROUTE
route-map REDIST-BGP-TO-OSPF permit 40
 description Modify Metric to Prefer MPLS over Internet
 set metric 2000
 set metric-type type-1
!
route-map BGP-INET-SPOKES-IN deny 10
 match ip address prefix-list DEFAULT-ROUTE
route-map BGP-INET-SPOKES-IN deny 20
 match ip address prefix-list BGP-ENTERPRISE-PREFIX
route-map BGP-INET-SPOKES-IN deny 30
 match ip address prefix-list BGP-LOCALDC-PREFIX
route-map BGP-INET-SPOKES-IN deny 40
 match ip address prefix-list BGP-LOCALMC
route-map BGP-INET-SPOKES-IN deny 50
 match ip address prefix-list TUNNEL-DMVPN
route-map BGP-INET-SPOKES-IN permit 60
 description Allow Everything Else


R21-Hub
router ospf 1
 redistribute bgp 10 subnets route-map REDIST-BGP-TO-OSPF
 passive-interface default
 no passive-interface GigabitEthernet0/3
 no passive-interface GigabitEthernet1/0
 network 0.0.0.0 255.255.255.255 area 0
!
track 1 ip route 10.2.0.20 255.255.255.255 reachability
track 2 ip route 10.2.0.23 255.255.255.255 reachability
track 3 ip route 10.2.0.22 255.255.255.255 reachability
track 100 list boolean or
 object 1
 object 2
 object 3
!
ip route 10.0.0.0 255.0.0.0 Null0 254 track 100
ip route 10.2.0.0 255.255.0.0 Null0 254 track 100
!
router bgp 10
 bgp router-id 10.2.0.21
 bgp listen range 192.168.100.0/24 peer-group MPLS-SPOKES
 bgp listen limit 254
 neighbor MPLS-SPOKES peer-group
 neighbor MPLS-SPOKES remote-as 10
 neighbor MPLS-SPOKES timers 20 60
 !
 address-family ipv4
  bgp redistribute-internal
  network 0.0.0.0
  network 10.0.0.0
  network 10.2.0.0 mask 255.255.0.0
  network 10.2.0.20 mask 255.255.255.255
  neighbor MPLS-SPOKES activate
  neighbor MPLS-SPOKES send-community
  neighbor MPLS-SPOKES route-reflector-client
  neighbor MPLS-SPOKES next-hop-self all
  neighbor MPLS-SPOKES weight 50000
  neighbor MPLS-SPOKES soft-reconfiguration inbound
  neighbor MPLS-SPOKES route-map BGP-MPLS-SPOKES-IN in
  neighbor MPLS-SPOKES route-map BGP-MPLS-SPOKES-OUT out
  distance bgp 201 19 19
 exit-address-family
!
ip prefix-list BGP-ENTERPRISE-PREFIX seq 10 permit 10.0.0.0/8
ip prefix-list BGP-LOCALDC-PREFIX seq 10 permit 10.2.0.0/16
ip prefix-list BGP-LOCALMC seq 10 permit 10.2.0.20/32
ip prefix-list DEFAULT-ROUTE seq 10 permit 0.0.0.0/0
ip prefix-list TUNNEL-DMVPN seq 10 permit 192.168.100.0/24
ip prefix-list TUNNEL-DMVPN seq 20 permit 192.168.200.0/24
!
route-map BGP-MPLS-SPOKES-OUT permit 10
 match ip address prefix-list DEFAULT-ROUTE
 set local-preference 20000
route-map BGP-MPLS-SPOKES-OUT permit 20
 match ip address prefix-list BGP-ENTERPRISE-PREFIX
 set local-preference 20000
route-map BGP-MPLS-SPOKES-OUT permit 30
 match ip address prefix-list BGP-LOCALDC-PREFIX
 set local-preference 20000
route-map BGP-MPLS-SPOKES-OUT permit 40
 match ip address prefix-list BGP-LOCALMC
 set local-preference 20000
!
route-map BGP-MPLS-SPOKES-IN deny 10
 match ip address prefix-list DEFAULT-ROUTE
route-map BGP-MPLS-SPOKES-IN deny 20
 match ip address prefix-list BGP-ENTERPRISE-PREFIX
route-map BGP-MPLS-SPOKES-IN deny 30
 match ip address prefix-list BGP-LOCALDC-PREFIX
route-map BGP-MPLS-SPOKES-IN deny 40
 match ip address prefix-list BGP-LOCALMC
route-map BGP-MPLS-SPOKES-IN deny 50
 match ip address prefix-list TUNNEL-DMVPN
route-map BGP-MPLS-SPOKES-IN permit 60
 description Allow Everything Else
!
route-map REDIST-BGP-TO-OSPF deny 10
 match ip address prefix-list BGP-ENTERPRISE-PREFIX
route-map REDIST-BGP-TO-OSPF deny 20
 match ip address prefix-list BGP-LOCALDC-PREFIX
route-map REDIST-BGP-TO-OSPF deny 30
 match ip address prefix-list DEFAULT-ROUTE
route-map REDIST-BGP-TO-OSPF permit 40
 description Modify Metric to Prefer MPLS over Internet
 set metric 1000
 set metric-type type-1


R22-Hub
router ospf 1
 redistribute bgp 10 subnets route-map REDIST-BGP-TO-OSPF
 passive-interface default
 no passive-interface GigabitEthernet0/3
 no passive-interface GigabitEthernet1/0
 network 0.0.0.0 255.255.255.255 area 0
!
!
track 1 ip route 10.2.0.20 255.255.255.255 reachability
track 2 ip route 10.2.0.23 255.255.255.255 reachability
track 3 ip route 10.2.0.21 255.255.255.255 reachability
track 100 list boolean or
 object 1
 object 2
 object 3
!
ip route 10.0.0.0 255.0.0.0 Null0 254 track 100
ip route 10.2.0.0 255.255.0.0 Null0 254 track 100
!
router bgp 10
 bgp router-id 10.2.0.22
 bgp listen range 192.168.200.0/24 peer-group INET-SPOKES
 bgp listen limit 254
 neighbor INET-SPOKES peer-group
 neighbor INET-SPOKES remote-as 10
 neighbor INET-SPOKES timers 20 60
 !
 address-family ipv4
  bgp redistribute-internal
  network 0.0.0.0
  network 10.0.0.0
  network 10.2.0.0 mask 255.255.0.0
  network 10.2.0.20 mask 255.255.255.255
  neighbor INET-SPOKES activate
  neighbor INET-SPOKES send-community
  neighbor INET-SPOKES route-reflector-client
  neighbor INET-SPOKES next-hop-self all
  neighbor INET-SPOKES weight 50000
  neighbor INET-SPOKES soft-reconfiguration inbound
  neighbor INET-SPOKES route-map BGP-INET-SPOKES-IN in
  neighbor INET-SPOKES route-map BGP-INET-SPOKES-OUT out
  distance bgp 201 19 19
 exit-address-family
!
ip prefix-list BGP-ENTERPRISE-PREFIX seq 10 permit 10.0.0.0/8
ip prefix-list BGP-LOCALDC-PREFIX seq 10 permit 10.2.0.0/16
ip prefix-list BGP-LOCALMC seq 10 permit 10.2.0.20/32
ip prefix-list DEFAULT-ROUTE seq 10 permit 0.0.0.0/0
ip prefix-list TUNNEL-DMVPN seq 10 permit 192.168.100.0/24
ip prefix-list TUNNEL-DMVPN seq 20 permit 192.168.200.0/24
!
route-map BGP-INET-SPOKES-OUT permit 10
 match ip address prefix-list DEFAULT-ROUTE
 set local-preference 400
route-map BGP-INET-SPOKES-OUT permit 20
 match ip address prefix-list BGP-ENTERPRISE-PREFIX
 set local-preference 400
route-map BGP-INET-SPOKES-OUT permit 30
 match ip address prefix-list BGP-LOCALDC-PREFIX
 set local-preference 400
route-map BGP-INET-SPOKES-OUT permit 40
 match ip address prefix-list BGP-LOCALMC
 set local-preference 400
!
route-map REDIST-BGP-TO-OSPF deny 10
 match ip address prefix-list BGP-ENTERPRISE-PREFIX
route-map REDIST-BGP-TO-OSPF deny 20
 match ip address prefix-list BGP-LOCALDC-PREFIX
route-map REDIST-BGP-TO-OSPF deny 30
 match ip address prefix-list DEFAULT-ROUTE
route-map REDIST-BGP-TO-OSPF permit 40
 description Modify Metric to Prefer MPLS over Internet
 set metric 2000
 set metric-type type-1
!
route-map BGP-INET-SPOKES-IN deny 10
 match ip address prefix-list DEFAULT-ROUTE
route-map BGP-INET-SPOKES-IN deny 20
 match ip address prefix-list BGP-ENTERPRISE-PREFIX
route-map BGP-INET-SPOKES-IN deny 30
 match ip address prefix-list BGP-LOCALDC-PREFIX
route-map BGP-INET-SPOKES-IN deny 40
 match ip address prefix-list BGP-LOCALMC
route-map BGP-INET-SPOKES-IN deny 50
 match ip address prefix-list TUNNEL-DMVPN
route-map BGP-INET-SPOKES-IN permit 60
 description Allow Everything Else


Example 4-55 provides the BGP configuration for the BGP spoke routers.

Example 4-55 IBGP Spoke Router Configuration


R31-Spoke (Directly Attached Sites Only)
router bgp 10
 neighbor MPLS-HUB peer-group
 neighbor MPLS-HUB remote-as 10
 neighbor MPLS-HUB timers 20 60
 neighbor INET-HUB peer-group
 neighbor INET-HUB remote-as 10
 neighbor INET-HUB timers 20 60
 neighbor 192.168.100.11 peer-group MPLS-HUB
 neighbor 192.168.100.21 peer-group MPLS-HUB
 neighbor 192.168.200.12 peer-group INET-HUB
 neighbor 192.168.200.22 peer-group INET-HUB
 !
 address-family ipv4
  redistribute connected route-map REDIST-CONNECTED-TO-BGP
  neighbor MPLS-HUB send-community
  neighbor MPLS-HUB next-hop-self all
  neighbor MPLS-HUB weight 50000
  neighbor MPLS-HUB soft-reconfiguration inbound
  neighbor INET-HUB send-community
  neighbor INET-HUB next-hop-self all
  neighbor INET-HUB weight 50000
  neighbor INET-HUB soft-reconfiguration inbound
  neighbor 192.168.100.11 activate
  neighbor 192.168.100.21 activate
  neighbor 192.168.200.12 activate
  neighbor 192.168.200.22 activate
  distance bgp 201 19 19
 exit-address-family
!
route-map REDIST-CONNECTED-TO-BGP deny 10
 description Block redistribution of DMVPN Tunnel Interfaces
 match interface Tunnel100 Tunnel200
route-map REDIST-CONNECTED-TO-BGP permit 20
 description Redistribute all other prefixes


R41-Spoke (Multiple Routers – Downstream Only)
router ospf 1
 redistribute bgp 10 subnets route-map REDIST-BGP-TO-OSPF
 passive-interface default
 no passive-interface GigabitEthernet0/3
 no passive-interface GigabitEthernet1/0
 network 10.0.0.0 0.255.255.255 area 0
 default-information originate
!
router bgp 10
 neighbor MPLS-HUB peer-group
 neighbor MPLS-HUB remote-as 10
 neighbor MPLS-HUB timers 20 60
 neighbor INET-HUB peer-group
 neighbor INET-HUB remote-as 10
 neighbor INET-HUB timers 20 60
 neighbor 192.168.100.11 peer-group MPLS-HUB
 neighbor 192.168.100.21 peer-group MPLS-HUB
 neighbor 192.168.200.12 peer-group INET-HUB
 neighbor 192.168.200.22 peer-group INET-HUB
 !
 address-family ipv4
  bgp redistribute-internal
  redistribute ospf 1 route-map REDIST-OSPF-TO-BGP
  neighbor MPLS-HUB send-community
  neighbor MPLS-HUB next-hop-self all
  neighbor MPLS-HUB weight 50000
  neighbor MPLS-HUB soft-reconfiguration inbound
  neighbor INET-HUB send-community
  neighbor INET-HUB next-hop-self all
  neighbor INET-HUB weight 50000
  neighbor INET-HUB soft-reconfiguration inbound
  neighbor 192.168.100.11 activate
  neighbor 192.168.100.21 activate
  neighbor 192.168.200.12 activate
  neighbor 192.168.200.22 activate
  distance bgp 201 19 19
 exit-address-family
!
ip prefix-list TUNNEL-DMVPN seq 10 permit 192.168.100.0/24
ip prefix-list TUNNEL-DMVPN seq 20 permit 192.168.200.0/24
!
route-map REDIST-BGP-TO-OSPF permit 10
 description Set a route tag to identify routes redistributed from BGP
 set tag 1
!
route-map REDIST-OSPF-TO-BGP deny 10
 description Block all routes redistributed from BGP
 match tag 1
route-map REDIST-OSPF-TO-BGP deny 15
 match ip address prefix-list TUNNEL-DMVPN
route-map REDIST-OSPF-TO-BGP permit 30
 description Redistribute all other traffic
 match route-type internal
 match route-type external type-1
 match route-type external type-2


R51-Spoke (Multiple Routers – Multiple Transport)
router ospf 1
 redistribute bgp 10 subnets route-map REDIST-BGP-TO-OSPF
 passive-interface default
 no passive-interface GigabitEthernet0/3
 no passive-interface GigabitEthernet1/0
 network 10.0.0.0 0.255.255.255 area 0
 default-information originate
!
router bgp 10
 bgp log-neighbor-changes
 neighbor MPLS-HUB peer-group
 neighbor MPLS-HUB remote-as 10
 neighbor MPLS-HUB timers 20 60
 neighbor 192.168.100.11 peer-group MPLS-HUB
 neighbor 192.168.100.21 peer-group MPLS-HUB
 !
 address-family ipv4
  bgp redistribute-internal
  redistribute ospf 1 route-map REDIST-OSPF-TO-BGP
  neighbor MPLS-HUB send-community
  neighbor MPLS-HUB weight 50000
  neighbor MPLS-HUB soft-reconfiguration inbound
  neighbor 192.168.100.11 activate
  neighbor 192.168.100.21 activate
  distance bgp 201 19 19
 exit-address-family
!
ip prefix-list TUNNEL-DMVPN seq 10 permit 192.168.100.0/24
ip prefix-list TUNNEL-DMVPN seq 20 permit 192.168.200.0/24
!
route-map REDIST-BGP-TO-OSPF permit 10
 description Set a route tag to identify routes redistributed from BGP
 set tag 1
!
route-map REDIST-OSPF-TO-BGP deny 10
 description Block all routes redistributed from BGP
 match tag 1
route-map REDIST-OSPF-TO-BGP deny 15
 match ip address prefix-list TUNNEL-DMVPN
route-map REDIST-OSPF-TO-BGP permit 30
 description Redistribute all other traffic
 match route-type internal
 match route-type external type-1
 match route-type external type-2


R52-Spoke (Multiple Routers – Multiple Transport)
router ospf 1
 redistribute bgp 10 subnets route-map REDIST-BGP-TO-OSPF
 passive-interface default
 no passive-interface GigabitEthernet0/3
 no passive-interface GigabitEthernet1/0
 network 10.0.0.0 0.255.255.255 area 0
 default-information originate
!
router bgp 10
 bgp log-neighbor-changes
 neighbor INET-HUB peer-group
 neighbor INET-HUB remote-as 10
 neighbor INET-HUB timers 20 60
 neighbor 192.168.200.12 peer-group INET-HUB
 neighbor 192.168.200.22 peer-group INET-HUB
 !
 address-family ipv4
  bgp redistribute-internal
  redistribute ospf 1 route-map REDIST-OSPF-TO-BGP
  neighbor INET-HUB send-community
  neighbor INET-HUB weight 50000
  neighbor INET-HUB soft-reconfiguration inbound
  neighbor 192.168.200.12 activate
  neighbor 192.168.200.22 activate
  distance bgp 201 19 19
 exit-address-family
!
ip prefix-list TUNNEL-DMVPN seq 10 permit 192.168.100.0/24
ip prefix-list TUNNEL-DMVPN seq 20 permit 192.168.200.0/24
!
route-map REDIST-BGP-TO-OSPF permit 10
 description Set a route tag to identify routes redistributed from BGP
 set tag 1
!
route-map REDIST-OSPF-TO-BGP deny 10
 description Block all routes redistributed from BGP
 match tag 1
route-map REDIST-OSPF-TO-BGP deny 15
 match ip address prefix-list TUNNEL-DMVPN
route-map REDIST-OSPF-TO-BGP permit 30
 description Redistribute all other traffic
 match route-type internal
 match route-type external type-1
 match route-type external type-2


Advanced BGP Site Selection

For scenarios that require a spoke router to prefer one site over another site, the hub routers include a BGP community with all the network prefixes. The spoke routers then change their routing policy based upon the BGP community to override the local preference that was advertised with the network prefix.

Assume that a spoke router should prefer Site 2-MPLS (R21), then Site 2-Internet (R22), then Site 1-MPLS (R11), then Site 1-Internet (R12). Example 4-56 demonstrates how to set the BGP community on the outbound route map.

Example 4-56 Configuration for Setting BGP Communities on Prefix Advertisement


R11
route-map BGP-MPLS-SPOKES-OUT permit 10
 match ip address prefix-list DEFAULT-ROUTE
 set local-preference 100000
 set community 10:11
route-map BGP-MPLS-SPOKES-OUT permit 20
 match ip address prefix-list BGP-ENTERPRISE-PREFIX
 set local-preference 100000
 set community 10:11
route-map BGP-MPLS-SPOKES-OUT permit 30
 match ip address prefix-list BGP-LOCALDC-PREFIX
 set local-preference 100000
 set community 10:11
route-map BGP-MPLS-SPOKES-OUT permit 40
 match ip address prefix-list BGP-LOCALMC
 set local-preference 100000
 set community 10:11


R12
route-map BGP-INET-SPOKES-OUT permit 10
 match ip address prefix-list DEFAULT-ROUTE
 set local-preference 3000
 set community 10:12
route-map BGP-INET-SPOKES-OUT permit 20
 match ip address prefix-list BGP-ENTERPRISE-PREFIX
 set local-preference 3000
 set community 10:12
route-map BGP-INET-SPOKES-OUT permit 30
 match ip address prefix-list BGP-LOCALDC-PREFIX
 set local-preference 3000
 set community 10:12
route-map BGP-INET-SPOKES-OUT permit 40
 match ip address prefix-list BGP-LOCALMC
 set local-preference 3000
 set community 10:12


R21
route-map BGP-MPLS-SPOKES-OUT permit 10
 match ip address prefix-list DEFAULT-ROUTE
 set local-preference 20000
 set community 10:21
route-map BGP-MPLS-SPOKES-OUT permit 20
 match ip address prefix-list BGP-ENTERPRISE-PREFIX
 set local-preference 20000
 set community 10:21
route-map BGP-MPLS-SPOKES-OUT permit 30
 match ip address prefix-list BGP-LOCALDC-PREFIX
 set local-preference 20000
 set community 10:21
route-map BGP-MPLS-SPOKES-OUT permit 40
 match ip address prefix-list BGP-LOCALMC
 set local-preference 20000
 set community 10:21


R22
route-map BGP-INET-SPOKES-OUT permit 10
 match ip address prefix-list DEFAULT-ROUTE
 set local-preference 400
 set community 10:22
route-map BGP-INET-SPOKES-OUT permit 20
 match ip address prefix-list BGP-ENTERPRISE-PREFIX
 set local-preference 400
 set community 10:22
route-map BGP-INET-SPOKES-OUT permit 30
 match ip address prefix-list BGP-LOCALDC-PREFIX
 set local-preference 400
 set community 10:22
route-map BGP-INET-SPOKES-OUT permit 40
 match ip address prefix-list BGP-LOCALMC
 set local-preference 400
 set community 10:22


To prefer a specific site, the local preference is increased on the paths that carry the desired BGP community. A different local preference value is used to differentiate between the MPLS and Internet transports. The route map contains four sequences:

1. Match routes that have the hub’s BGP community from the primary transport in the primary site. The local preference is set to 123,456, which exceeds the highest local preference for the primary transport.

2. Match routes that have the hub’s BGP community from the secondary transport in the primary site. The local preference is set to 23,456, which exceeds the highest local preference for the secondary transport.

3. Match routes that have the hub’s BGP community from the primary transport in the secondary site. The local preference is set to 3456.

4. Allow all other routes to pass.

Example 4-57 provides the configuration for preferring routes in Site 1 or Site 2.

Example 4-57 BGP Configuration for Hub Preference


Prefer Site1
router bgp 10
 address-family ipv4
  neighbor MPLS-HUB route-map BGP-DEFAULT-ROUTE-PREFER-SITE1 in
  neighbor INET-HUB route-map BGP-DEFAULT-ROUTE-PREFER-SITE1 in
!
route-map BGP-DEFAULT-ROUTE-PREFER-SITE1 permit 10
 match community R11
 set local-preference 123456
route-map BGP-DEFAULT-ROUTE-PREFER-SITE1 permit 20
 match community R12
 set local-preference 23456
route-map BGP-DEFAULT-ROUTE-PREFER-SITE1 permit 30
 match community R21
 set local-preference 3456
route-map BGP-DEFAULT-ROUTE-PREFER-SITE1 permit 40
!
ip community-list standard R11 permit 10:11
ip community-list standard R12 permit 10:12
ip community-list standard R21 permit 10:21


Prefer Site2
router bgp 10
 address-family ipv4
  neighbor MPLS-HUB route-map BGP-DEFAULT-ROUTE-PREFER-SITE2 in
  neighbor INET-HUB route-map BGP-DEFAULT-ROUTE-PREFER-SITE2 in
!
route-map BGP-DEFAULT-ROUTE-PREFER-SITE2 permit 10
 match community R21
 set local-preference 123456
route-map BGP-DEFAULT-ROUTE-PREFER-SITE2 permit 20
 match community R22
 set local-preference 23456
route-map BGP-DEFAULT-ROUTE-PREFER-SITE2 permit 30
 match community R11
 set local-preference 3456
route-map BGP-DEFAULT-ROUTE-PREFER-SITE2 permit 40
!
ip community-list standard R11 permit 10:11
ip community-list standard R21 permit 10:21
ip community-list standard R22 permit 10:22



Note

Matching on BGP communities requires defining a standard community list.


Example 4-58 verifies that R21 (the MPLS hub at the preferred site, Site 2) is identified as the best path. Examining the local preference values in the BGP table confirms that the DMVPN hubs are preferred in the desired order.

Example 4-58 Verification of BGP Path Preference


R31-Spoke# show ip route
! Output omitted for brevity
Gateway of last resort is 192.168.100.21 to network 0.0.0.0

B*    0.0.0.0/0 [19/1] via 192.168.100.21, 00:01:44


R31-Spoke# show bgp ipv4 unicast
! Output omitted for brevity
     Network          Next Hop            Metric LocPrf Weight Path
 * i 0.0.0.0          192.168.200.12           1   3000  50000 i
 * i                  192.168.200.22           1  23456  50000 i
 * i                  192.168.100.11           1   3456  50000 i
 *>i                  192.168.100.21           1 123456  50000 i


FVRF Transport Routing

In Chapter 3, a simple fully specified default static route was used in the FVRF to provide connectivity between the DMVPN encapsulating interfaces. Some scenarios may require additional routing configuration in the FVRF.

These designs incorporate routing in the FVRF context. Most of the routing protocol configuration is exactly the same; the only difference is that the routing protocol instance is associated with the VRF.
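
For example, if an IGP such as OSPF were needed inside the FVRF instead of a static default route, the only change from a global OSPF process is tying the process to the VRF. The following is a minimal sketch; the process ID, the VRF name INET01, and the 172.16.41.0/24 transport subnet are assumptions based on the sample topology.


R41 (illustrative sketch only)
router ospf 100 vrf INET01
 network 172.16.41.0 0.0.0.255 area 0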

The most common use case is an SP that needs to establish a BGP session with the CE device as part of its ability to monitor the circuit. In this situation, MBGP is used. The peering with the SP is established under the IPv4 address family for the FVRF with the command address-family ipv4 vrf vrf-name. This places the router into the VRF address family configuration submode, where the neighbor session is defined and the networks are advertised into the FVRF.


Note

A route distinguisher (RD) must be defined in the VRF configuration to enter into the VRF address family configuration mode. The command rd asn:rd-identifier configures the RD in the VRF definition.


If the IWAN design uses BGP and the ASN of the routing architecture differs from the ASN that the SP expects, a change request must be submitted to the SP, which may take a long time to process. An alternative is the command neighbor ip-address local-as sp-peering-asn no-prepend replace-as dual-as, which keeps the router's actual ASN in the BGP process while presenting the ASN that the SP expects for the peering. Example 4-59 demonstrates the configuration of R41 to peer with the SP BGP router (which expects R41 to use ASN 41).

Example 4-59 MBGP VRF Address Family Configuration for the FVRF Network


R41
vrf definition INET01
 rd 10:41
 address-family ipv4
 exit-address-family
!
router bgp 10
 address-family ipv4 vrf INET01
  redistribute connected
  neighbor 172.16.41.1 remote-as 65000
  neighbor 172.16.41.1 local-as 41 no-prepend replace-as dual-as
  neighbor 172.16.41.1 activate
 exit-address-family


Multicast Routing

Multicast communication is a technology that optimizes network bandwidth utilization and allows for one-to-many or many-to-many communication. Only one copy of each data packet is sent on a link, and the packet is replicated wherever the distribution path forks (splits) on a network device. IP multicast is much more efficient than multiple individual unicast streams or a broadcast stream that propagates everywhere.

A flow of multicast packets is referred to as a stream, and a stream uses a special multicast group address. Multicast group addresses are in the IP address range from 224.0.0.0 to 239.255.255.255. Clients of a multicast stream are called receivers.


Note

Multicast routing is a significant topic unto itself. The focus of this section is to provide basic design guidelines for multicast in the IWAN architecture.


Multicast Distribution Trees

A multicast router creates a distribution tree to define the path for the stream to reach the receivers. Two types of multicast distribution trees are source shortest path trees and shared trees.

Source Trees

A source tree is a multicast tree where the root is the source of the tree and the branches form a distribution tree through the network all the way down to the receivers. When this tree is built, it uses the path calculated by the network from the leaves to the source of the tree. The leaves use the routing table to locate the source. This is the reason why it is also referred to as a shortest path tree (SPT). The forwarding state of the SPT uses the notation (S,G). S is the source of the multicast stream (server) and G is the multicast group address.

Shared Trees

A shared tree is a multicast tree where the root of the shared tree is a router designated as the rendezvous point (RP). Multicast traffic is forwarded down the shared tree according to the group address G to which the packets are addressed, regardless of the source address. The forwarding state on the shared tree uses the notation (*,G).

Rendezvous Points

An RP is a single common root placed at a chosen point on a shared distribution tree as described in the previous sections in this chapter. An RP can be either configured statically in each router or learned through a dynamic mechanism.
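
When the static method is used, the RP assignment can optionally be scoped to a specific multicast group range with a standard access list. The following is a minimal sketch; the RP address matches this book's sample topology, while the access list name and group range are illustrative assumptions.

ip access-list standard RP-GROUPS
 permit 239.0.0.0 0.255.255.255
!
ip pim rp-address 192.168.1.1 RP-GROUPS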

Protocol Independent Multicast (PIM)

A multicast routing protocol is necessary to route multicast traffic throughout the network so that routers can locate and request multicast streams from other routers. Protocol Independent Multicast (PIM) is a multicast routing protocol that routes multicast traffic between network segments. PIM uses any of the unicast routing protocols to identify the path between the source and receivers.

PIM sparse mode (PIM SM) uses the unicast routing table to perform reverse path forwarding (RPF) checks and does not care which routing protocol (including static routes) populates the unicast routing table; therefore, it is protocol independent. The RPF interface is the interface with the path selected by the unicast routing protocols toward the IP address of the source or the RP.

PIM sparse mode uses an explicit join model where upon receipt of an IGMP Join, the IGMP Join is converted into a PIM Join. The PIM Join is sent to the root of the tree, either the RP for shared trees or the router attached to the multicast source for an SPT tree.

Then the multicast stream transmits from the source to the RP and from the RP to the receiver’s router and finally to the receiver. This is a simplified view of how PIM SM achieves multicast forwarding.

Source Specific Multicast (SSM)

In traditional PIM sparse mode and dense mode (DM) networks, receivers use IGMPv2 Joins to signal that they would like to receive multicast traffic for a specific multicast group (G). The IGMPv2 Joins include only the multicast group G that the receiver wants to join; they do not specify the source (S) of the multicast traffic. Because the source is unknown to the receiver, the receiver can accept traffic from any source transmitting to the group. This type of multicast service model is known as Any Source Multicast (ASM).

One of the problems with ASM is that it is possible for a receiver to receive multicast traffic from different sources transmitting to the same group. Even though the application on the receivers typically can filter out the unwanted traffic, network bandwidth and resources are wasted.

PIM SSM provides granularity and allows clients to specify the source of a multicast stream. SSM operates in conjunction with IGMPv3 and requires IGMPv3 support on the multicast routers, the receiver where the application is running, and the application itself.

With SSM, IGMPv3 membership reports (joins) allow a receiver to specify the source S and the group G from which it would like to receive multicast traffic. Because the IGMPv3 join includes the (S,G), referred to as a channel in SSM, the designated router (DR) builds a source tree (SPT) by sending an (S,G) PIM Join directly toward the source. SSM is source tree based, so RPs are not required.
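
The following is a minimal sketch of enabling SSM on a last-hop router with directly attached receivers; the interface name follows the LAN interface convention used in this book's examples and is an assumption.

ip multicast-routing
ip pim ssm default
!
interface GigabitEthernet1/0
 description LAN interface toward receivers
 ip pim sparse-mode
 ip igmp version 3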


Note

The Internet Assigned Numbers Authority (IANA) assigned the 232.0.0.0/8 multicast range to SSM for default use. SSM is allowed to use any other multicast group in the 224.0.0.0/4 multicast range as long as it is not reserved.


Unless explicitly stated, this chapter discusses multicast in the context of ASM.

Multicast Routing Table

The logic for multicast routing of traffic varies from that of unicast routing. A router forwards packets away from the source down the distribution tree. A router organizes the multicast forwarding table based on the reverse path (receivers to the root of the distribution tree).

To avoid routing loops, an incoming packet is accepted only if it arrives on the interface that the router would use to send traffic back toward the source of the packet. The process of checking the inbound interface against the path toward the source is known as an RPF check. The multicast routing table is consulted for RPF checking.


Note

If a multicast packet fails the RPF check, the multicast packet is dropped.


The unicast routing table is assembled from the unicast routing protocol databases. The multicast routing table is blank by default. In the event that a route cannot be matched in the multicast routing table, it is looked up in the unicast routing table.
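
The interface and neighbor that a router selects for the RPF check toward a given source, along with the table that supplied the information, can be verified with the command show ip rpf source-address. The address below is illustrative, and the output is omitted because it varies by platform and topology.

Router# show ip rpf 10.1.1.1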

IWAN Multicast Configuration

The following steps are used to configure multicast routing for all routers in the environment:

Step 1. Enable multicast support for NHRP.

NHRP provides a mapping service of the protocol (tunnel IP) address to the NBMA (that is, transport) address for unicast packets. The same capability is required for multicast traffic. DMVPN hub routers enable multicast NHRP support with the tunnel command ip nhrp map multicast dynamic.

On the DMVPN spoke routers, the multicast keyword is necessary to enable multicast NHRP functions when using the command ip nhrp nhs nhs-address nbma nbma-address multicast.

Step 2. Enable multicast routing.

Multicast routing is enabled with the global command ip multicast-routing.

Step 3. Enable PIM and IGMP.

PIM and IGMPv2 are enabled by entering the command ip pim sparse-mode on all participating LAN and DMVPN tunnel interfaces (including the ones facing the receivers). Enabling PIM SM on receiver-facing interfaces enables IGMPv2 by default.

Step 4. Configure an RP.

The RP is a control plane function that should be placed in the core of the network or close to the multicast sources on a pair of routers. Proper multicast design provides multiple RPs for redundancy purposes. This book uses static RP assignment with an Anycast RP address that resides on two DMVPN hub routers.

The RP is statically assigned with the command ip pim rp-address ip-address.
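
The Anycast RP configuration itself is not shown in this chapter. The following is a minimal sketch for one of the two hub routers, assuming that the shared RP address 192.168.1.1 is hosted on a dedicated loopback on each RP and that MSDP synchronizes active sources between them; the loopback number and the MSDP peer address 10.2.0.21 are assumptions.

R11 (illustrative sketch only)
interface Loopback1
 description Anycast RP address
 ip address 192.168.1.1 255.255.255.255
 ip pim sparse-mode
!
ip pim rp-address 192.168.1.1
ip msdp peer 10.2.0.21 connect-source Loopback0
ip msdp originator-id Loopback0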

Step 5. Enable PIM NBMA mode on hub DMVPN tunnel interfaces.

By default, all PIM messages are exchanged only between a spoke and the hub; spokes do not see each other's PIM messages. This can cause problems when a hub router (R11) forwards multicast traffic to multiple spokes (R31, R41, and R51). As long as R31, R41, and R51 all have multicast subscribers, each builds a multicast tree through R11, and at R11 all of those trees converge on a single tunnel interface.

When a spoke router (R31) stops the stream by sending a PIM Prune message to R11, neither R41 nor R51 sees that Prune, so they cannot send a PIM Prune override. R11 therefore declares the tunnel free of multicast receivers and prunes the stream, leaving R41 and R51 without multicast traffic.

PIM NBMA mode treats every spoke connection as its own link, preventing scenarios like this from happening. PIM NBMA mode is enabled with the tunnel command ip pim nbma-mode.

Step 6. Disable PIM designated router functions on spoke routers.

The PIM designated router resides on a multi-access link and registers active sources with the RP. It is essential for the PIM DR to be able to send and receive packets to and from all routers on the multi-access link. Because multicast traffic flows only between spoke and hub across the DMVPN tunnel, only the hubs should be PIM DRs. A spoke router can be removed from the PIM DR election by setting its priority to zero on the DMVPN tunnel interfaces with the command ip pim dr-priority 0.

Example 4-60 displays R11’s multicast configuration as a reference configuration for the other DMVPN hub routers. Notice that the LAN interface GigabitEthernet 1/0 has PIM enabled on it.

Example 4-60 R11 Multicast Configuration


R11-Hub
ip multicast-routing
!
interface Loopback0
 ip pim sparse-mode
interface Tunnel100
 description DMVPN Tunnel
 ip pim nbma-mode
 ip pim sparse-mode
 ip nhrp map multicast dynamic
interface GigabitEthernet0/1
 description MPLS transport
interface GigabitEthernet0/3
 description Cross-Link to R12
 ip pim sparse-mode
interface GigabitEthernet1/0
 description LAN interface
 ip pim sparse-mode
!
ip pim rp-address 192.168.1.1


Example 4-61 displays R31's multicast configuration as a reference configuration for the other DMVPN spoke routers.

Example 4-61 R31 Multicast Configuration


R31
ip multicast-routing
!
interface Loopback0
 ip pim sparse-mode
interface Tunnel100
 description DMVPN Tunnel for MPLS transport
 ip pim sparse-mode
 ip pim dr-priority 0
 ip nhrp nhs 192.168.100.11 nbma 172.16.11.1 multicast
 ip nhrp nhs 192.168.100.21 nbma 172.16.21.1 multicast
interface Tunnel200
 description DMVPN Tunnel for Internet transport
 ip pim sparse-mode
 ip pim dr-priority 0
 ip nhrp nhs 192.168.200.12 nbma 100.64.12.1 multicast
 ip nhrp nhs 192.168.200.22 nbma 100.64.22.1 multicast
interface GigabitEthernet1/0
 description LAN interface
 ip pim sparse-mode
!
ip pim rp-address 192.168.1.1


Example 4-62 displays the PIM interfaces with the command show ip pim interface and verifies PIM neighbors with the command show ip pim neighbor. Notice that the spoke PIM neighbors have a DR priority of zero.

Example 4-62 Verification of PIM Interfaces and Neighbors


R11-Hub# show ip pim interface
Address          Interface            Ver/   Nbr    Query  DR         DR
                                      Mode   Count  Intvl  Prior      Address
10.1.111.11      GigabitEthernet1/0   v2/S   1      30     1       10.1.111.11
192.168.100.11   Tunnel100            v2/S   3      30     1       192.168.100.11
10.1.12.11       GigabitEthernet0/3   v2/S   1      30     1       10.1.12.11
10.1.0.11        Loopback0            v2/S   0      30     1       10.1.0.11


R11-Hub# show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable,
      L - DR Load-balancing Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
10.1.111.13       GigabitEthernet1/0       18:24:55/00:01:20 v2    1 / S P G
10.1.12.12        GigabitEthernet0/3       18:24:15/00:01:20 v2    1 / S P G
192.168.100.51    Tunnel100                18:23:40/00:01:36 v2    0 / S P G
192.168.100.41    Tunnel100                18:23:42/00:01:21 v2    0 / S P G
192.168.100.31    Tunnel100                18:23:53/00:01:17 v2    0 / S P G


Hub-to-Spoke Multicast Stream

This section describes the behaviors of multicast traffic flowing from a hub to a spoke router. The most significant change in behavior with multicast traffic and DMVPN is that multicast traffic flows only between the devices that use the multicast NHRP map statements. This means that multicast traffic always travels through the hub and does not travel across a spoke-to-spoke tunnel.

Figure 4-8 displays a multicast server (10.1.1.1) transmitting a multicast video stream to the group address of 225.1.1.1. R31 has a client (10.3.3.3) that wants to watch the video stream.

Image

Figure 4-8 Hub-to-Spoke Multicast Stream

The following events correspond to Figure 4-8 when a receiver subscribes to a multicast stream:

1. The receiver (10.3.3.3) attached to R31 sends an IGMP Join for the group address 225.1.1.1.

2. R31 creates an entry in the multicast routing table for 225.1.1.1 and identifies the RP (R11) for this group. R31 then performs an RPF check for the RP’s address (192.168.1.1) and resolves 192.168.100.11 as the RPF neighbor. The PIM Join is sent on the shared tree to the PIM neighbor R11 (192.168.100.11) via the tunnel interface, where it is processed on R11.

3. R11 has shared tree (*, 225.1.1.1) and source tree (10.1.1.1, 225.1.1.1) entries in the multicast routing table, both of which are pruned because there are no active receivers for the 225.1.1.1 stream at this time. R11 registers the PIM Join, removes the prune state from the shared tree and source tree entries, and resets the Z (multicast tunnel) flag. R11 then sends a PIM Join to R13 for the 225.1.1.1 stream.

4. R13 removes the prune on its shared tree (*, 225.1.1.1) and starts to forward packets toward R11. R11 then forwards the packets to R31, which then forwards the packets to the 10.3.3.0/24 LAN segment.

5. The receiver displays the video stream on a PC.

6. As soon as R31 receives a multicast packet on the shared tree (*, 225.1.1.1), it attempts to optimize the multicast path because it is directly attached to a receiver. R31 sends a PIM Join message to the source tree (10.1.1.1, 225.1.1.1). In order to prevent packets from being sent from both streams, R31 sends a PIM Prune message for the shared tree. At this time, both trees use the DMVPN hub as the next hop and have the same RPF neighbor.


Note

The scenario described here assumes that there are no active multicast clients to the 225.1.1.1 stream. If there were other active clients (such as R51), the shared tree (*, 225.1.1.1) and source tree (10.1.1.1, 225.1.1.1) would not be pruned on R11. R11 would use the source tree for forwarding decisions. R31 would use the shared tree initially until optimized with the source tree.


The command show ip mroute [group-address] displays the multicast routing table, as shown in Example 4-63. The first entry in the multicast routing table is for the shared tree (*, 225.1.1.1). The asterisk (*) in the source position indicates any source belonging to that group address. This entry represents the shared tree, which is the path on which multicast data initially arrives from a source. Notice that the source tree entry has the 'T' flag set and an outgoing interface listed, whereas the shared tree entry has the 'P' flag set for pruning and a Null outgoing interface list.

The second entry (10.1.1.1, 225.1.1.1) displays the source tree for the multicast stream 225.1.1.1 from the source 10.1.1.1.

Example 4-63 R13’s Multicast Routing Table for 225.1.1.1


R13# show ip mroute 225.1.1.1
! Output omitted for brevity
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
      T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
Outgoing interface flags: H - Hardware switched, A - Assert winner, p - PIM Join
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 225.1.1.1), 01:00:29/00:01:05, RP 192.168.1.1, flags: SPF
  Incoming interface: GigabitEthernet1/0, RPF nbr 10.1.111.11
  Outgoing interface list: Null

(10.1.1.1, 225.1.1.1), 01:00:29/00:02:49, flags: FT
  Incoming interface: GigabitEthernet0/0, RPF nbr 10.1.1.1
  Outgoing interface list:
    GigabitEthernet1/0, Forward/Sparse, 00:01:49/00:02:50



Note

An active multicast stream has only one incoming interface and one or more outgoing interfaces.


Example 4-64 displays the multicast routing table for 225.1.1.1 on R31 and R11. Notice that R31 has the ‘C’ flag that indicates a receiver is directly attached to it.

Example 4-64 R31's and R11's Multicast Routing Table for 225.1.1.1


R31-Spoke# show ip mroute 225.1.1.1
! Output omitted for brevity

(*, 225.1.1.1), 00:11:46/stopped, RP 192.168.1.1, flags: SJC
  Incoming interface: Tunnel100, RPF nbr 192.168.100.11
  Outgoing interface list:
    GigabitEthernet1/0, Forward/Sparse, 00:08:47/00:02:32

(10.1.1.1, 225.1.1.1), 00:02:20/00:00:39, flags: JT
  Incoming interface: Tunnel100, RPF nbr 192.168.100.11
  Outgoing interface list:
    GigabitEthernet1/0, Forward/Sparse, 00:02:20/00:02:32


R11-Hub# show ip mroute 225.1.1.1
! Output omitted for brevity
(*, 225.1.1.1), 01:08:33/00:02:49, RP 192.168.1.1, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Tunnel100, 192.168.100.31, Forward/Sparse, 00:07:32/00:02:49

(10.1.1.1, 225.1.1.1), 00:10:53/00:03:20, flags: TA
  Incoming interface: GigabitEthernet1/0, RPF nbr 10.1.111.10
  Outgoing interface list:
    Tunnel100, 192.168.100.31, Forward/Sparse, 00:07:32/00:03:23


Spoke-to-Spoke Multicast Traffic

A router’s multicast forwarding behavior operates normally when the source is located behind the hub, but it causes issues when the source is located behind a spoke router, specifically when the receiver is behind a different spoke router and a spoke-to-spoke tunnel exists between the two routers.

Figure 4-9 displays a multicast server (10.4.4.4) transmitting a multicast video stream to the group address of 225.4.4.4. R31 has a client (10.3.3.3) that wants to watch the video stream.

Image

Figure 4-9 Spoke-to-Spoke Multicast Stream

The following events correspond to Figure 4-9 when a receiver subscribes to a multicast stream:

1. The receiver (10.3.3.3) attached to R31 sends an IGMP Join for the group address 225.4.4.4.

2. R31 creates an entry in the multicast routing table for 225.4.4.4 and identifies the RP (R11) for this group. R31 then performs an RPF check for the RP’s address (192.168.1.1) and resolves 192.168.100.11 as the RPF neighbor. The PIM Join is sent on the shared tree (*, 225.4.4.4) to the PIM neighbor R11 (192.168.100.11) via the tunnel interface, where it is processed on R11.

3. R11 has shared tree (*, 225.4.4.4) and source tree (10.4.4.4, 225.4.4.4) entries in the multicast routing table, both of which are pruned. R11 removes the prune entry on the shared tree and sends a PIM Join to R41 for the 225.4.4.4 stream.


Note

Just as in the previous scenario, there are no active multicast clients to 225.4.4.4. If there were other active clients (such as R13), the shared tree (*, 225.4.4.4) and source tree (10.4.4.4, 225.4.4.4) would not be pruned on R11. Step 4 would be skipped and R11 would forward packets toward R31 using the source tree for forwarding decisions.


4. R41 removes the prune on its shared tree and starts to forward packets toward R11, which are then forwarded to R31 and then forwarded to the 10.3.3.0/24 LAN segment.

5. The receiver connected to R31 displays the video stream on a PC.

6. As soon as R31 receives a multicast packet on the shared tree, it attempts to optimize the multicast path because a receiver is attached to it.

A spoke-to-spoke tunnel is established between R31 and R41, and NHRP injects a route for 10.4.4.0/24 with a next hop of 192.168.100.41 on R31.

R31 tries to send a PIM Join message for the source tree via the spoke-to-spoke tunnel, but R31 and R41 cannot directly exchange PIM messages or become PIM neighbors with each other. At the same time, R31 sends a PIM Prune message for the shared tree to prevent duplicate packets.


Note

Multicast packets travel across the DMVPN network only where there is an explicit NHRP mapping that correlates to the spoke-to-hub configuration. PIM messages operate using the multicast group 224.0.0.13.


7. The PIM Prune message succeeds because it was sent through the hub router, and the PIM Join message fails because multicast traffic does not travel across spoke-to-spoke tunnels. R41 receives the PIM Prune message on the shared tree and stops sending multicast packets on the shared tree toward R11.

From R41’s perspective, R31 does not want to receive the multicast stream anymore and has stopped sending the multicast stream down the shared tree.

8. The receiver stops displaying the video stream on the PC.

Example 4-65 displays R31’s unicast routing table. Notice that the 10.4.4.0/24 network was installed by NHRP. A more explicit route for the 10.4.4.4 source does not exist in the multicast routing table, so the NHRP entry from the unicast routing table is used. Notice that the 10.4.4.0/24 entry indicates the spoke-to-spoke tunnel for multicast traffic to and from R41.

Example 4-65 R31’s Routing Table and DMVPN Tunnels


R31-Spoke# show ip route
! Output omitted for brevity
B*    0.0.0.0/0 [19/1] via 192.168.100.11, 00:10:10
      10.0.0.0/8 is variably subnetted, 7 subnets, 4 masks
B        10.0.0.0/8 [19/0] via 192.168.100.11, 00:10:10
B        10.1.0.0/16 [19/0] via 192.168.100.11, 00:10:10
B        10.2.0.0/16 [19/0] via 192.168.100.21, 00:10:10
C        10.3.0.31/32 is directly connected, Loopback0
C        10.3.3.0/24 is directly connected, GigabitEthernet1/0
H        10.4.4.0/24 [250/255] via 192.168.100.41, 00:03:44, Tunnel100


R31-Spoke# show dmvpn detail
! Output omitted for brevity
# Ent  Peer NBMA Addr Peer Tunnel Add State  UpDn Tm Attrb    Target Network
----- --------------- --------------- ----- -------- ----- -----------------
    1 172.16.31.1      192.168.100.31   IKE 00:04:51   DLX        10.3.3.0/24
    2 172.16.41.1      192.168.100.41    UP 00:04:51   DT1        10.4.4.0/24
      172.16.41.1      192.168.100.41    UP 00:04:51   DT1  192.168.100.41/32
    1 172.16.11.1      192.168.100.11    UP 00:11:32     S  192.168.100.11/32
    1 172.16.21.1      192.168.100.21    UP 00:11:32     S  192.168.100.21/32


Example 4-66 displays R31’s multicast routing table for the 225.4.4.4 group. Notice the difference in RPF neighbors on the shared tree versus the source tree.

Example 4-66 R31’s Multicast Routing Table for the 225.4.4.4 Group


R31-Spoke# show ip mroute 225.4.4.4
! Output omitted for brevity
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

 (*, 225.4.4.4), 00:02:35/stopped, RP 192.168.1.1, flags: SJC
 Incoming interface: Tunnel100, RPF nbr 192.168.100.11
 Outgoing interface list:
    GigabitEthernet1/0, Forward/Sparse, 00:02:35/00:02:38

(10.4.4.4, 225.4.4.4), 00:02:35/00:00:24, flags: JT
  Incoming interface: Tunnel100, RPF nbr 192.168.100.41
  Outgoing interface list:
    GigabitEthernet1/0, Forward/Sparse, 00:02:35/00:02:38


The problem occurs because PIM tries to optimize the multicast traffic flow over the shortest path, which is a source tree. Routes in the multicast routing table that have the ‘T’ flag set indicate that PIM has tried to move the stream from a shared path tree to a source tree.

There are two solutions to the problem:

Image Change the SPT threshold: Cisco routers try to initiate a changeover to the source tree upon receipt of the first packet. This behavior can be disabled so that the router never tries to switch to a source tree.

Image Modify the multicast routing table: Creating a multicast route for the source’s IP address allows a different next-hop IP address to be used for multicast traffic versus unicast network traffic. The route needs to be as specific as the NHRP route that is injected for the source LAN network.

Modify the SPT Threshold

Disabling PIM’s ability to switch over from a shared tree to a source tree forces multicast traffic to always flow through the RP. Because the RP is placed behind the DMVPN hub router, multicast traffic never tries to flow across the spoke-to-spoke DMVPN tunnel.

The command ip pim spt-threshold infinity ensures that the source-based distribution tree is never used. The configuration needs to be placed on all routers in the remote LANs that have receivers attached, as shown in Example 4-67. If the command is missed on a router with an attached receiver, that router tries to switch to a source-based tree, which stops traffic for all routers behind that spoke router.

Example 4-67 Disabling the SPT Threshold Configuration


R31, R41, R51 and R52
ip pim spt-threshold infinity


Example 4-68 displays the multicast routing table for R31, R11, and R41 for the 225.4.4.4 group address. Only the shared tree exists on R31 because the SPT switchover has been disabled. R41 forwards the stream on the source tree toward R11 (its shared tree entry is pruned), and R11 receives packets on the source tree and forwards them toward R31.

Example 4-68 Shared Tree Routing Table for the 225.4.4.4 Stream


R31-Spoke# show ip mroute 225.4.4.4
! Output omitted for brevity
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       Z - Multicast Tunnel, z - MDT-data group sender,

(*, 225.4.4.4), 00:00:31/00:02:28, RP 192.168.1.1, flags: SC
  Incoming interface: Tunnel100, RPF nbr 192.168.100.11
  Outgoing interface list:
    GigabitEthernet1/0, Forward/Sparse, 00:00:31/00:02:28


R11-Hub# show ip mroute 225.4.4.4
! Output omitted for brevity
(*, 225.4.4.4), 00:00:55/00:02:56, RP 192.168.1.1, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Tunnel100, 192.168.100.31, Forward/Sparse, 00:00:33/00:02:56

(10.4.4.4, 225.4.4.4), 00:00:55/00:02:04, flags: TA
  Incoming interface: Tunnel100, RPF nbr 192.168.100.41
  Outgoing interface list:
    Tunnel100, 192.168.100.31, Forward/Sparse, 00:00:33/00:02:56


R41-Spoke# show ip mroute 225.4.4.4
! Output omitted for brevity
(*, 225.4.4.4), 00:01:57/stopped, RP 192.168.1.1, flags: SPF
  Incoming interface: Tunnel100, RPF nbr 192.168.100.11
  Outgoing interface list: Null

(10.4.4.4, 225.4.4.4), 00:01:57/00:01:02, flags: FT
  Incoming interface: GigabitEthernet1/0, RPF nbr 10.4.4.4
  Outgoing interface list:
    Tunnel100, Forward/Sparse, 00:01:34/00:02:31


The pitfalls of disabling the SPT threshold are the following:

Image It is not applicable to SSM.

Image It must be configured on all multicast-enabled routers on spoke LANs. It is the last-hop router that tries to join the source tree.

Image It applies to all multicast traffic and is not selective based on the stream.

Image It prevents the creation of (S,G) entries, which reduces the granularity of show commands for troubleshooting.

Modify the Multicast Routing Table

The other solution is to modify the multicast routing table so that the multicast stream's source network is reached via the DMVPN hub. Modifying the multicast routing table does not affect unicast traffic forwarding. The route for the source LAN network must be added to the multicast routing table.

Static multicast routes can be placed on all the spoke routers, but it is not a scalable solution. The best solution is to use the multicast address family of BGP. Establishing a BGP session was explained earlier; the only difference is that the address-family ipv4 multicast command initializes the multicast address family.
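
For reference, a single static mroute on R31 that points the RPF lookup for R41's LAN network at the MPLS hub is shown below; it is as specific as the NHRP-injected route and is included only to illustrate the syntax, because the MBGP approach that follows is the recommended solution.

R31 (illustrative sketch only)
ip mroute 10.4.4.0 255.255.255.0 192.168.100.11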

The hub routers act as route reflectors for the spoke routers. Just as in unicast routing, the spoke and hub routers set next-hop-self so that the hub's IP address is used as the next-hop address. Last, the spoke router that hosts the source advertises the source's LAN network into BGP.

Example 4-69 displays the multicast BGP configuration for the hub routers.

Example 4-69 Hub Multicast BGP Configuration


R11 and R21
router bgp 10
 address-family ipv4 multicast
  neighbor MPLS-SPOKES activate
  neighbor MPLS-SPOKES next-hop-self all
  neighbor MPLS-SPOKES route-reflector-client  


R12 and R22
router bgp 10
 address-family ipv4 multicast
  neighbor INET-SPOKES activate
  neighbor INET-SPOKES next-hop-self all
  neighbor INET-SPOKES route-reflector-client


Example 4-70 displays the multicast configuration for the spoke routers.

Example 4-70 Spoke Multicast BGP Configuration


R31, R41 and R51
router bgp 10
 address-family ipv4 multicast
  neighbor 192.168.100.11 activate
  neighbor 192.168.100.21 activate
  neighbor MPLS-HUB next-hop-self all


R31, R41 and R52
router bgp 10
 address-family ipv4 multicast
  neighbor 192.168.200.12 activate
  neighbor 192.168.200.22 activate
  neighbor INET-HUB next-hop-self all


Example 4-71 displays R41’s multicast BGP advertisement of the 10.4.4.0/24 network.

Example 4-71 R41’s Advertisement of the 10.4.4.0/24 Network in the Multicast BGP Table


R41
router bgp 10
 address-family ipv4 multicast
  network 10.4.4.0 mask 255.255.255.0


Example 4-72 verifies that R31 receives the 10.4.4.0/24 network from all four hub routers in the multicast BGP table.

Example 4-72 Verification of the Multicast BGP Table


R31-Spoke# show bgp ipv4 multicast
BGP table version is 21, local router ID is 10.3.0.31
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
              r RIB-failure, S Stale, m multipath, b backup-path, f RT-Filter,
              x best-external, a additional-path, c RIB-compressed,
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found

     Network          Next Hop            Metric LocPrf Weight Path
 * i 10.4.4.0/24      192.168.200.22           0    100      0 i
 * i                  192.168.200.12           0    100      0 i
 * i                  192.168.100.21           0    100      0 i
 *>i                  192.168.100.11           0    100      0 i


Example 4-73 displays the multicast routing table for R31 and R11. Notice the Mbgp indication in the multicast routing table entries for 225.4.4.4.

Example 4-73 225.4.4.4 Multicast Routing Table After Multicast BGP


R31-Spoke# show ip mroute 225.4.4.4
! Output omitted for brevity
 (*, 225.4.4.4), 00:49:28/stopped, RP 192.168.1.1, flags: SJC
  Incoming interface: Tunnel100, RPF nbr 192.168.100.11
  Outgoing interface list:
    GigabitEthernet1/0, Forward/Sparse, 00:49:28/00:02:33

(10.4.4.4, 225.4.4.4), 00:24:23/00:02:55, flags: JT
  Incoming interface: Tunnel100, RPF nbr 192.168.100.11, Mbgp
  Outgoing interface list:
    GigabitEthernet1/0, Forward/Sparse, 00:24:23/00:02:33


R11-Hub# show ip mroute 225.4.4.4
! Output omitted for brevity
(*, 225.4.4.4), 00:48:43/00:03:23, RP 192.168.1.1, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Tunnel100, 192.168.100.31, Forward/Sparse, 00:13:55/00:03:23

(10.4.4.4, 225.4.4.4), 00:48:43/00:02:45, flags: TA
  Incoming interface: Tunnel100, RPF nbr 192.168.100.41, Mbgp
  Outgoing interface list:
    Tunnel100, 192.168.100.31, Forward/Sparse, 00:10:44/00:03:23


Example 4-74 verifies that the advertisement in BGP has no impact on the multicast routing entry for the server connected to R13 on the 10.1.1.0/24 network.

Example 4-74 225.1.1.1 Multicast Routing Table After Multicast BGP


R31-Spoke# show ip mroute 225.1.1.1
! Output omitted for brevity
(*, 225.1.1.1), 00:00:16/stopped, RP 192.168.1.1, flags: SJC
  Incoming interface: Tunnel100, RPF nbr 192.168.100.11
  Outgoing interface list:
    GigabitEthernet1/0, Forward/Sparse, 00:00:16/00:02:43

(10.1.1.1, 225.1.1.1), 00:00:16/00:02:43, flags: JT
  Incoming interface: Tunnel100, RPF nbr 192.168.100.11
  Outgoing interface list:
    GigabitEthernet1/0, Forward/Sparse, 00:00:16/00:02:43


Summary

This chapter focused on the routing protocol design principles and their deployment for EIGRP or BGP to exchange routes across the Intelligent WAN. These are the key concepts of a successful IWAN routing design:

Image Branch sites should not re-advertise routes learned from one hub to another hub. This prevents transit routing at the branches and keeps traffic flows deterministic.

Image Hub routers should advertise summary routes to branch routers to reduce the size of the routing table. This includes a default route for Internet connectivity, enterprise prefixes (includes all branch site locations and LAN networks in the enterprise), DC-specific prefixes, and the optional local PfR MC loopback (to simplify troubleshooting).

Image Hub routers always prefer routes learned from the branch router’s tunnel interface that is attached to the same transport as the hub router.

Image Network traffic should be steered to use the preferred transport through manipulation of the routing protocol’s best-path calculation. This provides optimal flow while PfR is in an uncontrolled state.

Image The protocol configuration should keep variables to a minimum so that the configuration can be deployed via network management tools like Cisco Prime Infrastructure.

Further Reading

To keep the size of the book small, this chapter does not go into explicit detail about routing protocol behaviors, advanced filtering techniques, or multicast routing. Deploying and maintaining an IWAN environment requires an understanding of these concepts depending on your environment’s needs. The book IP Routing on Cisco IOS, IOS XE, and IOS XR that is listed here provides a thorough reference to the concepts covered in this chapter.

Bates, T., and R. Chandra. RFC 1966, “BGP Route Reflection: An Alternative to Full Mesh IBGP.” IETF, June 1996. http://tools.ietf.org/html/rfc1966.

Cisco. “Cisco IOS Software Configuration Guides.” www.cisco.com.

Cisco. “Understanding the Basics of RPF Checking.” www.cisco.com.

Edgeworth, Brad, Aaron Foss, and Ramiro Garza Rios. IP Routing on Cisco IOS, IOS XE, and IOS XR. Indianapolis: Cisco Press, 2014.

Rekhter, Y., T. Li, and S. Hares. RFC 4271. “A Border Gateway Protocol 4 (BGP-4).” IETF, January 2006. http://tools.ietf.org/html/rfc4271.
