Chapter 4
WAN Technologies


THE FOLLOWING CCNA ROUTING AND SWITCHING EXAM OBJECTIVES ARE COVERED IN THIS CHAPTER:

In this chapter, I will cover wide area network (WAN) technologies. This will include a discussion of multilink Point-to-Point Protocol (MLPPP) interfaces, PPP over Ethernet (PPPoE), and Generic Routing Encapsulation (GRE) tunnels. I will also cover the configuration of the Border Gateway Protocol (BGP).

Configure and verify PPP and MLPPP on WAN interfaces using local authentication

Point-to-Point Protocol (PPP) is a Data Link layer protocol that can be used over either asynchronous serial (dial-up) or synchronous serial (ISDN) media. It relies on Link Control Protocol (LCP) to build and maintain data-link connections. Network Control Protocol (NCP) enables multiple Network layer protocols (routed protocols) to be used on a point-to-point connection.

Because HDLC is the default serial encapsulation on Cisco serial links and it works great, why in the world would you choose to use PPP? Well, the basic purpose of PPP is to transport layer 3 packets across a Data Link layer point-to-point link, and it's nonproprietary. So unless you have all Cisco routers, you need PPP on your serial interfaces because the HDLC encapsulation is Cisco proprietary, remember? Plus, since PPP can encapsulate several layer 3 routed protocols and provide authentication, dynamic addressing, and callback, PPP could actually be the best encapsulation solution for you over HDLC anyway.

Figure 4.1 shows the PPP protocol stack compared to the OSI reference model.

PPP contains four main components:

EIA/TIA-232-C, V.24, V.35, and ISDN Physical layer international standards for serial communication

HDLC A method for encapsulating datagrams over serial links

LCP A method of establishing, configuring, maintaining, and terminating the point-to-point connection. It also provides features such as authentication. I'll give you a complete list of these features in the next section.

NCP NCP is a method of establishing and configuring different Network layer protocols for transport across the PPP link. NCP is designed to allow the simultaneous use of multiple Network layer protocols. Two examples of protocols here are Internet Protocol Control Protocol (IPCP) and Cisco Discovery Protocol Control Protocol (CDPCP).

Burn it into your mind that the PPP protocol stack is specified at the Physical and Data Link layers only. NCP is used to allow communication of multiple Network layer protocols by identifying and encapsulating the protocols across a PPP data link.

Figure shows Point-to-Point protocol stack with its main components: Physical layer, high level data link control, link control protocol, and network control protocol.
Figure 4.1 Point-to-Point Protocol stack

Next, we'll cover the options for LCP and PPP session establishment.

Link Control Protocol (LCP) Configuration Options

Link Control Protocol (LCP) offers different PPP encapsulation options, including the following:

Authentication This option tells the calling side of the link to send information that can identify the user. The two methods for this task are PAP and CHAP.

Compression This is used to increase the throughput of PPP connections by compressing the data or payload prior to transmission. PPP decompresses the data frame on the receiving end.

Error detection PPP uses Quality and Magic Number options to ensure a reliable, loop-free data link.

Multilink PPP (MLP) Starting with IOS version 11.1, multilink is supported on PPP links with Cisco routers. This option makes several separate physical paths appear to be one logical path at layer 3. For example, two T1s running multilink PPP would show up as a single 3 Mbps path to a layer 3 routing protocol.

PPP callback On a dial-up connection, PPP can be configured to call back after successful authentication. PPP callback can be a very good thing because it allows us to keep track of usage based upon access charges, for accounting records, and a bunch of other reasons. With callback enabled, a calling router (client) will contact a remote router (server) and authenticate. Predictably, both routers have to be configured for the callback feature for this to work. Once authentication is completed, the remote router will terminate the connection and then reinitiate a connection back to the calling router.
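Callback needs more dialer configuration than I can show here, but so you recognize the interface-level commands involved, here's a minimal, hedged sketch. The Dialer1 interface names are assumptions used only for illustration, and a complete setup also requires dialer map-class and authentication configuration:

Client(config)#interface Dialer1

Client(config-if)#ppp callback request

 

Server(config)#interface Dialer1

Server(config-if)#ppp callback accept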

PPP Session Establishment

When PPP connections are started, the links go through three phases of session establishment, as shown in Figure 4.2:

Illustration shows link establishment phase, authentication phase, and network layer protocol phase in point-to-point protocol session establishment.
Figure 4.2 PPP session establishment

Link-establishment phase LCP packets are sent by each PPP device to configure and test the link. These packets contain a field called Configuration Option that allows each device to see the size of the data, the compression, and authentication. If no Configuration Option field is present, then the default configurations will be used.

Authentication phase If required, either CHAP or PAP can be used to authenticate a link. Authentication takes place before Network layer protocol information is read, and it's also possible that link-quality determination will occur simultaneously.

Network layer protocol phase PPP uses the Network Control Protocol (NCP) to allow multiple Network layer protocols to be encapsulated and sent over a PPP data link. Each Network layer protocol (e.g., IP, IPv6, which are routed protocols) establishes a service with NCP.

PPP Authentication Methods

There are two methods of authentication that can be used with PPP links:

Password Authentication Protocol (PAP) The Password Authentication Protocol (PAP) is the less secure of the two methods. Passwords are sent in clear text, and PAP is performed only upon the initial link establishment. When the PPP link is first established, the remote node repeatedly sends the username and password to the originating router until authentication is acknowledged. Not exactly Fort Knox!

Challenge Handshake Authentication Protocol (CHAP) The Challenge Handshake Authentication Protocol (CHAP) is used at the initial startup of a link and at periodic checkups on the link to ensure that the router is still communicating with the same host.

After PPP finishes its initial link-establishment phase, the local router sends a challenge request to the remote device. The remote device responds with a value calculated using a one-way hash function called MD5. The local router compares this value with its own locally calculated MD5 hash; if the two don't match, the link is immediately terminated.

Configuring PPP on Cisco Routers

Configuring PPP encapsulation on an interface is really pretty straightforward. To configure it from the CLI, use these simple router commands:

Router#config t

Router(config)#int s0

Router(config-if)#encapsulation ppp

Router(config-if)#^Z

Router#

Of course, PPP encapsulation has to be enabled on both interfaces connected to a serial line in order to work, and there are several additional configuration options available to you via the ppp ? command.

Configuring PPP Authentication

After you configure your serial interface to support PPP encapsulation, you can then configure authentication using PPP between routers. But first, you must set the hostname of the router if it hasn't been set already. After that, you set the username and password for the remote router that will be connecting to your router, like this:

Router#config t

Router(config)#hostname RouterA

RouterA(config)#username RouterB password cisco

When using the username command, remember that the username is the hostname of the remote router that's connecting to your router. And it's case sensitive too. Also, the password on both routers must be the same. It's a plain-text password that you can see with a show run command, and you can encrypt the password by using the command service password-encryption. You must have a username and password configured for each remote system you plan to connect to. The remote routers must also be similarly configured with usernames and passwords.

Now, after you've set the hostname, usernames, and passwords, choose either CHAP or PAP as the authentication method:

RouterA#config t

RouterA(config)#int s0

RouterA(config-if)#ppp authentication chap pap

RouterA(config-if)#^Z

RouterA#

If both methods are configured on the same line as I've demonstrated here, then only the first method will be used during link negotiation. The second acts as a backup just in case the first method fails.
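For the link to actually authenticate, the remote router needs a matching configuration. Here's a quick sketch of the RouterB side, assuming its serial interface is also s0 (the interface number is an assumption used just for this illustration):

Router#config t

Router(config)#hostname RouterB

RouterB(config)#username RouterA password cisco

RouterB(config)#int s0

RouterB(config-if)#encapsulation ppp

RouterB(config-if)#ppp authentication chap pap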

There is yet another command you can use if you're using PAP authentication for some reason. The ppp pap sent-username username password password command enables outbound PAP authentication. The local router uses the username and password that the ppp pap sent-username command specifies to authenticate itself to a remote device. The other router must have this same username/password configured as well.
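If you did want to run PAP between RouterA and RouterB, a minimal sketch might look like the following; the interface numbers and the username/password values simply reuse the earlier example, so adjust them for your own devices:

RouterA(config)#int s0

RouterA(config-if)#ppp authentication pap

RouterA(config-if)#ppp pap sent-username RouterA password cisco

 

RouterB(config)#int s0

RouterB(config-if)#ppp authentication pap

RouterB(config-if)#ppp pap sent-username RouterB password cisco

Each router still needs a username entry that matches the name the other side sends, just as with CHAP.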

Verifying and Troubleshooting Serial Links

Now that PPP encapsulation is enabled, you need to verify that it's up and running. First, let's take a look at a figure of a sample nonproduction network serial link. Figure 4.3 shows two routers connected with a point-to-point serial connection, with the DCE side on the Pod1R1 router.

Illustration shows two routers connected to each other via point-to-point serial connection.
Figure 4.3 PPP authentication example

You can start verifying the configuration with the show interface command like this:

Pod1R1#sh int s0/0

Serial0/0 is up, line protocol is up

  Hardware is PowerQUICC Serial

  Internet address is 10.0.1.1/24

  MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec,

     reliability 239/255, txload 1/255, rxload 1/255

  Encapsulation PPP

  loopback not set

  Keepalive set (10 sec)

  LCP Open

  Open: IPCP, CDPCP

[output cut]

The first line of output is important because it tells us that Serial 0/0 is up/up. Notice that the interface encapsulation is PPP and that LCP is open. This means that it has negotiated the session establishment and all is well. The last line tells us that NCP is listening for the protocols IP and CDP, shown with the NCP headers IPCP and CDPCP.

But what would you see if everything isn't so perfect? I'm going to type in the configuration shown in Figure 4.4 to find out.

Illustration shows two routers connected to each other via point-to-point serial connection, representing failed point-to-point protocol authentication.
Figure 4.4 Failed PPP authentication

What's wrong here? Take a look at the usernames and passwords. Do you see the problem now? That's right, the C is capitalized on the Pod1R2 username command found in the configuration of router Pod1R1. This is wrong because the usernames and passwords are case sensitive. Now let's take a look at the show interface command and see what happens:

Pod1R1#sh int s0/0

Serial0/0 is up, line protocol is down

  Hardware is PowerQUICC Serial

  Internet address is 10.0.1.1/24

  MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec,

     reliability 243/255, txload 1/255, rxload 1/255

  Encapsulation PPP, loopback not set

  Keepalive set (10 sec)

  LCP Closed

  Closed: IPCP, CDPCP

First, notice that the first line of output shows us that Serial0/0 is up and line protocol is down. This is because there are no keepalives coming from the remote router. The next thing I want you to notice is that the LCP and NCP are closed because the authentication failed.

Debugging PPP Authentication

To display the CHAP authentication process as it occurs between two routers in the network, just use the command debug ppp authentication.

If your PPP encapsulation and authentication are set up correctly on both routers and your usernames and passwords are all good, then the debug ppp authentication command will display an output that looks like the following output, which is called the three-way handshake:

1d16h: Se0/0 PPP: Using default call direction

1d16h: Se0/0 PPP: Treating connection as a dedicated line

1d16h: Se0/0 CHAP: O CHALLENGE id 219 len 27 from "Pod1R1"

1d16h: Se0/0 CHAP: I CHALLENGE id 208 len 27 from "Pod1R2"

1d16h: Se0/0 CHAP: O RESPONSE id 208 len 27 from "Pod1R1"

1d16h: Se0/0 CHAP: I RESPONSE id 219 len 27 from "Pod1R2"

1d16h: Se0/0 CHAP: O SUCCESS id 219 len 4

1d16h: Se0/0 CHAP: I SUCCESS id 208 len 4

But if the passwords are wrong, as they were previously in the PPP authentication failure example back in Figure 4.4, the output would look something like this:

1d16h: Se0/0 PPP: Using default call direction

1d16h: Se0/0 PPP: Treating connection as a dedicated line

1d16h: %SYS-5-CONFIG_I: Configured from console by console

1d16h: Se0/0 CHAP: O CHALLENGE id 220 len 27 from "Pod1R1"

1d16h: Se0/0 CHAP: I CHALLENGE id 209 len 27 from "Pod1R2"

1d16h: Se0/0 CHAP: O RESPONSE id 209 len 27 from "Pod1R1"

1d16h: Se0/0 CHAP: I RESPONSE id 220 len 27 from "Pod1R2"

1d16h: Se0/0 CHAP: O FAILURE id 220 len 25 msg is "MD/DES compare failed"

PPP with CHAP authentication is a three-way authentication, and if the username and passwords aren't configured exactly the way they should be, then the authentication will fail and the link will go down.

Mismatched WAN Encapsulations

If you have a point-to-point link but the encapsulations aren't the same, the link will never come up. Figure 4.5 shows one link with PPP and one with HDLC.

Illustration shows two routers connected to each other via point-to-point serial connection, representing mismatched WAN encapsulations.
Figure 4.5 Mismatched WAN encapsulations

Look at router Pod1R1 in this output:

Pod1R1#sh int s0/0

Serial0/0 is up, line protocol is down

  Hardware is PowerQUICC Serial

  Internet address is 10.0.1.1/24

  MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec,

     reliability 254/255, txload 1/255, rxload 1/255

  Encapsulation PPP, loopback not set

  Keepalive set (10 sec)

  LCP REQsent

Closed: IPCP, CDPCP

The serial interface is up/down and LCP is sending requests but will never receive any responses because router Pod1R2 is using the HDLC encapsulation. To fix this problem, you would have to go to router Pod1R2 and configure the PPP encapsulation on the serial interface. One more thing: Even though the usernames are configured incorrectly, it doesn't matter because the command ppp authentication chap isn't used under the serial interface configuration. This means that the username command isn't relevant in this example.
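The fix itself is quick; assuming Pod1R2's serial interface is s0/0, as in Figure 4.5, you'd simply change its encapsulation to match:

Pod1R2#config t

Pod1R2(config)#int s0/0

Pod1R2(config-if)#encapsulation ppp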

You can set a Cisco serial interface back to the default of HDLC with the no encapsulation command like this:

Router(config)#int s0/0

Router(config-if)#no encapsulation

*Feb 7 16:00:18.678:%LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0/0, changed state to up

Notice the link came up because it now matches the encapsulation on the other end of the link!

Mismatched IP Addresses

A tricky problem to spot is if you have HDLC or PPP configured on your serial interface but your IP addresses are wrong. Things seem to be just fine because the interfaces will show that they are up. Take a look at Figure 4.6 and see if you can see what I mean—the two routers are connected with different subnets—router Pod1R1 with 10.0.1.1/24 and router Pod1R2 with 10.2.1.2/24.

Illustration shows two routers connected to each other via point-to-point serial connection, representing mismatched IP addresses.
Figure 4.6 Mismatched IP addresses

This will never work. Let's take a look at the output:

Pod1R1#sh int s0/0

Serial0/0 is up, line protocol is up

  Hardware is PowerQUICC Serial

  Internet address is 10.0.1.1/24

  MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec,

     reliability 255/255, txload 1/255, rxload 1/255

  Encapsulation PPP, loopback not set

  Keepalive set (10 sec)

  LCP Open

  Open: IPCP, CDPCP

See that? The IP addresses between the routers are wrong but the link appears to be working just fine. This is because PPP, like HDLC and Frame Relay, is a layer 2 WAN encapsulation, so it doesn't care about IP addresses at all. So yes, the link is up, but you can't use IP across this link since it's misconfigured, or can you? Well, yes and no. If you try to ping, you'll see that this actually works! This is a feature of PPP, but not HDLC or Frame Relay. But just because you can ping to an IP address that's not in the same subnet doesn't mean your network traffic and routing protocols will work. So be careful with this issue, especially when troubleshooting PPP links!
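For example, from Pod1R1 you could ping the remote interface address shown in Figure 4.6, and the pings should succeed even though 10.2.1.2 isn't in Pod1R1's local subnet:

Pod1R1#ping 10.2.1.2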

Take a look at the routing table of Pod1R1 and see if you can find the mismatched IP address problem:

[output cut]

  10.0.0.0/8 is variably subnetted, 2 subnets, 2 masks

C       10.2.1.2/32 is directly connected, Serial0/0

C       10.0.1.0/24 is directly connected, Serial0/0

Interesting! We can see our serial interface S0/0 address of 10.0.1.0/24, but what is that other address on interface S0/0— 10.2.1.2/32? That's our remote router's interface IP address! PPP determines and places the neighbor's IP address in the routing table as a connected interface, which then allows you to ping it even though it's actually configured on a separate IP subnet.

To find and fix this problem, you can use the show running-config, show interfaces, or show ip interface brief command on each router, or you can use the show cdp neighbors detail command:

Pod1R1#sh cdp neighbors detail

-------------------------

Device ID: Pod1R2

Entry address(es):

  IP address: 10.2.1.2

Since layer 1 (Physical) and layer 2 (Data Link) are up/up, you can view and verify the directly connected neighbor's IP address and then solve your problem.
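Once you've spotted the mismatch, the fix is simply to re-address one side so that both serial interfaces share the same subnet. Here's a sketch that corrects Pod1R2; the specific address 10.0.1.2 is just an assumption chosen to land in Pod1R1's 10.0.1.0/24 network:

Pod1R2#config t

Pod1R2(config)#int s0/0

Pod1R2(config-if)#ip address 10.0.1.2 255.255.255.0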

Multilink PPP (MLP)

There are many load-balancing mechanisms available, but this one is free for use on serial WAN links! It provides multi-vendor support and is specified in RFC 1990, which details the fragmentation and packet sequencing specifications.

You can use MLP to connect your home network to an Internet service provider using two traditional modems or to connect a company via two leased lines.

The MLP feature provides a load-balancing functionality over multiple WAN links while allowing for multi-vendor interoperability. It offers support for packet fragmentation, proper sequencing, and load calculation on both inbound and outbound traffic.

MLP allows packets to be fragmented and then sent simultaneously over multiple point-to-point links to the same remote address. It can work over synchronous and asynchronous serial types.

MLP combines multiple physical links into a logical link called an MLP bundle, which is essentially a single, virtual interface that connects to the remote router. None of the links inside the bundle have any knowledge about the traffic on the other links.

The MLP over serial interfaces feature provides us with the following benefits:

Load balancing MLP provides bandwidth on demand, utilizing load balancing on up to 10 links, and can even calculate the load on traffic between specific sites. You don't actually need to make all links the same bandwidth, but doing so is recommended. Another key MLP advantage is that it splits packets and fragments across all links, which reduces latency across the WAN.

Increased redundancy This one is pretty straightforward. . . If a link fails, the others will still transmit and receive.

Link fragmentation and interleaving The fragmentation mechanism in MLP works by fragmenting large packets, then sending the packet fragments over the multiple point-to-point links. Smaller real-time packets are not fragmented. So interleaving basically means real-time packets can be sent in between sending the fragmented, non-real-time packets, which helps reduce delay on the lines. So let's configure MLP now to get a good feel for how it actually works.

Configuring MLP

We're going to use Figure 4.7 to demonstrate how to configure MLP between two routers.

Illustration shows configuration of MLP between two routers Corp and SF. It shows MLP 10.1.1.0slash24 between the two routers.
Figure 4.7 MLP between the Corp and SF routers

But first, I want you to study the configuration of the two serial interfaces on the Corp router that we're going to use for making our bundle:

Corp# show interfaces Serial0/0

Serial0/0 is up, line protocol is up

  Hardware is M4T

  Internet address is 172.16.10.1/30

  MTU 1500 bytes, BW 1544 Kbit/sec, DLY 20000 usec,

     reliability 255/255, txload 1/255, rxload 1/255

  Encapsulation PPP, LCP Open

  Open: IPCP, CDPCP, crc 16, loopback not set

 

Corp# show interfaces Serial1/1

Serial1/1 is up, line protocol is up

  Hardware is M4T

  Internet address is 172.16.10.9/30

  MTU 1500 bytes, BW 1544 Kbit/sec, DLY 20000 usec,

     reliability 255/255, txload 1/255, rxload 1/255

  Encapsulation PPP, LCP Open

  Open: IPCP, CDPCP, crc 16, loopback not set

Did you notice that each serial connection is on a different subnet (they have to be) and that the encapsulation is PPP?

When you configure MLP, you must first remove the IP addresses from your physical interfaces. Then you configure the multilink bundle by creating a multilink interface on both sides of the link and assigning an IP address to that multilink interface. Finally, you place the physical links into the multilink group, which restricts each physical link so that it can join only the designated multilink group interface.

So first I'm going to remove the IP addresses from the physical interfaces that I'm going to include in my PPP bundle.

Corp# config t

Corp(config)# int Serial0/0

Corp(config-if)# no ip address

Corp(config-if)# int Serial1/1

Corp(config-if)# no ip address

Corp(config-if)# end

Corp#

 

SF# config t

SF(config)# int Serial0/0

SF(config-if)# no ip address

SF(config-if)# int Serial0/1

SF(config-if)# no ip address

SF(config-if)# end

SF#

Now we create the multilink interface on each side of the link and configure the MLP commands to enable the bundle.

Corp#config t

Corp(config)# interface Multilink1

Corp(config-if)# ip address 10.1.1.1 255.255.255.0

Corp(config-if)# ppp multilink

Corp(config-if)# ppp multilink group 1

Corp(config-if)# end

 

SF#config t

SF(config)# interface Multilink1

SF(config-if)# ip address 10.1.1.2 255.255.255.0

SF(config-if)# ppp multilink

SF(config-if)# ppp multilink group 1

SF(config-if)# exit

A link joins an MLP bundle only if it negotiates to use the bundle when the connection is established and the identification information it exchanges matches the info for an existing bundle.

When you configure the ppp multilink group command on a link, that link won't be allowed to join any bundle other than the indicated group interface.
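The commands above create the bundle interfaces, but the physical serial interfaces also need to join the bundle before it will come up. Here's a sketch for the Corp side, reusing the interfaces from Figure 4.7 (SF would do the same on its Serial0/0 and Serial0/1); remember that PPP encapsulation is already running on these links:

Corp#config t

Corp(config)#int Serial0/0

Corp(config-if)#ppp multilink

Corp(config-if)#ppp multilink group 1

Corp(config-if)#int Serial1/1

Corp(config-if)#ppp multilink

Corp(config-if)#ppp multilink group 1

Corp(config-if)#end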

Verifying MLP

To verify that your bundle is up and running, just use the show ppp multilink and show interfaces multilink1 commands:

Corp# show ppp multilink

 

Multilink1

  Bundle name: Corp

  Remote Endpoint Discriminator: [1] SF

  Local Endpoint Discriminator: [1] Corp

  Bundle up for 02:12:05, total bandwidth 4188, load 1/255

  Receive buffer limit 24000 bytes, frag timeout 1000 ms

    0/0 fragments/bytes in reassembly list

    0 lost fragments, 53 reordered

    0/0 discarded fragments/bytes, 0 lost received

    0x56E received sequence, 0x572 sent sequence

  Member links: 2 active, 0 inactive (max 255, min not set)

    Se0/0, since 01:32:05

    Se1/1, since 01:31:31

No inactive multilink interfaces

We can see that the physical interfaces, Se0/0 and Se1/1, are members of the logical interface bundle Multilink1. So now we'll verify the status of the interface Multilink1 on the Corp router:

Corp# show int Multilink1

Multilink1 is up, line protocol is up

  Hardware is multilink group interface

  Internet address is 10.1.1.1/24

  MTU 1500 bytes, BW 1544 Kbit/sec, DLY 20000 usec,

     reliability 255/255, txload 1/255, rxload 1/255

Encapsulation PPP, LCP Open, multilink Open

Open: IPCP, CDPCP, loopback not set

 Keepalive set (10 sec)

[output cut]

Exam Essentials

Remember the default serial encapsulation on Cisco routers. Cisco routers use a proprietary High-Level Data Link Control (HDLC) encapsulation on all their serial links by default.

Remember the PPP Data Link layer protocols. The three Data Link layer protocols are Network Control Protocol (NCP), which defines the Network layer protocols; Link Control Protocol (LCP), a method of establishing, configuring, maintaining, and terminating the point-to-point connection; and High-Level Data Link Control (HDLC), the MAC layer protocol that encapsulates the packets.

Be able to troubleshoot a PPP link. Understand that a PPP link between two routers will show as up, and a ping would even work between the routers, even if the layer 3 addresses are wrong.

Configure, verify, and troubleshoot PPPoE client-side interfaces using local authentication

Used with ADSL services, PPPoE (Point-to-Point Protocol over Ethernet) encapsulates PPP frames in Ethernet frames and uses common PPP features like authentication, encryption, and compression. But as I said earlier, it can be trouble. This is especially true if you've got a badly configured firewall!

Basically, PPPoE is a tunneling protocol that layers IP and other protocols running over PPP with the attributes of a PPP link. This is done so protocols can then be used to contact other Ethernet devices and initiate a point-to-point connection to transport IP packets.

Figure 4.8 displays typical usage of PPPoE over ADSL. As you can see, a PPP session is connected from the PC of the end user to the router. Subsequently, the subscriber PC IP address is assigned by the router via IPCP.

Illustration shows the usage of point-to-point protocol over Ethernet, with a point-to-point protocol session connected from the PC to the router.
Figure 4.8 PPPoE with ADSL

Your ISP will typically provide you with a DSL modem, and if your service doesn't include enhanced features, that modem will simply act as a bridge, which means only one host will connect using PPPoE. Instead, you can run the PPPoE client IOS feature on a Cisco router, which will connect multiple PCs on the Ethernet segment that is connected to the router.

Configuring a PPPoE Client

The PPPoE client configuration is simple and straightforward. First, you need to create a dialer interface and then tie it to a physical interface.

Here are the easy steps:

  1. Create a dialer interface using the interface dialer number command.
  2. Instruct the client to use an IP address provided by the PPPoE server with the ip address negotiated command.
  3. Set the encapsulation type to PPP.
  4. Configure the dialer pool and number.
  5. Under the physical interface, use the pppoe-client dial-pool-number number command.

On your PPPoE client router, enter the following commands:

R1# conf t

R1(config)# int dialer1

R1(config-if)# ip address negotiated

R1(config-if)# encapsulation ppp

R1(config-if)# dialer pool 1

R1(config-if)# interface f0/1

R1(config-if)# no ip address

R1(config-if)# pppoe-client dial-pool-number 1

*May 1 1:09:07.540: %DIALER-6-BIND: Interface Vi2 bound to profile Di1

*May 1 1:09:07.541: %LINK-3-UPDOWN: Interface Virtual-Access2, changed state to up

That's it! Now let's verify the interface with the show ip interface brief and the show pppoe session commands:

R1# show ip int brief

Interface                 IP-Address     OK? Method Status            Protocol

FastEthernet0/1           unassigned     YES manual up                    up

<output cut>

Dialer1                    10.10.10.3    YES IPCP   up                    up

Loopback0                  192.168.1.1   YES NVRAM  up                    up

Loopback1                  172.16.1.1    YES NVRAM  up                    up

Virtual-Access1            unassigned    YES unset  up                    up

Virtual-Access2            unassigned    YES unset  up                    up

 

R1#show pppoe session

     1 client session

 

Uniq ID  PPPoE  RemMAC          Port                    VT  VA         State

           SID  LocMAC                                      VA-st      Type

    N/A      4  aacb.cc00.1419  FEt0/1                   Di1 Vi2        UP

                aacb.cc00.1f01                              UP
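Since this objective calls for local authentication, note that you can also supply CHAP credentials under the dialer interface so the PPPoE server can authenticate your client. Here's a minimal sketch; the hostname and password values are placeholders you'd replace with whatever your provider assigns:

R1(config)#int dialer1

R1(config-if)#ppp chap hostname R1

R1(config-if)#ppp chap password cisco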

Exam Essentials

Be able to configure a PPPoE link. The high-level steps include creating a dialer interface and then tying it to a physical interface.

Configure, verify, and troubleshoot GRE tunnel connectivity

Generic Routing Encapsulation (GRE) is a tunneling protocol that can encapsulate many protocols inside IP tunnels. Some examples would be routing protocols such as EIGRP and OSPF and the routed protocol IPv6. Figure 4.9 shows the different pieces of a GRE header.

Illustration shows the structure of a generic routing encapsulation tunnel, with passenger protocol, carrier protocol, and transport delivery protocol.
Figure 4.9 Generic Routing Encapsulation (GRE) tunnel structure

A GRE tunnel interface supports a header for each of the following:

  • A passenger protocol or encapsulated protocols like IP or IPv6, which is the protocol being encapsulated by GRE
  • GRE encapsulation protocol
  • A Transport delivery protocol, typically IP

GRE tunnels have the following characteristics:

  • GRE uses a protocol-type field in the GRE header so any layer 3 protocol can be used through the tunnel.
  • GRE is stateless and has no flow control.
  • GRE offers no security.
  • GRE creates additional overhead for tunneled packets—at least 24 bytes.

Configuring GRE Tunnels

Before you attempt to configure a GRE tunnel, you need to create an implementation plan. Here's a checklist for what you need to configure and implement a GRE:

  1. Use IP addressing.
  2. Create the logical tunnel interfaces.
  3. Specify that you're using GRE tunnel mode under the tunnel interface (this is optional since this is the default tunnel mode).
  4. Specify the tunnel source and destination IP addresses.
  5. Configure an IP address for the tunnel interface.

Let's take a look at how to bring up a simple GRE tunnel. Figure 4.10 shows the network with two routers.

Illustration shows networking with two routers, using generic routing encapsulation configuration.
Figure 4.10 Example of GRE configuration

First, we need to make the logical tunnel with the interface tunnel number command. We can use any number up to 2.14 billion.

Corp(config)#int s0/0/0

Corp(config-if)#ip address 63.1.1.1 255.255.255.252

Corp(config)#int tunnel ?

  <0-2147483647>  Tunnel interface number

Corp(config)#int tunnel 0

*Jan 5 16:58:22.719:%LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel0, changed state to down

Once we have configured our interface and created the logical tunnel, we need to configure the mode and then the transport protocol.

Corp(config-if)#tunnel mode ?

  aurp    AURP TunnelTalk AppleTalk encapsulation

  cayman  Cayman TunnelTalk AppleTalk encapsulation

  dvmrp   DVMRP multicast tunnel

  eon     EON compatible CLNS tunnel

  gre     generic route encapsulation protocol

  ipip    IP over IP encapsulation

  ipsec   IPSec tunnel encapsulation

  iptalk  Apple IPTalk encapsulation

  ipv6    Generic packet tunneling in IPv6

  ipv6ip  IPv6 over IP encapsulation

  nos     IP over IP encapsulation (KA9Q/NOS compatible)

  rbscp   RBSCP in IP tunnel

Corp(config-if)#tunnel mode gre ?

  ip          over IP

  ipv6        over IPv6

  multipoint  over IP (multipoint)

 

Corp(config-if)#tunnel mode gre ip

Now that we've created the tunnel interface, the type, and the transport protocol, we must configure our IP addresses for use inside of the tunnel. Of course, you need to use your actual physical interface IP for the tunnel to send traffic across the Internet, but you also need to configure the tunnel source and tunnel destination addresses.

Corp(config-if)#ip address 192.168.10.1 255.255.255.0

Corp(config-if)#tunnel source 63.1.1.1

Corp(config-if)#tunnel destination 63.1.1.2

 

Corp#sho run interface tunnel 0

Building configuration...

 

Current configuration : 117 bytes

!

interface Tunnel0

 ip address 192.168.10.1 255.255.255.0

 tunnel source 63.1.1.1

 tunnel destination 63.1.1.2

end

Now let's configure the other end of the serial link and watch the tunnel pop up!

SF(config)#int s0/0/0

SF(config-if)#ip address 63.1.1.2 255.255.255.252

SF(config-if)#int t0

SF(config-if)#ip address 192.168.10.2 255.255.255.0

SF(config-if)#tunnel source 63.1.1.2

SF(config-if)#tun destination 63.1.1.1

*May 19 22:46:37.099: %LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel0, changed state to up

Oops—did I forget to set my tunnel mode and transport to GRE and IP on the SF router? No, I didn't need to because it's the default tunnel mode on Cisco IOS. Nice! So, first I set the physical interface IP address (which used a global address even though I didn't have to), then I created the tunnel interface and set the IP address of the tunnel interface. It's really important that you remember to configure the tunnel interface with the actual source and destination IP addresses to use or the tunnel won't come up. In my example, the 63.1.1.2 was the source and 63.1.1.1 was the destination.

Verifying GRE Tunnels

As usual I'll start with my favorite troubleshooting command, show ip interface brief.

Corp#sh ip int brief

Interface        IP-Address      OK? Method Status                Protocol

FastEthernet0/0  10.10.10.5      YES manual up                    up

Serial0/0        63.1.1.1        YES manual up                    up

FastEthernet0/1  unassigned      YES unset  administratively down down

Serial0/1        unassigned      YES unset  administratively down down

Tunnel0          192.168.10.1    YES manual up                    up

In this output, you can see that the tunnel interface is now showing as an interface on my router. You can see the IP address of the tunnel interface, and the Physical and Data Link status show as up/up. So far so good. Let's take a look at the interface with the show interfaces tunnel 0 command.

Corp#sh int tun 0

Tunnel0 is up, line protocol is up

  Hardware is Tunnel

  Internet address is 192.168.10.1/24

  MTU 1514 bytes, BW 9 Kbit, DLY 500000 usec,

     reliability 255/255, txload 1/255, rxload 1/255

  Encapsulation TUNNEL, loopback not set

  Keepalive not set

  Tunnel source 63.1.1.1, destination 63.1.1.2

  Tunnel protocol/transport GRE/IP

    Key disabled, sequencing disabled

    Checksumming of packets disabled

  Tunnel TTL 255

  Fast tunneling enabled

  Tunnel transmit bandwidth 8000 (kbps)

  Tunnel receive bandwidth 8000 (kbps)

The show interfaces command shows the configuration settings and the interface status as well as the IP address, tunnel source, and destination address. The output also shows the tunnel protocol, which is GRE/IP. Last, let's take a look at the routing table with the show ip route command.

Corp#sh ip route

[output cut]

     192.168.10.0/24 is subnetted, 2 subnets

C      192.168.10.0/24 is directly connected, Tunnel0

L      192.168.10.1/32 is directly connected, Tunnel0

     63.0.0.0/30 is subnetted, 2 subnets

C      63.1.1.0 is directly connected, Serial0/0

L      63.1.1.1/32 is directly connected, Serial0/0

The tunnel0 interface shows up as a directly connected interface, and although it's a logical interface, the router treats it as a physical interface, just like serial 0/0 in the routing table.

Corp#ping 192.168.10.2

 

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 192.168.10.2, timeout is 2 seconds:

!!!!!

Success rate is 100 percent (5/5)

Did you notice that I just pinged 192.168.10.2 across the Internet? I hope so! Anyway, there's one last thing I want to cover before we move on to EBGP, and that's troubleshooting a tunnel routing error. If you configure your GRE tunnel and receive this GRE flapping message:

07:11:55: %TUN-5-RECURDOWN:

          Tunnel0 temporarily disabled due to recursive routing

07:11:59: %LINEPROTO-5-UPDOWN:

          Line protocol on Interface Tunnel0, changed state to down

07:12:59: %LINEPROTO-5-UPDOWN:

          Line protocol on Interface Tunnel0, changed state to up

it means that you've misconfigured your tunnel, which will cause your router to try and route to the tunnel destination address using the tunnel interface itself!
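The usual cure is to make sure the route to the tunnel destination is learned over the physical path rather than through the tunnel itself, for example by not advertising the tunnel destination's network across the tunnel. You can also pin a static route toward the physical interface. Here's a sketch using the addresses from Figure 4.10; in our simple lab the destination is directly connected, so treat this purely as an illustration:

Corp(config)#ip route 63.1.1.2 255.255.255.255 Serial0/0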

Exam Essentials

Understand how to configure and verify a GRE tunnel. To configure GRE, first configure the logical tunnel with the interface tunnel number command. Configure the mode and transport, if needed, with the tunnel mode mode protocol command, then configure the IP addresses on the tunnel interfaces, the tunnel source and tunnel destination addresses, and your physical interfaces with global addresses. Verify with the show interfaces tunnel command as well as with ping.

Describe WAN topology options

A physical topology describes the physical layout of the network, in contrast to logical topologies, which describe the path a signal takes through the physical topology. There are four basic topologies for a WAN design.

Point-to-Point

A point-to-point connection is one in which there is a single device, location, or connection at either end. Point-to-point connections can be used as building blocks for more involved topologies such as the hub and spoke and the full mesh, described in the next two sections.

Hub and Spoke

This topology features a single hub (central router) that provides the remote networks with access to a core router. Figure 4.11 illustrates a hub-and-spoke topology.

Image described by caption and surrounding text.
Figure 4.11 Hub and spoke

All communication among the networks travels through the core router. The advantages of a star (hub-and-spoke) physical topology are lower cost and easier administration, but the disadvantages can be significant:

  • The central router (hub) represents a single point of failure.
  • The central router limits the overall performance for access to centralized resources. It is a single pipe that manages all traffic intended either for the centralized resources or for the other regional routers.

Full Mesh

In this topology, each routing node on the edge of a given packet-switching network has a direct path to every other node on the cloud. Figure 4.12 shows a fully meshed topology.

Image described by caption and surrounding text.
Figure 4.12 Fully meshed topology

This configuration clearly provides a high level of redundancy, but the costs are the highest. So a fully meshed topology really isn't viable in large packet-switched networks. Here are some issues you'll contend with using a fully meshed topology:

  • Many virtual circuits are required—one for every connection between routers, which brings up the cost.
  • Configuration is more complex for routers without multicast support in non-broadcast environments.

Partially Meshed Topology

This type of topology reduces the number of routers within a network that have direct connections to all other routers in the topology. Figure 4.13 depicts a partially meshed topology.

Unlike in the full mesh network, all routers are not connected to all other routers, but it still provides more redundancy than a typical hub-and-spoke design will. This is actually considered the most balanced design because it provides more virtual circuits, plus redundancy and performance.

Illustration shows a partially meshed topology, with few routers that are not directly connected to other routers in a network.
Figure 4.13 Partially meshed topology

Single- vs. Dual-Homed

When a single connection is used on one end of a WAN link, using a single network interface, it is called a single-homed connection. When an additional network interface is dedicated to the same WAN link, it is called a dual-homed connection. This is typically done for purposes of redundancy.

In many cases, this concept is applied to the organization's connection to its ISP. Taking this concept a step further, both single-homed and dual-homed connections can be duplicated, with one set of connections to one ISP and another set of connections to a different ISP, providing both link redundancy and ISP redundancy. When this is done with a dual-homed connection to each ISP, they are called dual-multihomed connections. If a single-homed connection is provided to each ISP, it is called a dual-single-homed connection.

Exam Essentials

Remember the various types of serial WAN topologies. The serial WAN topologies that are most widely used are point-to-point, full mesh, and hub and spoke.

Describe WAN access connectivity options

You're probably aware that a WAN can use a number of different connection types available on the market today. Figure 4.14 shows the different WAN connection types that can be used to connect your LANs (made up of data terminal equipment, or DTE) together over the data communication equipment (DCE) network.

Illustration shows dedicated, circuit-switched, and packet-switched as different types of WAN connection.
Figure 4.14 WAN connection types

Let me explain the different WAN connection types in detail now:

Dedicated (leased lines) These are usually referred to as point-to-point or dedicated connections. A leased line is a pre-established WAN communications path that goes from the CPE through the DCE switch, then over to the CPE of the remote site. The CPE enables DTE networks to communicate at any time with no cumbersome setup procedures to muddle through before transmitting data. When you've got plenty of cash, this is definitely the way to go because it uses synchronous serial lines up to 45 Mbps. HDLC and PPP encapsulations are frequently used on leased lines, and we went over those at the beginning of the chapter.

Circuit switching When you hear the term circuit switching, think phone call. The big advantage is cost; most plain old telephone service (POTS) and ISDN dial-up connections are not flat rate, which is their advantage over dedicated lines because you pay only for what you use, and you pay only when the call is established. No data can transfer before an end-to-end connection is established. Circuit switching uses dial-up modems or ISDN and is used for low-bandwidth data transfers. Okay, I know what you're thinking, “Modems? Did he say modems? Aren't those found only in museums now?” After all, with all the wireless technologies available, who would use a modem these days? Well, some people do have ISDN; it's still viable and there are a few who still use a modem now and then. And circuit switching can be used in some of the newer WAN technologies as well.

Packet switching This is a WAN switching method that allows you to share bandwidth with other companies to save money, just like a super old party line, where homes shared the same phone number and line to save money. Packet switching can be thought of as a network that's designed to look like a leased line yet it charges you less, like circuit switching does. As usual, you get what you pay for, and there's definitely a serious downside to this technology. If you need to transfer data constantly, well, just forget about this option and get a leased line instead! Packet switching will only really work for you if your data transfers are bursty, not continuous; think of a highway, where you can only go as fast as the traffic—packet switching is the same thing. Frame Relay and X.25 are packet-switching technologies with speeds that can range from 56 Kbps up to T3 (45 Mbps).

MPLS

MultiProtocol Label Switching (MPLS) is a data-carrying mechanism that emulates some properties of a circuit-switched network over a packet-switched network. MPLS is a switching mechanism that imposes labels (numbers) to packets and then uses them to forward the packets. The labels are assigned on the edge of the MPLS network, and forwarding inside the MPLS network is carried out solely based on the labels. The labels usually correspond to a path to layer 3 destination addresses, which is on par with IP destination-based routing. MPLS was designed to support the forwarding of protocols other than TCP/IP. Because of this, label switching within the network is achieved the same way irrespective of the layer 3 protocol. In larger networks, the result of MPLS labeling is that only the edge routers perform a routing lookup. All the core routers forward packets based on the labels, which makes forwarding the packets through the service provider network faster. This is a big reason most companies have replaced their Frame Relay networks with MPLS service today. Last, you can use Ethernet with MPLS to connect a WAN, and this is called Ethernet over MPLS, or EoMPLS.

Metro Ethernet

Metropolitan-area Ethernet is a metropolitan area network (MAN) that's based on Ethernet standards and can connect a customer to a larger network and the Internet. If available, businesses can use Metro Ethernet to connect their own offices together, which is another very cost-effective connection option. MPLS-based Metro Ethernet networks use MPLS in the ISP by providing an Ethernet or fiber cable to the customer as a connection. From the customer, it leaves the Ethernet cable, jumps onto MPLS, and then Ethernet again on the remote side. This is a smart and thrifty solution that's very popular if you can get it in your area.

Broadband PPPoE

Point-to-Point Protocol over Ethernet encapsulates PPP frames in Ethernet frames and is usually used in conjunction with xDSL services. It gives you a lot of the familiar PPP features like authentication, encryption, and compression, but there's a downside—it has a lower maximum transmission unit (MTU) than standard Ethernet does. If your firewall isn't solidly configured, this little factor can really give you some grief!

Still somewhat popular in the United States, PPPoE's main feature is that it adds a direct connection to Ethernet interfaces while also providing DSL support. It's often used by many hosts on a shared Ethernet interface for opening PPP sessions to various destinations via at least one bridging modem.
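Because of that smaller MTU (PPPoE adds 8 bytes of PPP and PPPoE headers to each frame), it's common practice to set the IP MTU on the PPPoE dialer interface to 1492 bytes. Here's a quick sketch, assuming the dialer1 interface from the PPPoE client configuration earlier in this chapter:

R1(config)#int dialer1

R1(config-if)#ip mtu 1492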

Internet VPN (DMVPN, Site-to-Site VPN, Client VPN)

I'd be pretty willing to bet you've heard the term VPN more than once before. Maybe you even know what one is, but just in case, a virtual private network (VPN) allows the creation of private networks across the Internet, enabling privacy and tunneling of non-TCP/IP protocols. VPNs are used daily to give remote users and disjointed networks connectivity over a public medium like the Internet instead of using more expensive permanent means.

No worries—VPNs aren't really that hard to understand. A VPN fits somewhere between a LAN and WAN, with the WAN often simulating a LAN link because your computer, on one LAN, connects to a different, remote LAN and uses its resources remotely. The key drawback to using VPNs is a big one—security! So the definition of connecting a LAN (or VLAN) to a WAN may sound the same as using a VPN, but a VPN is actually much more.

Here's the difference: A typical WAN connects two or more remote LANs together using a router and someone else's network, like, say, your Internet service provider's. Your local host and router see these networks as remote networks and not as local networks or local resources. This would be a WAN in its most general definition. A VPN actually makes your local host part of the remote network by using the WAN link that connects you to the remote LAN. The VPN will make your host appear as though it's actually local on the remote network. This means that we now have access to the remote LAN's resources, and that access is also very secure!

This may sound a lot like a VLAN definition, and really, the concept is the same: “Take my host and make it appear local to the remote resources.” Just remember this key distinction: For networks that are physically local, using VLANs is a good solution, but for physically remote networks that span a WAN, opt for using VPNs instead.

For a simple VPN example, let's use my home office in Boulder, Colorado. Here, I have my personal host, but I want it to appear as if it's on a LAN in my corporate office in Dallas, Texas, so I can get to my remote servers. VPN is the solution I would opt for to achieve my goal.

Figure 4.15 shows this example of my host using a VPN connection from Boulder to Dallas, which allows me to access the remote network services and servers as if my host were right there on the same VLAN as my servers.

Why is this so important? If you answered, “Because your servers in Dallas are secure, and only the hosts on the same VLAN are allowed to connect to them and use the resources of these servers,” you nailed it! A VPN allows me to connect to these resources by locally attaching to the VLAN through a VPN across the WAN. The other option is to open up my network and servers to everyone on the Internet or another WAN service, in which case my security goes “poof.” So clearly, it's imperative I have a VPN!

Illustration shows a host in Colorado using a VPN connection.
Figure 4.15 Example of using a VPN

DMVPN (Cisco Proprietary)

The Cisco Dynamic Multipoint Virtual Private Network (DMVPN) feature enables you to easily scale large and small IPsec VPNs. The Cisco DMVPN is Cisco's answer to allow a corporate office to connect to branch offices with low cost, easy configuration, and flexibility. DMVPN has one central router, such as a corporate router, which is referred to as the hub, and the branches are called spokes. So the corporate-to-branch connection is referred to as the hub-and-spoke interconnection. Also supported is the spoke-to-spoke design used for branch-to-branch interconnections. If you're thinking this design sounds eerily similar to your old Frame Relay network, you're right! The DMVPN feature enables you to configure a single GRE tunnel interface and a single IPsec profile on the hub router to manage all spoke routers, which keeps the size of the configuration on the hub router basically the same even if you add more spoke routers to the network. DMVPN also allows spoke routers to dynamically create VPN tunnels between them as network data travels from one spoke to another.
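Full DMVPN configuration is beyond the scope of this objective, but to make the single-tunnel-interface idea concrete, here's a minimal, hedged sketch of what the hub's multipoint GRE/NHRP tunnel might look like. The addresses, NHRP network ID, and source interface are assumptions, and the IPsec profile that would protect the tunnel is omitted:

Hub(config)#interface Tunnel0

Hub(config-if)#ip address 10.0.0.1 255.255.255.0

Hub(config-if)#tunnel source Serial0/0

Hub(config-if)#tunnel mode gre multipoint

Hub(config-if)#ip nhrp network-id 1

Hub(config-if)#ip nhrp map multicast dynamic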

Site to Site

Site-to-site VPNs, or intranet VPNs, allow a company to connect its remote sites to the corporate backbone securely over a public medium like the Internet instead of requiring more expensive WAN connections like Frame Relay.

Client VPN (Remote Access)

Remote-access VPNs allow remote users such as telecommuters to securely access the corporate network wherever and whenever they need to.

Extranet VPN

Extranet VPNs allow an organization's suppliers, partners, and customers to be connected to the corporate network in a limited way for business-to-business (B2B) communications.

You'd use an enterprise-managed VPN if your company manages its own VPNs, which happens to be a very popular way of providing this service. To get a picture of this, check out Figure 4.16.

Illustration shows business office partner, regional office, SOHO, mobile worker, and main site of Cisco using enterprise-managed VPN.
Figure 4.16 Enterprise-managed VPNs

Exam Essentials

Understand the term virtual private network. You need to understand why and how to use a VPN between two sites and the purpose that IPsec serves with VPNs.

Describe the VPN types. These include site to site, client or remote access, extranet, and Cisco Dynamic Multipoint Virtual Private Network (DMVPN).

Configure and verify single-homed branch connectivity using eBGP IPv4 (limited to peering and route advertisement using Network command only)

The Border Gateway Protocol (BGP) is perhaps one of the most well-known routing protocols in the world of networking. This is understandable because BGP is the routing protocol that powers the Internet and makes possible what we take for granted: connecting to remote systems on the other side of the country or planet. Because of its pervasive use, it's likely that each of us will have to deal with it at some point in our careers. So it's appropriate that we spend some time learning about BGP.

Configuring BGP

If you're configuring BGP between a customer network and an ISP, this process is called external BGP (EBGP). If you're configuring BGP peers between two routers in the same AS, it's called internal BGP (IBGP) and isn't considered EBGP.

You must have the basic information to configure EBGP:

  • AS numbers (your own, and all remote AS numbers, which must be different)
  • All the neighbors (peers) that are involved in BGP, and IP addressing that is used among the BGP neighbors
  • Networks that need to be advertised into BGP

For an example of configuring EBGP, here's Figure 4.17.

Illustration shows the layout of an external Border Gateway Protocol network. It shows the ISP router connected to the R1 and R2 routers.
Figure 4.17 Example of an EBGP layout

There are three main steps to configure basic BGP:

  1. Define the BGP process.
  2. Establish one or more neighbor relationships.
  3. Advertise the local networks into BGP.

Define the BGP Process

To start the BGP process on a router, use the router bgp AS command. Each process must be assigned a local AS number. There can only be one BGP process in a router, which means that each router can only be in one AS at any given time.

Here is an example:

ISP#config t

ISP(config)#router bgp ?

  <1-65535>  Autonomous system number

ISP(config)#router bgp 1

Notice the AS number can be from 1 to 65,535.

Establish One or More Neighbor Relationships

Since BGP does not automatically discover neighbors like other routing protocols do, you have to explicitly configure them using the neighbor peer-ip-address remote-as peer-as-number command. Here is an example of configuring the ISP router in Figure 4.17:

ISP(config-router)#neighbor 192.168.1.2 remote-as 100

ISP(config-router)#neighbor 192.168.2.2 remote-as 200

Be sure to understand that the command above uses the neighbor's IP address and the neighbor's AS number.

Advertise the Local Networks into BGP

To specify your local networks and advertise them into BGP, you use the network command with the mask keyword and then the subnet mask:

ISP(config-router)#network 10.0.0.0 mask 255.255.255.0

These network numbers must match what is found on the local router's forwarding table exactly, which can be seen with the show ip route or show ip int brief command. For other routing protocols, the network command has a different meaning. For OSPF and EIGRP, for example, the network command indicates the interfaces for which the routing protocol will send and receive route updates. In BGP, the network command indicates which routes should be injected into the BGP table on the local router.

Here's the BGP routing configuration for the R1 and R2 routers:

R1#config t

R1(config)#router bgp 100

R1(config-router)#neighbor 192.168.1.1 remote-as 1

R1(config-router)#network 10.0.1.0 mask 255.255.255.0

 

R2#config t

R2(config)#router bgp 200

R2(config-router)#neighbor 192.168.2.1 remote-as 1

R2(config-router)#network 10.0.2.0 mask 255.255.255.0

That's it! Pretty simple. Now let's verify our configuration.

Verifying EBGP

We'll use the following commands to verify our little EBGP network:

  • show ip bgp summary
  • show ip bgp
  • show ip bgp neighbors

The show ip bgp summary Command

The show ip bgp summary command gives you an overview of the BGP status. Each configured neighbor is listed in the output of the command. The output will display the IP address and AS number of the neighbor, along with the status of the session. You can use this information to verify that BGP sessions are up and established or to verify the IP address and AS number of the configured BGP neighbor.

ISP#sh ip bgp summary

BGP router identifier 10.0.0.1, local AS number 1

BGP table version is 4, main routing table version 6

3 network entries using 396 bytes of memory

3 path entries using 156 bytes of memory

2/2 BGP path/bestpath attribute entries using 368 bytes of memory

3 BGP AS-PATH entries using 72 bytes of memory

0 BGP route-map cache entries using 0 bytes of memory

0 BGP filter-list cache entries using 0 bytes of memory

Bitfield cache entries: current 1 (at peak 1) using 32 bytes of memory

BGP using 1024 total bytes of memory

BGP activity 3/0 prefixes, 3/0 paths, scan interval 60 secs

Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down State/PfxRcd

192.168.1.2     4   100      56      55        4    0    0 00:53:33        1

192.168.2.2     4   200      47      46        4    0    0 00:44:53        1

The first section of the show ip bgp summary command output describes the BGP table and its content:

  • The router ID of the router and local AS number
  • The BGP table version is the version number of the local BGP table. This number is increased every time the table is changed.

The second section of the show ip bgp summary command output is a table in which the current neighbor statuses are shown. Here's information about what you see displayed in the output of this command:

  • IP address of the neighbor
  • BGP version number that is used by the router when communicating with the neighbor (v4)
  • AS number of the remote neighbor
  • Number of messages and updates that have been received from the neighbor since the session was established
  • Number of messages and updates that have been sent to the neighbor since the session was established
  • Version number of the local BGP table that has been included in the most recent update to the neighbor
  • Number of messages that are waiting to be processed in the incoming queue from this neighbor
  • Number of messages that are waiting in the outgoing queue for transmission to the neighbor
  • How long the neighbor has been in the current state and the name of the current state. Interestingly, notice there is no state listed, which is actually what you want because that means the peers are established.
  • Number of received prefixes from the neighbor
  • The ISP router has two established sessions with the following neighbors:
    • 192.168.1.2, which is the IP address of the R1 router and is in AS 100
    • 192.168.2.2, which is the IP address of the R2 router and is in AS 200
  • From each of these neighbors, the ISP router has received one prefix (one network).

Now, for the CCNA objectives, remember that if you see this type of output at the end of the show ip bgp summary command, the BGP session is not established between peers:

Neighbor       V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down State/PfxRcd

192.168.1.2    4   64       0       0        0    0    0  never    Active

Notice the state of Active. Remember, seeing no state output is good! Active means we're actively trying to establish with the peer.

The show ip bgp Command

With the show ip bgp command, the entire BGP table is displayed. A list of information about each route is displayed, so this is a nice command to get quick information on your BGP routes.

ISP#sh ip bgp

BGP table version is 4, local router ID is 10.0.0.1

Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,

r RIB-failure, S Stale

Origin codes: i - IGP, e - EGP, ? - incomplete

 

   Network          Next Hop            Metric LocPrf Weight Path

*> 10.0.0.0/24      0.0.0.0                  0         32768 i

*> 10.0.1.0/24      192.168.1.2              0             0 100 i

*> 10.0.2.0/24      192.168.2.2              0             0 200 i

The output is sorted in network number order, and if the BGP table contains more than one route to the same network, the backup routes are displayed on separate lines. We don't have multiple routes, so none are shown.

The BGP path selection process selects one of the available routes to each of the networks as the best. This route is pointed out by the > character in the left column.

The ISP router has the following networks in the BGP table:

  • 10.0.0.0/24, which is locally originated via the network command in BGP on the ISP router
  • 10.0.1.0/24, which has been advertised from 192.168.1.2 (R1) neighbor
  • 10.0.2.0/24, which has been advertised from 192.168.2.2 (R2) neighbor

Since the command displays all routing information, note that network 10.0.0.0/24, with the next-hop attribute set to 0.0.0.0, is also displayed. The next-hop attribute is set to 0.0.0.0 when you view the BGP table on the router that originates the route in BGP. The 10.0.0.0/24 network is the one I locally configured into BGP on the ISP router.

The show ip bgp neighbors Command

The show ip bgp neighbors command provides more information about BGP connections to neighbors than the show ip bgp command does. This command can be used to get information about the TCP sessions and the BGP parameters of the session, as well as showing the TCP timers and counters, and it's a long output! I'll just give you the top part of the command here:

ISP#sh ip bgp neighbors

BGP neighbor is 192.168.1.2, remote AS 100, external link

BGP version 4, remote router ID 10.0.1.1

BGP state = Established, up for 00:10:55

Last read 00:10:55, last write 00:10:55, hold time is 180, keepalive interval is 60 seconds

Neighbor capabilities:

Route refresh: advertised and received(new)

Address family IPv4 Unicast: advertised and received

Message statistics:

InQ depth is 0

OutQ depth is 0

[output cut]

Notice (and remember!) that you can use the show ip bgp neighbors command to see the hold time agreed on between two BGP peers; in the above example, between the ISP router and R1, the hold time is 180 seconds.

Exam Essentials

Understand how to configure and verify EBGP. There are three main steps to configure basic BGP:

  1. Define the BGP process.
  2. Establish one or more neighbor relationships.
  3. Advertise the local networks into BGP.

Describe basic QoS concepts

Quality of service (QoS) refers to the way network resources are controlled so that the quality of services is maintained. It's basically the ability to give one or more types of traffic priority over others for different applications, data flows, or users so that they can be guaranteed a certain level of performance. QoS is used to manage contention for network resources to deliver a better end-user experience.

QoS methods focus on problems that can affect data as it traverses the network:

Delay Data can run into congested lines or take a less-than-ideal route to the destination, and delays like these can make some applications, such as VoIP, fail. This is the best reason to implement QoS when real-time applications are in use in the network—to prioritize delay-sensitive traffic.

Dropped packets Some routers will drop packets if they receive a packet while their buffers are full. If the receiving application is waiting for the packets but doesn't get them, it will usually request that the packets be retransmitted—another common cause of a service(s) delay. With QoS, when there is contention on a link, less important traffic is delayed or dropped in favor of delay-sensitive business-important traffic.

Error Packets can be corrupted in transit and arrive at the destination in an unacceptable format, again requiring retransmission and resulting in delays for traffic such as video and voice.

Jitter Not every packet takes the same route to the destination, so some will be more delayed than others if they travel through a slower or busier network connection. The variation in packet delay is called jitter, and this can have a nastily negative impact on programs that communicate in real time.

Out-of-Order Delivery Out-of-order delivery is also a result of packets taking different paths through the network to their destinations. The application at the receiving end needs to put them back together in the right order for the message to be completed. So if there are significant delays, or the packets are reassembled out of order, users will probably notice degradation of an application's quality.

QoS can ensure that applications with a required level of predictability will receive the necessary bandwidth to work properly. Clearly, on networks with excess bandwidth, this is not a factor, but the more limited your bandwidth is, the more important a concept like this becomes!

In the following sections, we'll be covering these important mechanisms:

  • Classification and marking tools
  • Policing, shaping, and re-marking tools
  • Congestion management (or scheduling) tools
  • Link-specific tools

So let's take a deeper look at each mechanism now.

Marking

A classifier is a tool that inspects a field within a packet to identify the type of traffic it is carrying. This is done so that QoS can determine which traffic class the packet belongs to and how it should be treated. It's important that classification isn't a constant cycle for the same traffic because it takes up time and resources. Traffic is then directed to a policy-enforcement mechanism, referred to as policing, for its specific type.

Policy enforcement mechanisms include marking, queuing, policing, and shaping, and there are various layer 2 and layer 3 fields in a frame and packet for marking traffic. You are definitely going to have to understand these marking techniques to meet the objectives, so here we go:

Class of Service (CoS) An Ethernet frame marking at layer 2 that contains 3 bits. It's called the Priority Code Point (PCP) within an Ethernet frame header when VLAN-tagged frames, as defined by IEEE 802.1Q, are used.

Type of Service (ToS) ToS comprises 8 bits, 3 of which are designated as the IP precedence field in an IPv4 packet header. The IPv6 header field is called Traffic Class.

Differentiated Services Code Point (DSCP or DiffServ) One of the methods we can use for classifying and managing network traffic and providing quality of service (QoS) on modern IP networks is DSCP. This technology uses a 6-bit differentiated services code point in the 8-bit Differentiated Services field (DS field) in the IP header for packet classification. DSCP allows for the creation of traffic classes that can be used to assign priorities. While IP precedence is the old way to mark ToS, DSCP is the new way. DSCP is backward compatible with IP precedence.

Layer 3 packet marking with IP precedence and DSCP is the most widely deployed marking option because layer 3 packet markings have end-to-end significance.

Class Selector Class Selector uses the same 3 bits of the field as IP precedence and is used to indicate a 3-bit subset of DSCP values.

Traffic Identifier (TID) TID, used in wireless frames, describes a 3-bit field in the QoS control field in 802.11. It's very similar to CoS, so just remember that CoS is wired Ethernet and TID is wireless.
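
To tie these marking fields to something concrete, here's a minimal Modular QoS CLI (MQC) sketch. The class and policy names are made up for illustration, and matching voice with match protocol rtp assumes NBAR is available in your IOS version:

Router(config)#class-map match-any VOICE
Router(config-cmap)#match protocol rtp
Router(config-cmap)#exit
Router(config)#policy-map MARK-EDGE
Router(config-pmap)#class VOICE
Router(config-pmap-c)#set dscp ef
Router(config-pmap-c)#exit
Router(config-pmap)#class class-default
Router(config-pmap-c)#set dscp default
Router(config-pmap-c)#exit
Router(config-pmap)#exit
Router(config)#interface GigabitEthernet0/1
Router(config-if)#service-policy input MARK-EDGE

The set dscp ef command writes the Expedited Forwarding value (46) into the DS field of the IP header, so the marking travels with the packet end to end, which is exactly why layer 3 marking is preferred over a layer 2 CoS marking that is lost whenever the 802.1Q tag is removed.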

Classification Marking Tools

As discussed in the previous section, classification of traffic determines which type of traffic the packets or frames belong to, which then allows you to apply policies to it by marking, shaping, and policing. Always try to mark traffic as close to the trust boundary as possible.

There are generally three ways to classify traffic:

Markings This looks at header information for existing layer 2 or 3 markings, and classification is based on those existing markings.

Addressing This classification technique looks at header information such as source and destination interfaces, layer 2 and 3 addresses, and layer 4 port numbers. You can group traffic by device using IP addresses and by traffic type using port numbers.

Application signatures This looks at the information in the payload; this classification technique is called deep packet inspection.

Let's dive deeper into deep packet inspection by discussing something called Network Based Application Recognition (NBAR).

NBAR is a classifier that provides deep-packet inspection of layers 4 through 7 of a packet; however, know that using NBAR is the most CPU-intensive technique compared to using addresses (IP addresses or ports) or access control lists (ACLs).

Since it's not always possible to identify applications by looking at just layers 3 and 4, NBAR looks deep into the packet payload and compares the payload content against its signature database, called a Packet Description Language Module (PDLM).

There are two different modes of operation used with NBAR:

Passive mode Using passive mode will give you real-time statistics on applications by protocol or interface as well as the bit rate, packet, and byte counts.

Active mode Classifies applications for traffic marking so that QoS policies can be applied.
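
Here's what those two modes look like in a short sketch; the interface and class names are just examples. Passive mode is simply protocol discovery on an interface:

Router(config)#interface GigabitEthernet0/0
Router(config-if)#ip nbar protocol-discovery

You can then view the per-protocol statistics with the show ip nbar protocol-discovery command. Active mode means using an NBAR match inside a class map so the traffic can actually be marked or policed:

Router(config)#class-map match-any WEB
Router(config-cmap)#match protocol http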

Device Trust

The trust boundary is a point in the network where packet markings (which identify traffic such as voice, video, or data) are not necessarily trusted. You can create, remove, or rewrite markings at that point. The borders of a trust domain are the network locations where packet markings are accepted and acted upon. Figure 4.18 shows some typical trust boundaries.

Illustration shows a trust boundary where IP phones and router interfaces are acted upon.
Figure 4.18 Trust boundaries

The figure shows that IP phones and router interfaces are typically trusted, but interfaces beyond those end points are not. Here are some things you need to remember for the exam objectives:

Untrusted domain This is the part of the network that you are not managing, such as PCs, printers, etc.

Trusted domain This is the part of the network with only administrator-managed devices, such as switches, routers, etc.

Trust boundary This is where packets are classified and marked. For example, the trust boundary would be IP phones and the boundary between the ISP and enterprise network. In an enterprise campus network, the trust boundary is almost always at the edge switch.

Traffic at the trust boundary is classified and marked before being forwarded to the trusted domain. Markings on traffic coming from an untrusted domain are usually ignored to prevent end-user-controlled markings from taking unfair advantage of the network QoS configuration.
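
Here's a hedged sketch of setting a trust boundary on a switch port. The commands shown are from the mls qos family found on many Catalyst access switches; the exact syntax varies by platform and IOS version, and the interface number is just an example:

Switch(config)#mls qos
Switch(config)#interface GigabitEthernet1/0/5
Switch(config-if)#mls qos trust cos
Switch(config-if)#mls qos trust device cisco-phone

With this configuration, the port trusts CoS markings only while a Cisco IP phone is detected on it. If no phone is present, markings coming from the attached PC are ignored and rewritten, which is exactly the behavior you want at the edge of the trusted domain.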

Prioritization

In today's networks, you will find a mix of data, voice, and video traffic. Each traffic type has different properties.

Figure 4.19 shows the traffic characteristics found in today's networks for data, voice, and video.

Figure shows data, voice, and video traffic, with its special characteristics.
Figure 4.19 Traffic characteristics

Voice

Voice traffic is real-time traffic with constant, predictable bandwidth and known packet arrival times.

The following are voice characteristics on a network:

  • Smooth traffic
  • Benign
  • Drop sensitive
  • Delay sensitive
  • UDP priority

One-way voice traffic needs the following:

  • Latency of less than or equal to 150 milliseconds
  • Jitter of less than or equal to 30 milliseconds
  • Loss of less than or equal to 1%
  • Bandwidth of only 30–128 Kbps

Video

There are several types of video traffic, and a lot of the traffic on the Internet today is video traffic, with Netflix, Hulu, etc. Video traffic can include streaming video, real-time interactive video, and video conferences.

One-way video traffic needs the following:

  • Latency of less than or equal to 200–400 milliseconds
  • Jitter of less than or equal to 30–50 milliseconds
  • Loss of less than or equal to 0.1%–1%
  • Bandwidth of 384 Kbps to 20 Mbps or greater

Data

Data traffic is not real-time traffic; it consists of bursty (unpredictable) packet flows with widely varying packet arrival times. The following are data characteristics on a network:

  • Smooth/bursty
  • Benign/greedy
  • Drop insensitive
  • Delay insensitive
  • TCP retransmits

Data traffic doesn't really require special handling in today's network, especially if TCP is used.

Shaping

Policers and shapers are two tools that identify and respond to traffic problems and are both rate limiters. Figure 4.20 shows how they differ.

Illustration shows the difference between policers and shapers as rate limiters. It shows reduced traffic rate after policing and shaping.
Figure 4.20 Policing and shaping rate limiters

Policers and shapers identify traffic violations in a similar manner, but they differ in their response:

Policers Since policers make instant decisions, you want to deploy them on the ingress side if possible. This is because you want to drop traffic as soon as you receive it if it's going to be dropped anyway. Even so, you can still place them on the egress side to control the amount of traffic per class. When the traffic rate is exceeded, policers don't delay traffic, which means they do not introduce jitter or delay; they just check the traffic and can drop it or re-mark it. Just know that this means there's a higher drop probability, and it can cause a significant number of TCP resends.

Shapers Shapers are usually deployed on the egress side, between an enterprise network and the service provider network, to make sure you stay within the carrier's contract rate. If the traffic exceeds that rate, it will get policed by the provider and dropped. Shaping allows the traffic to meet the SLA and means there will be fewer TCP resends than with policers. Be aware, though, that shaping does introduce jitter and delay. A combined sketch of both tools follows.
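
Here's a minimal MQC sketch of both tools; the policy names, interface, and rates are just examples. The shaper smooths egress traffic toward the provider to a 50 Mbps contract rate, while the policer drops ingress traffic that exceeds 10 Mbps:

Router(config)#policy-map SHAPE-TO-ISP
Router(config-pmap)#class class-default
Router(config-pmap-c)#shape average 50000000
Router(config-pmap-c)#exit
Router(config-pmap)#exit
Router(config)#policy-map POLICE-IN
Router(config-pmap)#class class-default
Router(config-pmap-c)#police 10000000 conform-action transmit exceed-action drop
Router(config-pmap-c)#exit
Router(config-pmap)#exit
Router(config)#interface GigabitEthernet0/0
Router(config-if)#service-policy output SHAPE-TO-ISP
Router(config-if)#service-policy input POLICE-IN

Notice that the shaper buffers and delays the excess (adding jitter and delay), while the policer simply transmits, re-marks, or drops traffic on the spot.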

Policing

Policing is covered in the previous section.

Congestion Management

This section and the next section on congestion avoidance will cover congestion issues. If traffic exceeds network resources (always), the traffic gets queued, which is basically the temporary storage of backed-up packets. You perform queuing in order to avoid dropping packets. This isn't a bad thing. It's actually a good thing or all traffic would immediately be dropped if packets couldn't get processed immediately. However, traffic classes like VoIP would actually be better off just being immediately dropped unless you can somehow guarantee delay-free bandwidth for that traffic.

When congestion occurs, the congestion management tools are activated. There are two types, as shown in Figure 4.21.

Figure shows queuing and scheduling in congestion management. It shows packets being assigned to different queues on congestion of output interface.
Figure 4.21 Congestion management

Let's take a closer look at congestion management:

Queuing (or buffering) Buffering is the logic of ordering packets in output buffers. It is activated only when congestion occurs. When queues fill up, packets can be reordered so that the higher-priority packets can be sent out of the exit interface sooner than the lower-priority ones.

Scheduling This is the process of deciding which packet should be sent out next and occurs whether or not there is congestion on the link.

Staying with scheduling for another minute, know that there are some scheduling mechanisms you really need to be familiar with. We'll go over those, and then I'll head back over to a detailed look at queuing:

Strict priority scheduling Low-priority queues are only serviced once the high-priority queues are empty. This is great if you are the one sending high-priority traffic, but it's possible that low-priority queues will never be processed. We call this traffic or queue starvation.

Round-robin scheduling This is a rather fair technique because queues are serviced in a set sequence. You won't have starving queues here, but real-time traffic suffers greatly.

Weighted fair scheduling By weighing the queues, the scheduling process will service some queues more often than others, which is an upgrade over round-robin. You won't have any starvation here either, but unlike with round-robin, you can give priority to real-time traffic. It does not, however, provide bandwidth guarantees.

Okay, let's run back over and finish up queuing. Queuing is typically a layer 3 process, but some queuing can occur at layer 2 or even layer 1. Interestingly, if a layer 2 queue fills up, the data can be pushed into the layer 3 queues, and when the layer 1 queue (called the transmit ring or TX-ring) fills up, the data will be pushed into the layer 2 and 3 queues. This is when QoS becomes active on the device. There are many different queuing mechanisms, with only two typically used today, but let's take a look at the legacy queuing methods first:

First in, first out (FIFO) A single queue with packets being processed in the exact order in which they arrived.

Priority queuing (PQ) This is not really a good queuing method because lower-priority queues are served only when the higher-priority queues are empty. There are only four queues, and low-priority traffic may never be sent.

Custom queuing (CQ) With up to 16 queues and round-robin scheduling, CQ prevents low-priority queue starvation and provides traffic guarantees. But it doesn't provide strict priority for real-time traffic, so your VoIP traffic could end up being dropped.

Weighted fair queuing (WFQ) This was actually a pretty popular way of queuing for a long time because it divided up the bandwidth by the number of flows, which provided bandwidth for all applications. This was great for real-time traffic, but it doesn't offer any guarantees for a particular flow.

Now that you know about all the not-so-good queuing methods to use, let's take a look at the two newer queuing mechanisms that are recommended for today's rich-media networks, detailed in Figure 4.22.

Figure shows class-based weighted fair queuing and low latency queuing as two types of queuing mechanisms.
Figure 4.22 Queuing mechanisms

The two new and improved queuing mechanisms you should now use in today's network are class-based weighted fair queuing and low latency queuing:

Class-based weighted fair queuing (CBWFQ) Provides fairness and bandwidth guarantees for all traffic, but it does not provide latency guarantees and is typically only used for data traffic management.

Low latency queuing (LLQ) LLQ is really the same thing as CBWFQ but with a strict priority queue added for real-time traffic. LLQ is great for networks carrying both data and real-time traffic because it provides both latency and bandwidth guarantees.

In Figure 4.22, you can see the LLQ queuing mechanism, which is suitable for networks with real-time traffic. If you remove the low-latency queue (at the top), you're then left with CBWFQ, which is only used for data-traffic networks.
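
Here's a minimal LLQ/CBWFQ sketch, assuming class maps named VOICE and VIDEO already exist (the names and rates are just examples). The priority command creates the strict-priority low-latency queue, and the bandwidth command creates plain CBWFQ guarantees for the other classes:

Router(config)#policy-map WAN-EDGE
Router(config-pmap)#class VOICE
Router(config-pmap-c)#priority 512
Router(config-pmap-c)#exit
Router(config-pmap)#class VIDEO
Router(config-pmap-c)#bandwidth 2000
Router(config-pmap-c)#exit
Router(config-pmap)#class class-default
Router(config-pmap-c)#fair-queue
Router(config-pmap-c)#exit
Router(config-pmap)#exit
Router(config)#interface Serial0/0/0
Router(config-if)#service-policy output WAN-EDGE

Remove the priority statement and you're left with plain CBWFQ, which is exactly the relationship Figure 4.22 illustrates.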

Tools for Congestion Avoidance

TCP changed our networking world when it introduced sliding windows as a flow-control mechanism in the mid-1990s. Flow control is a way for the receiving device to control the amount of traffic from a transmitting device.

If a problem occurred during a data transmission (always), the flow control methods used before sliding windows by TCP and other layer 4 protocols like SPX would cut the transmission rate in half and leave it there, at the same rate or lower, for the duration of the connection. This was certainly a point of contention with users!

TCP actually does cut transmission rates drastically if a flow control issue occurs, but it increases the transmission rate once the missing segments are resolved or the packets are finally processed. Because of this behavior, and although it was awesome at the time, this method can result in what we call tail drop. Tail drop is definitely suboptimal for today's networks because using it, we're not utilizing the bandwidth effectively.

Just to clarify, tail drop refers to the dropping of packets as they arrive when the queues on the receiving interface are full. This is a waste of precious bandwidth since TCP will just keep resending the data until it's happy again (meaning an ACK has been received). So now this brings up another new term, TCP global synchronization, where senders will reduce their transmission rate at the same time when packet loss occurs.

Congestion avoidance starts dropping packets before a queue fills, and it drops packets by using traffic weights instead of pure randomness. Cisco uses something called weighted random early detection (WRED), a queuing method that ensures high-precedence traffic has lower loss rates than other traffic during congestion. This allows more important traffic, like VoIP, to be prioritized, while less important traffic, such as a connection to Facebook, gets dropped first.

Figure 4.23 demonstrates how congestion avoidance works.

Figure shows the working of congestion avoidance. It shows three traffic flows start at different times and synchronizes in waves, resulting in 100 percent bandwidth utilization.
Figure 4.23 Congestion avoidance

If three traffic flows begin at different times, as shown in the example in Figure 4.23, and congestion occurs, TCP without congestion avoidance could first cause tail drop, which drops traffic as soon as it's received if the buffers are full. The affected senders would then back off and ramp up again at roughly the same time, synchronizing the TCP flows in waves and leaving much of the bandwidth unused. Congestion avoidance prevents this by dropping a few packets from selected flows early, before the queues overflow.
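
WRED itself is turned on per class inside the same kind of policy map we've been building. Here's a minimal sketch (the policy name is just an example); DSCP-based WRED starts dropping low-precedence packets earlier than high-precedence ones as the queue fills:

Router(config)#policy-map WAN-EDGE
Router(config-pmap)#class class-default
Router(config-pmap-c)#fair-queue
Router(config-pmap-c)#random-detect dscp-based

Because drops start early and hit only a few flows at a time, the senders don't all back off together, which is how WRED avoids the global synchronization problem described above.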

Exam Essentials

Have a deep understanding of QoS. You must understand QoS in detail, specifically marking; device trust; prioritization for voice, video, and data; shaping; policing; and congestion management.

Review Questions

You can find the answers in the Appendix.

  1. Which command will display the CHAP authentication process as it occurs between two routers in the network?
    1. show chap authentication
    2. show interface serial 0
    3. debug ppp authentication
    4. debug chap authentication
  2. Which of the following are true regarding the following command? (Choose two.)

     R1(config-router)# neighbor 10.10.200.1 remote-as 6200

    1. The local router R1 uses AS 6200.
    2. The remote router uses AS 6200.
    3. The local interface of R1 is 10.10.200.1.
    4. The neighbor IP address is 10.10.200.1.
    5. The neighbor's loopback interface is 10.10.200.1.
  3. BGP uses which Transport layer protocol and port number?
    1. UDP/123
    2. TCP/123
    3. UDP/179
    4. TCP/179
    5. UDP/169
    6. TCP/169
  4. Which command can you use to know the hold time on the two BGP peers?
    1. show ip bgp
    2. show ip bgp summary
    3. show ip bgp all
    4. show ip bgp neighbor
  5. What does a next hop of 0.0.0.0 mean in the show ip bgp command output?

           Network          Next Hop            Metric LocPrf Weight Path

     *> 10.1.1.0/24      0.0.0.0                  0         32768 ?

     *> 10.13.13.0/24    0.0.0.0                  0         32768 ?

    1. The router does not know the next hop.
    2. The network is locally originated via the network command in BGP.
    3. It is not a valid network.
    4. The next hop is not reachable.
  6. Which two of the following are GRE characteristics? (Choose two.)
    1. GRE encapsulation uses a protocol-type field in the GRE header to support the encapsulation of any OSI layer 3 protocol.
    2. GRE itself is stateful. It includes flow-control mechanisms by default.
    3. GRE includes strong security mechanisms to protect its payload.
    4. The GRE header, together with the tunneling IP header, creates at least 24 bytes of additional overhead for tunneled packets.
  7. A GRE tunnel is flapping with the following error message:
       

    07:11:49: %LINEPROTO-5-UPDOWN:

              Line protocol on Interface Tunnel0, changed state to up

    07:11:55: %TUN-5-RECURDOWN:

              Tunnel0 temporarily disabled due to recursive routing

    07:11:59: %LINEPROTO-5-UPDOWN:

              Line protocol on Interface Tunnel0, changed state to down

    07:12:59: %LINEPROTO-5-UPDOWN:

    What could be the reason for the tunnel flapping?

    1. IP routing has not been enabled on the tunnel interface.
    2. There's an MTU issue on the tunnel interface.
    3. The router is trying to route to the tunnel destination address using the tunnel interface itself.
    4. An access list is blocking traffic on the tunnel interface.
  8. Which of the following commands will not tell you if the GRE tunnel 0 is in up/up state?
    1. show ip interface brief
    2. show interface tunnel 0
    3. show ip interface tunnel 0
    4. show run interface tunnel 0
  9. Which of the following PPP authentication protocols authenticates a device on the other end of a link with an encrypted password?
    1. MD5
    2. PAP
    3. CHAP
    4. DES
  10. Which of the following encapsulates PPP frames in Ethernet frames and uses common PPP features like authentication, encryption, and compression?
    1. PPP
    2. PPPoA
    3. PPPoE
    4. Token Ring